status | repo_name | repo_url | issue_id | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime | updated_file | file_content
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,710 |
Unexpected Exception ... 'Block' object has no attribute 'get_search_path'
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
I get an `ERROR! Unexpected Exception, this is probably a bug: 'Block' object has no attribute 'get_search_path'`. Triggering the exception seems to require both an `apply` on `include_tasks` and the `playbook_vars_root` configuration option being set.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
Core
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.2
config file = ./ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/virtualenv/py3/ansible_stable/lib/python3.7/site-packages/ansible
executable location = /usr/local/virtualenv/py3/ansible_stable/bin/ansible
python version = 3.7.3 (default, Apr 3 2019, 05:39:12) [GCC 8.3.0]
```
##### CONFIGURATION
PLAYBOOK_VARS_ROOT(./ansible.cfg) = all
##### OS / ENVIRONMENT
Debian 10, with the latest version of Ansible installed via pip into a venv.
##### STEPS TO REPRODUCE
ansible-playbook example.yml -vvv 2>&1
ansible-playbook 2.9.2
config file = ./ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/virtualenv/py3/ansible_stable/lib/python3.7/site-packages/ansible
executable location = /usr/local/virtualenv/py3/ansible_stable/bin/ansible-playbook
python version = 3.7.3 (default, Apr 3 2019, 05:39:12) [GCC 8.3.0]
Using ./ansible.cfg as config file
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match
'all'
PLAYBOOK: example.yml *************************************************************************************************
1 plays in example.yml
PLAY [localhost] ******************************************************************************************************
META: ran handlers
TASK [ssh_client : No exception] **************************************************************************************
task path: ./roles/ssh_client/tasks/main.yml:2
included: ./roles/ssh_client/tasks/clean_known_hosts.yml for localhost
TASK [ssh_client : Read /etc/ssh/ssh_known_hosts] *********************************************************************
task path: ./roles/ssh_client/tasks/clean_known_hosts.yml:2
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1576004604.8780084-124655683624288 `" && echo ansible-tmp-1576004604.8780084-124655683624288="` echo /root/.ansible/tmp/ansible-tmp-1576004604.8780084-124655683624288 `" ) && sleep 0'
Using module file /usr/local/virtualenv/py3/ansible_stable/lib/python3.7/site-packages/ansible/modules/net_tools/basics/slurp.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-24449ljfccz75/tmpdopvwfyq TO /root/.ansible/tmp/ansible-tmp-1576004604.8780084-124655683624288/AnsiballZ_slurp.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1576004604.8780084-124655683624288/ /root/.ansible/tmp/ansible-tmp-1576004604.8780084-124655683624288/AnsiballZ_slurp.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/local/virtualenv/py3/ansible_stable/bin/python3 /root/.ansible/tmp/ansible-tmp-1576004604.8780084-124655683624288/AnsiballZ_slurp.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1576004604.8780084-124655683624288/ > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"src": "/etc/ssh/ssh_known_hosts"
}
},
"msg": "file not found: /etc/ssh/ssh_known_hosts"
}
...ignoring
TASK [ssh_client : debug] *********************************************************************************************
task path: ./roles/ssh_client/tasks/clean_known_hosts.yml:10
skipping: [localhost] => {}
TASK [ssh_client : Unexpected Exception] ******************************************************************************
task path: ./roles/ssh_client/tasks/main.yml:8
included: ./roles/ssh_client/tasks/clean_known_hosts.yml for localhost
ERROR! Unexpected Exception, this is probably a bug: 'Block' object has no attribute 'get_search_path'
the full traceback was:
Traceback (most recent call last):
File "/usr/local/virtualenv/py3/ansible_stable/bin/ansible-playbook", line 123, in <module>
exit_code = cli.run()
File "/usr/local/virtualenv/py3/ansible_stable/lib/python3.7/site-packages/ansible/cli/playbook.py", line 127, in run
results = pbex.run()
File "/usr/local/virtualenv/py3/ansible_stable/lib/python3.7/site-packages/ansible/executor/playbook_executor.py", line 169, in run
result = self._tqm.run(play=play)
File "/usr/local/virtualenv/py3/ansible_stable/lib/python3.7/site-packages/ansible/executor/task_queue_manager.py", line 240, in run
play_return = strategy.run(iterator, play_context)
File "/usr/local/virtualenv/py3/ansible_stable/lib/python3.7/site-packages/ansible/plugins/strategy/linear.py", line 367, in run
_hosts_all=self._hosts_cache_all,
File "/usr/local/virtualenv/py3/ansible_stable/lib/python3.7/site-packages/ansible/vars/manager.py", line 195, in get_vars
basedirs = task.get_search_path()
AttributeError: 'Block' object has no attribute 'get_search_path'
./ansible.cfg
[defaults]
playbook_vars_root = all
./example.yml
---
- hosts: localhost
gather_facts: no
roles:
- ssh_client
./roles/ssh_client/tasks/main.yml
---
- name: No exception
include_tasks:
file: clean_known_hosts.yml
vars:
known_hosts_file: /etc/ssh/ssh_known_hosts
- name: Unexpected Exception
include_tasks:
file: clean_known_hosts.yml
apply:
become: yes
vars:
known_hosts_file: /etc/ssh/ssh_known_hosts
./roles/ssh_client/tasks/clean_known_hosts.yml
---
- name: Read {{ known_hosts_file }}
slurp:
src: "{{ known_hosts_file }}"
register: current_known_hosts
ignore_errors: yes
- when: current_known_hosts is not failed
block:
- debug:
##### EXPECTED RESULTS
No `ERROR! Unexpected Exception`
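The traceback shows `VariableManager.get_vars()` (`vars/manager.py`, line 195) calling `task.get_search_path()` on what is actually a `Block`, which does not implement that method. Below is a minimal sketch of the kind of defensive guard that would avoid the `AttributeError`, assuming `_parent` is the attribute linking a `Block` or `Task` to its parent; this is an illustration only, not the code merged in PR 72378:

```python
# Hypothetical sketch -- names and fallback behavior are assumptions,
# not the actual fix from ansible/ansible#72378.
def safe_get_search_path(obj):
    """Return the search path of obj, or of its nearest ancestor that has one."""
    while obj is not None:
        get_sp = getattr(obj, 'get_search_path', None)
        if callable(get_sp):
            return get_sp()
        obj = getattr(obj, '_parent', None)  # climb the Block/Task parent chain
    return []  # no searchable ancestor; fall back to an empty path
```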
|
https://github.com/ansible/ansible/issues/65710
|
https://github.com/ansible/ansible/pull/72378
|
a51a6f4a259b45593c3f803737c6d5d847258a83
|
e73a0b2460b41c27fd22d286dd2f4407f69f12ed
| 2019-12-10T19:14:28Z |
python
| 2020-10-29T19:15:18Z |
test/integration/targets/include_import/apply/include_apply_65710.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,710 |
Unexpected Exception ... 'Block' object has no attribute 'get_search_path'
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
I get an `ERROR! Unexpected Exception, this is probably a bug: 'Block' object has no attribute 'get_search_path'`. Triggering the exception seems to require both an `apply` on `include_tasks` and the `playbook_vars_root` configuration option being set.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
Core
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.2
config file = ./ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/virtualenv/py3/ansible_stable/lib/python3.7/site-packages/ansible
executable location = /usr/local/virtualenv/py3/ansible_stable/bin/ansible
python version = 3.7.3 (default, Apr 3 2019, 05:39:12) [GCC 8.3.0]
```
##### CONFIGURATION
PLAYBOOK_VARS_ROOT(./ansible.cfg) = all
##### OS / ENVIRONMENT
Debian 10, with the latest version of Ansible installed via pip into a venv.
##### STEPS TO REPRODUCE
ansible-playbook example.yml -vvv 2>&1
ansible-playbook 2.9.2
config file = ./ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/virtualenv/py3/ansible_stable/lib/python3.7/site-packages/ansible
executable location = /usr/local/virtualenv/py3/ansible_stable/bin/ansible-playbook
python version = 3.7.3 (default, Apr 3 2019, 05:39:12) [GCC 8.3.0]
Using ./ansible.cfg as config file
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match
'all'
PLAYBOOK: example.yml *************************************************************************************************
1 plays in example.yml
PLAY [localhost] ******************************************************************************************************
META: ran handlers
TASK [ssh_client : No exception] **************************************************************************************
task path: ./roles/ssh_client/tasks/main.yml:2
included: ./roles/ssh_client/tasks/clean_known_hosts.yml for localhost
TASK [ssh_client : Read /etc/ssh/ssh_known_hosts] *********************************************************************
task path: ./roles/ssh_client/tasks/clean_known_hosts.yml:2
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1576004604.8780084-124655683624288 `" && echo ansible-tmp-1576004604.8780084-124655683624288="` echo /root/.ansible/tmp/ansible-tmp-1576004604.8780084-124655683624288 `" ) && sleep 0'
Using module file /usr/local/virtualenv/py3/ansible_stable/lib/python3.7/site-packages/ansible/modules/net_tools/basics/slurp.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-24449ljfccz75/tmpdopvwfyq TO /root/.ansible/tmp/ansible-tmp-1576004604.8780084-124655683624288/AnsiballZ_slurp.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1576004604.8780084-124655683624288/ /root/.ansible/tmp/ansible-tmp-1576004604.8780084-124655683624288/AnsiballZ_slurp.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/local/virtualenv/py3/ansible_stable/bin/python3 /root/.ansible/tmp/ansible-tmp-1576004604.8780084-124655683624288/AnsiballZ_slurp.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1576004604.8780084-124655683624288/ > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"src": "/etc/ssh/ssh_known_hosts"
}
},
"msg": "file not found: /etc/ssh/ssh_known_hosts"
}
...ignoring
TASK [ssh_client : debug] *********************************************************************************************
task path: ./roles/ssh_client/tasks/clean_known_hosts.yml:10
skipping: [localhost] => {}
TASK [ssh_client : Unexpected Exception] ******************************************************************************
task path: ./roles/ssh_client/tasks/main.yml:8
included: ./roles/ssh_client/tasks/clean_known_hosts.yml for localhost
ERROR! Unexpected Exception, this is probably a bug: 'Block' object has no attribute 'get_search_path'
the full traceback was:
Traceback (most recent call last):
File "/usr/local/virtualenv/py3/ansible_stable/bin/ansible-playbook", line 123, in <module>
exit_code = cli.run()
File "/usr/local/virtualenv/py3/ansible_stable/lib/python3.7/site-packages/ansible/cli/playbook.py", line 127, in run
results = pbex.run()
File "/usr/local/virtualenv/py3/ansible_stable/lib/python3.7/site-packages/ansible/executor/playbook_executor.py", line 169, in run
result = self._tqm.run(play=play)
File "/usr/local/virtualenv/py3/ansible_stable/lib/python3.7/site-packages/ansible/executor/task_queue_manager.py", line 240, in run
play_return = strategy.run(iterator, play_context)
File "/usr/local/virtualenv/py3/ansible_stable/lib/python3.7/site-packages/ansible/plugins/strategy/linear.py", line 367, in run
_hosts_all=self._hosts_cache_all,
File "/usr/local/virtualenv/py3/ansible_stable/lib/python3.7/site-packages/ansible/vars/manager.py", line 195, in get_vars
basedirs = task.get_search_path()
AttributeError: 'Block' object has no attribute 'get_search_path'
./ansible.cfg
[defaults]
playbook_vars_root = all
./example.yml
---
- hosts: localhost
gather_facts: no
roles:
- ssh_client
./roles/ssh_client/tasks/main.yml
---
- name: No exception
include_tasks:
file: clean_known_hosts.yml
vars:
known_hosts_file: /etc/ssh/ssh_known_hosts
- name: Unexpected Exception
include_tasks:
file: clean_known_hosts.yml
apply:
become: yes
vars:
known_hosts_file: /etc/ssh/ssh_known_hosts
./roles/ssh_client/tasks/clean_known_hosts.yml
---
- name: Read {{ known_hosts_file }}
slurp:
src: "{{ known_hosts_file }}"
register: current_known_hosts
ignore_errors: yes
- when: current_known_hosts is not failed
block:
- debug:
##### EXPECTED RESULTS
No `ERROR! Unexpected Exception`
|
https://github.com/ansible/ansible/issues/65710
|
https://github.com/ansible/ansible/pull/72378
|
a51a6f4a259b45593c3f803737c6d5d847258a83
|
e73a0b2460b41c27fd22d286dd2f4407f69f12ed
| 2019-12-10T19:14:28Z |
python
| 2020-10-29T19:15:18Z |
test/integration/targets/include_import/runme.sh
|
#!/usr/bin/env bash
set -eux
export ANSIBLE_ROLES_PATH=./roles
function gen_task_files() {
for i in $(printf "%03d " {1..39}); do
echo -e "- name: Hello Message\n debug:\n msg: Task file ${i}" > "tasks/hello/tasks-file-${i}.yml"
done
}
## Adhoc
ansible -m include_role -a name=role1 localhost
## Import (static)
# Playbook
test "$(ansible-playbook -i ../../inventory playbook/test_import_playbook.yml "$@" 2>&1 | grep -c '\[WARNING\]: Additional parameters in import_playbook')" = 1
ANSIBLE_STRATEGY='linear' ansible-playbook playbook/test_import_playbook_tags.yml -i inventory "$@" --tags canary1,canary22,validate --skip-tags skipme
# Tasks
ANSIBLE_STRATEGY='linear' ansible-playbook tasks/test_import_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook tasks/test_import_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook tasks/test_import_tasks_tags.yml -i inventory "$@" --tags tasks1,canary1,validate
# Role
ANSIBLE_STRATEGY='linear' ansible-playbook role/test_import_role.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook role/test_import_role.yml -i inventory "$@"
## Include (dynamic)
# Tasks
ANSIBLE_STRATEGY='linear' ansible-playbook tasks/test_include_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook tasks/test_include_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook tasks/test_include_tasks_tags.yml -i inventory "$@" --tags tasks1,canary1,validate
# Role
ANSIBLE_STRATEGY='linear' ansible-playbook role/test_include_role.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook role/test_include_role.yml -i inventory "$@"
# https://github.com/ansible/ansible/issues/68515
ansible-playbook -v role/test_include_role_vars_from.yml 2>&1 | tee test_include_role_vars_from.out
test "$(grep -E -c 'Expected a string for vars_from but got' test_include_role_vars_from.out)" = 1
## Max Recursion Depth
# https://github.com/ansible/ansible/issues/23609
ANSIBLE_STRATEGY='linear' ansible-playbook test_role_recursion.yml -i inventory "$@"
## Nested tasks
# https://github.com/ansible/ansible/issues/34782
ANSIBLE_STRATEGY='linear' ansible-playbook test_nested_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook test_nested_tasks.yml -i inventory "$@"
## Tons of top level include_tasks
# https://github.com/ansible/ansible/issues/36053
# Fixed by https://github.com/ansible/ansible/pull/36075
gen_task_files
ANSIBLE_STRATEGY='linear' ansible-playbook test_copious_include_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook test_copious_include_tasks.yml -i inventory "$@"
rm -f tasks/hello/*.yml
# Included tasks should inherit attrs from non-dynamic blocks in parent chain
# https://github.com/ansible/ansible/pull/38827
ANSIBLE_STRATEGY='linear' ansible-playbook test_grandparent_inheritance.yml -i inventory "$@"
# undefined_var
ANSIBLE_STRATEGY='linear' ansible-playbook undefined_var/playbook.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook undefined_var/playbook.yml -i inventory "$@"
# include_ + apply (explicit inheritance)
ANSIBLE_STRATEGY='linear' ansible-playbook apply/include_apply.yml -i inventory "$@" --tags foo
set +e
OUT=$(ANSIBLE_STRATEGY='linear' ansible-playbook apply/import_apply.yml -i inventory "$@" --tags foo 2>&1 | grep 'ERROR! Invalid options for import_tasks: apply')
set -e
if [[ -z "$OUT" ]]; then
echo "apply on import_tasks did not cause error"
exit 1
fi
# Test that duplicate items in loop are not deduped
ANSIBLE_STRATEGY='linear' ansible-playbook tasks/test_include_dupe_loop.yml -i inventory "$@" | tee test_include_dupe_loop.out
test "$(grep -c '"item=foo"' test_include_dupe_loop.out)" = 3
ANSIBLE_STRATEGY='free' ansible-playbook tasks/test_include_dupe_loop.yml -i inventory "$@" | tee test_include_dupe_loop.out
test "$(grep -c '"item=foo"' test_include_dupe_loop.out)" = 3
ansible-playbook public_exposure/playbook.yml -i inventory "$@"
ansible-playbook public_exposure/no_bleeding.yml -i inventory "$@"
ansible-playbook public_exposure/no_overwrite_roles.yml -i inventory "$@"
# https://github.com/ansible/ansible/pull/48068
ANSIBLE_HOST_PATTERN_MISMATCH=warning ansible-playbook run_once/playbook.yml "$@"
# https://github.com/ansible/ansible/issues/48936
ansible-playbook -v handler_addressing/playbook.yml 2>&1 | tee test_handler_addressing.out
test "$(grep -E -c 'include handler task|ERROR! The requested handler '"'"'do_import'"'"' was not found' test_handler_addressing.out)" = 2
# https://github.com/ansible/ansible/issues/49969
ansible-playbook -v parent_templating/playbook.yml 2>&1 | tee test_parent_templating.out
test "$(grep -E -c 'Templating the path of the parent include_tasks failed.' test_parent_templating.out)" = 0
# https://github.com/ansible/ansible/issues/54618
ansible-playbook test_loop_var_bleed.yaml "$@"
# https://github.com/ansible/ansible/issues/56580
ansible-playbook valid_include_keywords/playbook.yml "$@"
# https://github.com/ansible/ansible/issues/64902
ansible-playbook tasks/test_allow_single_role_dup.yml 2>&1 | tee test_allow_single_role_dup.out
test "$(grep -c 'ok=3' test_allow_single_role_dup.out)" = 1
# https://github.com/ansible/ansible/issues/66764
ANSIBLE_HOST_PATTERN_MISMATCH=error ansible-playbook empty_group_warning/playbook.yml
ansible-playbook test_include_loop.yml "$@"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,322 |
ZombieProcess stack trace in wait_for integration test on macOS
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The `wait_for` integration test, when run on macOS hosts in combination with some other tests, results in a stack trace due to a zombie process. This appears in the `devel` nightlies and occasionally in PRs. I am unable to duplicate this issue when running only the `wait_for` test, so it seems to be a test interaction issue. [Example test failure](https://app.shippable.com/github/ansible/ansible/runs/175420/17/tests).
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
`test/integration/targets/wait_for`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
devel
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
Shippable
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Shippable
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Run the nightly integration test.
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Tests pass.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Test fails with a stack trace.
<!--- Paste verbatim command output between quotes -->
```paste below
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/psutil/_psosx.py", line 363, in catch_zombie
yield
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/psutil/_psosx.py", line 532, in connections
rawlist = cext.proc_connections(self.pid, families, types)
ProcessLookupError: [Errno 3] No such process (originated from proc_pidinfo())
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/root/.ansible/tmp/ansible-tmp-1603438181.978554-24193-108782891618547/AnsiballZ_wait_for.py", line 140, in <module>
_ansiballz_main()
File "/var/root/.ansible/tmp/ansible-tmp-1603438181.978554-24193-108782891618547/AnsiballZ_wait_for.py", line 132, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/var/root/.ansible/tmp/ansible-tmp-1603438181.978554-24193-108782891618547/AnsiballZ_wait_for.py", line 81, in invoke_module
runpy.run_module(mod_name='ansible.modules.wait_for', init_globals=None, run_name='__main__', alter_sys=True)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py", line 207, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/tmp/ansible_wait_for_payload_dp2d351v/ansible_wait_for_payload.zip/ansible/modules/wait_for.py", line 667, in <module>
File "/tmp/ansible_wait_for_payload_dp2d351v/ansible_wait_for_payload.zip/ansible/modules/wait_for.py", line 651, in main
File "/tmp/ansible_wait_for_payload_dp2d351v/ansible_wait_for_payload.zip/ansible/modules/wait_for.py", line 295, in get_active_connections_count
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/psutil/__init__.py", line 1182, in connections
return self._proc.connections(kind)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/psutil/_psosx.py", line 342, in wrapper
return fun(self, *args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/psutil/_psosx.py", line 532, in connections
rawlist = cext.proc_connections(self.pid, families, types)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/contextlib.py", line 131, in __exit__
self.gen.throw(type, value, traceback)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/psutil/_psosx.py", line 374, in catch_zombie
raise ZombieProcess(proc.pid, proc._name, proc._ppid)
psutil.ZombieProcess: psutil.ZombieProcess process still exists but it's a zombie (pid=23005)
```
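The fix linked below (PR 72401) handles this failure mode in the module. As a hedged sketch of the general pattern rather than the merged patch itself, a psutil-based counting loop can tolerate processes that exit or turn into zombies between `process_iter()` and `connections()` by catching `psutil.Error`, the common base class of `NoSuchProcess`, `ZombieProcess`, and `AccessDenied`:

```python
# Sketch under the assumption that a vanished or zombie process holds
# no countable connections; not the exact code merged in PR 72401.
import psutil

def count_inet_connections():
    """Count inet connections, skipping processes that vanish mid-scan."""
    count = 0
    for proc in psutil.process_iter():
        try:
            conns = proc.connections(kind='inet')
        except psutil.Error:
            # NoSuchProcess / ZombieProcess / AccessDenied: the process is
            # gone or unreadable, so skip it instead of crashing.
            continue
        count += len(conns)
    return count
```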
|
https://github.com/ansible/ansible/issues/72322
|
https://github.com/ansible/ansible/pull/72401
|
e73a0b2460b41c27fd22d286dd2f4407f69f12ed
|
fb09fd2a2301db485f5e15f77b59c31e7bc1645a
| 2020-10-23T18:43:36Z |
python
| 2020-10-30T01:40:31Z |
changelogs/fragments/72322-wait-for-handle-errors.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,322 |
ZombieProcess stack trace in wait_for integration test on macOS
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The `wait_for` integration test, when run on macOS hosts in combination with some other tests, results in a stack trace due to a zombie process. This appears in the `devel` nightlies and occasionally in PRs. I am unable to duplicate this issue when running only the `wait_for` test, so it seems to be a test interaction issue. [Example test failure](https://app.shippable.com/github/ansible/ansible/runs/175420/17/tests).
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
`test/integration/targets/wait_for`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
devel
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
Shippable
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Shippable
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Run the nightly integration test.
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Tests pass.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Test fails with a stack trace.
<!--- Paste verbatim command output between quotes -->
```paste below
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/psutil/_psosx.py", line 363, in catch_zombie
yield
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/psutil/_psosx.py", line 532, in connections
rawlist = cext.proc_connections(self.pid, families, types)
ProcessLookupError: [Errno 3] No such process (originated from proc_pidinfo())
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/root/.ansible/tmp/ansible-tmp-1603438181.978554-24193-108782891618547/AnsiballZ_wait_for.py", line 140, in <module>
_ansiballz_main()
File "/var/root/.ansible/tmp/ansible-tmp-1603438181.978554-24193-108782891618547/AnsiballZ_wait_for.py", line 132, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/var/root/.ansible/tmp/ansible-tmp-1603438181.978554-24193-108782891618547/AnsiballZ_wait_for.py", line 81, in invoke_module
runpy.run_module(mod_name='ansible.modules.wait_for', init_globals=None, run_name='__main__', alter_sys=True)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py", line 207, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/tmp/ansible_wait_for_payload_dp2d351v/ansible_wait_for_payload.zip/ansible/modules/wait_for.py", line 667, in <module>
File "/tmp/ansible_wait_for_payload_dp2d351v/ansible_wait_for_payload.zip/ansible/modules/wait_for.py", line 651, in main
File "/tmp/ansible_wait_for_payload_dp2d351v/ansible_wait_for_payload.zip/ansible/modules/wait_for.py", line 295, in get_active_connections_count
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/psutil/__init__.py", line 1182, in connections
return self._proc.connections(kind)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/psutil/_psosx.py", line 342, in wrapper
return fun(self, *args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/psutil/_psosx.py", line 532, in connections
rawlist = cext.proc_connections(self.pid, families, types)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/contextlib.py", line 131, in __exit__
self.gen.throw(type, value, traceback)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/psutil/_psosx.py", line 374, in catch_zombie
raise ZombieProcess(proc.pid, proc._name, proc._ppid)
psutil.ZombieProcess: psutil.ZombieProcess process still exists but it's a zombie (pid=23005)
```
|
https://github.com/ansible/ansible/issues/72322
|
https://github.com/ansible/ansible/pull/72401
|
e73a0b2460b41c27fd22d286dd2f4407f69f12ed
|
fb09fd2a2301db485f5e15f77b59c31e7bc1645a
| 2020-10-23T18:43:36Z |
python
| 2020-10-30T01:40:31Z |
lib/ansible/modules/wait_for.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Jeroen Hoekx <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: wait_for
short_description: Waits for a condition before continuing
description:
- You can wait for a set amount of time C(timeout); this is the default if nothing or just C(timeout) is specified.
This does not produce an error.
- Waiting for a port to become available is useful when services are not immediately available after their init scripts return,
which is true of certain Java application servers.
- It is also useful when starting guests with the M(community.libvirt.virt) module and needing to pause until they are ready.
- This module can also be used to wait for a string, matched by a regex, to be present in a file.
- In Ansible 1.6 and later, this module can also be used to wait for a file to be available or
absent on the filesystem.
- In Ansible 1.8 and later, this module can also be used to wait for active connections to be closed before continuing, useful if a node
is being rotated out of a load balancer pool.
- For Windows targets, use the M(ansible.windows.win_wait_for) module instead.
version_added: "0.7"
options:
host:
description:
- A resolvable hostname or IP address to wait for.
type: str
default: 127.0.0.1
timeout:
description:
- Maximum number of seconds to wait for; when used with another condition it will force an error.
- When used without other conditions it is equivalent to just sleeping.
type: int
default: 300
connect_timeout:
description:
- Maximum number of seconds to wait for a connection to happen before closing and retrying.
type: int
default: 5
delay:
description:
- Number of seconds to wait before starting to poll.
type: int
default: 0
port:
description:
- Port number to poll.
- C(path) and C(port) are mutually exclusive parameters.
type: int
active_connection_states:
description:
- The list of TCP connection states which are counted as active connections.
type: list
elements: str
default: [ ESTABLISHED, FIN_WAIT1, FIN_WAIT2, SYN_RECV, SYN_SENT, TIME_WAIT ]
version_added: "2.3"
state:
description:
- Either C(absent), C(drained), C(present), C(started), or C(stopped).
- When checking a port, C(started) will ensure the port is open, C(stopped) will check that it is closed, and C(drained) will check for active connections.
- When checking for a file or a search string, C(present) or C(started) will ensure that the file or string is present before continuing,
while C(absent) will check that the file is absent or removed.
type: str
choices: [ absent, drained, present, started, stopped ]
default: started
path:
description:
- Path to a file on the filesystem that must exist before continuing.
- C(path) and C(port) are mutually exclusive parameters.
type: path
version_added: "1.4"
search_regex:
description:
- Can be used to match a string in either a file or a socket connection.
- Defaults to a multiline regex.
type: str
version_added: "1.4"
exclude_hosts:
description:
- List of hosts or IPs to ignore when looking for active TCP connections for C(drained) state.
type: list
elements: str
version_added: "1.8"
sleep:
description:
- Number of seconds to sleep between checks.
- Before Ansible 2.3 this was hardcoded to 1 second.
type: int
default: 1
version_added: "2.3"
msg:
description:
- This overrides the normal error message from a failure to meet the required conditions.
type: str
version_added: "2.4"
notes:
- The ability to use search_regex with a port connection was added in Ansible 1.7.
- Prior to Ansible 2.4, testing for the absence of a directory or UNIX socket did not work correctly.
- Prior to Ansible 2.4, testing for the presence of a file did not work correctly if the remote user did not have read access to that file.
- Under some circumstances when using mandatory access control, a path may always be treated as being absent even if it exists, but
can't be modified or created by the remote user either.
- When waiting for a path, symbolic links will be followed. Many other modules that manipulate files do not follow symbolic links,
so operations on the path using other modules may not work exactly as expected.
seealso:
- module: ansible.builtin.wait_for_connection
- module: ansible.windows.win_wait_for
- module: community.windows.win_wait_for_process
author:
- Jeroen Hoekx (@jhoekx)
- John Jarvis (@jarv)
- Andrii Radyk (@AnderEnder)
'''
EXAMPLES = r'''
- name: Sleep for 300 seconds and continue with play
wait_for:
timeout: 300
delegate_to: localhost
- name: Wait for port 8000 to become open on the host, don't start checking for 10 seconds
wait_for:
port: 8000
delay: 10
- name: Waits for port 8000 of any IP to close active connections, don't start checking for 10 seconds
wait_for:
host: 0.0.0.0
port: 8000
delay: 10
state: drained
- name: Wait for port 8000 of any IP to close active connections, ignoring connections for specified hosts
wait_for:
host: 0.0.0.0
port: 8000
state: drained
exclude_hosts: 10.2.1.2,10.2.1.3
- name: Wait until the file /tmp/foo is present before continuing
wait_for:
path: /tmp/foo
- name: Wait until the string "completed" is in the file /tmp/foo before continuing
wait_for:
path: /tmp/foo
search_regex: completed
- name: Wait until regex pattern matches in the file /tmp/foo and print the matched group
wait_for:
path: /tmp/foo
search_regex: completed (?P<task>\w+)
register: waitfor
- debug:
msg: Completed {{ waitfor['match_groupdict']['task'] }}
- name: Wait until the lock file is removed
wait_for:
path: /var/lock/file.lock
state: absent
- name: Wait until the process is finished and pid was destroyed
wait_for:
path: /proc/3466/status
state: absent
- name: Output customized message when failed
wait_for:
path: /tmp/foo
state: present
msg: Timeout to find file /tmp/foo
# Do not assume the inventory_hostname is resolvable and delay 10 seconds at start
- name: Wait 300 seconds for port 22 to become open and contain "OpenSSH"
wait_for:
port: 22
host: '{{ (ansible_ssh_host|default(ansible_host))|default(inventory_hostname) }}'
search_regex: OpenSSH
delay: 10
connection: local
# Same as above but you normally have ansible_connection set in inventory, which overrides 'connection'
- name: Wait 300 seconds for port 22 to become open and contain "OpenSSH"
wait_for:
port: 22
host: '{{ (ansible_ssh_host|default(ansible_host))|default(inventory_hostname) }}'
search_regex: OpenSSH
delay: 10
vars:
ansible_connection: local
'''
RETURN = r'''
elapsed:
description: The number of seconds that elapsed while waiting
returned: always
type: int
sample: 23
match_groups:
description: Tuple containing all the subgroups of the match as returned by U(https://docs.python.org/2/library/re.html#re.MatchObject.groups)
returned: always
type: list
sample: ['match 1', 'match 2']
match_groupdict:
description: Dictionary containing all the named subgroups of the match, keyed by the subgroup name,
as returned by U(https://docs.python.org/2/library/re.html#re.MatchObject.groupdict)
returned: always
type: dict
sample:
{
'group': 'match'
}
'''
import binascii
import datetime
import errno
import math
import os
import re
import select
import socket
import time
import traceback
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils.common.sys_info import get_platform_subclass
from ansible.module_utils._text import to_native
HAS_PSUTIL = False
PSUTIL_IMP_ERR = None
try:
import psutil
HAS_PSUTIL = True
# just because we can import it on Linux doesn't mean we will use it
except ImportError:
PSUTIL_IMP_ERR = traceback.format_exc()
class TCPConnectionInfo(object):
"""
This is a generic TCP Connection Info strategy class that relies
on the psutil module, which is not ideal for targets, but necessary
for cross-platform support.
A subclass may wish to override some or all of these methods.
- _get_exclude_ips()
- get_active_connections()
All subclasses MUST define platform and distribution (which may be None).
"""
platform = 'Generic'
distribution = None
match_all_ips = {
socket.AF_INET: '0.0.0.0',
socket.AF_INET6: '::',
}
ipv4_mapped_ipv6_address = {
'prefix': '::ffff',
'match_all': '::ffff:0.0.0.0'
}
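# __new__ below dispatches to the most specific platform subclass:
# get_platform_subclass() returns the subclass whose platform/distribution
# attributes match the running system (LinuxTCPConnectionInfo on Linux),
# and falls back to this generic psutil-based class everywhere else.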
def __new__(cls, *args, **kwargs):
new_cls = get_platform_subclass(TCPConnectionInfo)
return super(cls, new_cls).__new__(new_cls)
def __init__(self, module):
self.module = module
self.ips = _convert_host_to_ip(module.params['host'])
self.port = int(self.module.params['port'])
self.exclude_ips = self._get_exclude_ips()
if not HAS_PSUTIL:
module.fail_json(msg=missing_required_lib('psutil'), exception=PSUTIL_IMP_ERR)
def _get_exclude_ips(self):
exclude_hosts = self.module.params['exclude_hosts']
exclude_ips = []
if exclude_hosts is not None:
for host in exclude_hosts:
exclude_ips.extend(_convert_host_to_ip(host))
return exclude_ips
def get_active_connections_count(self):
active_connections = 0
for p in psutil.process_iter():
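# Note: connections() below can raise psutil.Error subclasses
# (NoSuchProcess, AccessDenied, or ZombieProcess on macOS -- see
# issue 72322 above) if a process vanishes mid-iteration; this
# pre-fix revision does not guard against that, which is exactly
# what produces the traceback reported in the issue.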
if hasattr(p, 'get_connections'):
connections = p.get_connections(kind='inet')
else:
connections = p.connections(kind='inet')
for conn in connections:
if conn.status not in self.module.params['active_connection_states']:
continue
if hasattr(conn, 'local_address'):
(local_ip, local_port) = conn.local_address
else:
(local_ip, local_port) = conn.laddr
if self.port != local_port:
continue
if hasattr(conn, 'remote_address'):
(remote_ip, remote_port) = conn.remote_address
else:
(remote_ip, remote_port) = conn.raddr
if (conn.family, remote_ip) in self.exclude_ips:
continue
if any((
(conn.family, local_ip) in self.ips,
(conn.family, self.match_all_ips[conn.family]) in self.ips,
local_ip.startswith(self.ipv4_mapped_ipv6_address['prefix']) and
(conn.family, self.ipv4_mapped_ipv6_address['match_all']) in self.ips,
)):
active_connections += 1
return active_connections
# ===========================================
# Subclass: Linux
class LinuxTCPConnectionInfo(TCPConnectionInfo):
"""
This is a TCP Connection Info evaluation strategy class
that utilizes information from Linux's procfs. While less universal,
it allows Linux targets to avoid requiring an additional library.
"""
platform = 'Linux'
distribution = None
source_file = {
socket.AF_INET: '/proc/net/tcp',
socket.AF_INET6: '/proc/net/tcp6'
}
match_all_ips = {
socket.AF_INET: '00000000',
socket.AF_INET6: '00000000000000000000000000000000',
}
ipv4_mapped_ipv6_address = {
'prefix': '0000000000000000FFFF0000',
'match_all': '0000000000000000FFFF000000000000'
}
local_address_field = 1
remote_address_field = 2
connection_state_field = 3
def __init__(self, module):
self.module = module
self.ips = _convert_host_to_hex(module.params['host'])
self.port = "%0.4X" % int(module.params['port'])
self.exclude_ips = self._get_exclude_ips()
def _get_exclude_ips(self):
exclude_hosts = self.module.params['exclude_hosts']
exclude_ips = []
if exclude_hosts is not None:
for host in exclude_hosts:
exclude_ips.extend(_convert_host_to_hex(host))
return exclude_ips
def get_active_connections_count(self):
active_connections = 0
for family in self.source_file.keys():
if not os.path.isfile(self.source_file[family]):
continue
f = open(self.source_file[family])
for tcp_connection in f.readlines():
tcp_connection = tcp_connection.strip().split()
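# the first line of /proc/net/tcp* is a column-header row whose
# local_address field literally reads 'local_address'; skip it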
if tcp_connection[self.local_address_field] == 'local_address':
continue
if (tcp_connection[self.connection_state_field] not in
[get_connection_state_id(_connection_state) for _connection_state in self.module.params['active_connection_states']]):
continue
(local_ip, local_port) = tcp_connection[self.local_address_field].split(':')
if self.port != local_port:
continue
(remote_ip, remote_port) = tcp_connection[self.remote_address_field].split(':')
if (family, remote_ip) in self.exclude_ips:
continue
if any((
(family, local_ip) in self.ips,
(family, self.match_all_ips[family]) in self.ips,
local_ip.startswith(self.ipv4_mapped_ipv6_address['prefix']) and
(family, self.ipv4_mapped_ipv6_address['match_all']) in self.ips,
)):
active_connections += 1
f.close()
return active_connections
def _convert_host_to_ip(host):
"""
Perform forward DNS resolution on host; an IP address input resolves to the same IP
Args:
host: String with either hostname, IPv4, or IPv6 address
Returns:
List of tuples containing address family and IP
"""
addrinfo = socket.getaddrinfo(host, 80, 0, 0, socket.SOL_TCP)
ips = []
for family, socktype, proto, canonname, sockaddr in addrinfo:
ip = sockaddr[0]
ips.append((family, ip))
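# for IPv4 results, also track the IPv4-mapped IPv6 form so that
# connections the kernel reports under AF_INET6 on dual-stack
# sockets still compare equal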
if family == socket.AF_INET:
ips.append((socket.AF_INET6, "::ffff:" + ip))
return ips
def _convert_host_to_hex(host):
"""
Convert the provided host to the format in /proc/net/tcp*
/proc/net/tcp uses little-endian four-byte hex for IPv4;
/proc/net/tcp6 uses little-endian hex per 4-byte word for IPv6
Args:
host: String with either hostname, IPv4, or IPv6 address
Returns:
List of tuples containing address family and the
little-endian converted host
"""
ips = []
if host is not None:
for family, ip in _convert_host_to_ip(host):
hexip_nf = binascii.b2a_hex(socket.inet_pton(family, ip))
hexip_hf = ""
for i in range(0, len(hexip_nf), 8):
ipgroup_nf = hexip_nf[i:i + 8]
ipgroup_hf = socket.ntohl(int(ipgroup_nf, base=16))
hexip_hf = "%s%08X" % (hexip_hf, ipgroup_hf)
ips.append((family, hexip_hf))
return ips
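# Worked example: for host 127.0.0.1, _convert_host_to_ip() yields
# [(AF_INET, '127.0.0.1'), (AF_INET6, '::ffff:127.0.0.1')], and the
# IPv4 entry becomes '0100007F' here (bytes 7f 00 00 01 byte-swapped
# by ntohl), matching the local_address column of /proc/net/tcp.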
def _timedelta_total_seconds(timedelta):
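# backport of timedelta.total_seconds() (added in Python 2.7), kept
# so the module also runs on interpreters that predate it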
return (
timedelta.microseconds + 0.0 +
(timedelta.seconds + timedelta.days * 24 * 3600) * 10 ** 6) / 10 ** 6
def get_connection_state_id(state):
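# these hex values mirror the kernel's TCP state enum, as shown in
# the 'st' column of /proc/net/tcp*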
connection_state_id = {
'ESTABLISHED': '01',
'SYN_SENT': '02',
'SYN_RECV': '03',
'FIN_WAIT1': '04',
'FIN_WAIT2': '05',
'TIME_WAIT': '06',
}
return connection_state_id[state]
def main():
module = AnsibleModule(
argument_spec=dict(
host=dict(type='str', default='127.0.0.1'),
timeout=dict(type='int', default=300),
connect_timeout=dict(type='int', default=5),
delay=dict(type='int', default=0),
port=dict(type='int'),
active_connection_states=dict(type='list', elements='str', default=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT']),
path=dict(type='path'),
search_regex=dict(type='str'),
state=dict(type='str', default='started', choices=['absent', 'drained', 'present', 'started', 'stopped']),
exclude_hosts=dict(type='list', elements='str'),
sleep=dict(type='int', default=1),
msg=dict(type='str'),
),
)
host = module.params['host']
timeout = module.params['timeout']
connect_timeout = module.params['connect_timeout']
delay = module.params['delay']
port = module.params['port']
state = module.params['state']
path = module.params['path']
search_regex = module.params['search_regex']
msg = module.params['msg']
if search_regex is not None:
compiled_search_re = re.compile(search_regex, re.MULTILINE)
else:
compiled_search_re = None
match_groupdict = {}
match_groups = ()
if port and path:
module.fail_json(msg="port and path parameter can not both be passed to wait_for", elapsed=0)
if path and state == 'stopped':
module.fail_json(msg="state=stopped should only be used for checking a port in the wait_for module", elapsed=0)
if path and state == 'drained':
module.fail_json(msg="state=drained should only be used for checking a port in the wait_for module", elapsed=0)
if module.params['exclude_hosts'] is not None and state != 'drained':
module.fail_json(msg="exclude_hosts should only be with state=drained", elapsed=0)
for _connection_state in module.params['active_connection_states']:
try:
get_connection_state_id(_connection_state)
except Exception:
module.fail_json(msg="unknown active_connection_state (%s) defined" % _connection_state, elapsed=0)
start = datetime.datetime.utcnow()
if delay:
time.sleep(delay)
if not port and not path and state != 'drained':
time.sleep(timeout)
elif state in ['absent', 'stopped']:
# first wait for the stop condition
end = start + datetime.timedelta(seconds=timeout)
while datetime.datetime.utcnow() < end:
if path:
try:
if not os.access(path, os.F_OK):
break
except IOError:
break
elif port:
try:
s = socket.create_connection((host, port), connect_timeout)
s.shutdown(socket.SHUT_RDWR)
s.close()
except Exception:
break
# Conditions not yet met, wait and try again
time.sleep(module.params['sleep'])
else:
elapsed = datetime.datetime.utcnow() - start
if port:
module.fail_json(msg=msg or "Timeout when waiting for %s:%s to stop." % (host, port), elapsed=elapsed.seconds)
elif path:
module.fail_json(msg=msg or "Timeout when waiting for %s to be absent." % (path), elapsed=elapsed.seconds)
elif state in ['started', 'present']:
# wait for start condition
end = start + datetime.timedelta(seconds=timeout)
while datetime.datetime.utcnow() < end:
if path:
try:
os.stat(path)
except OSError as e:
# If anything except file not present, throw an error
if e.errno != 2:
elapsed = datetime.datetime.utcnow() - start
module.fail_json(msg=msg or "Failed to stat %s, %s" % (path, e.strerror), elapsed=elapsed.seconds)
# file doesn't exist yet, so continue
else:
# File exists. Are there additional things to check?
if not compiled_search_re:
# nope, succeed!
break
try:
f = open(path)
try:
search = re.search(compiled_search_re, f.read())
if search:
if search.groupdict():
match_groupdict = search.groupdict()
if search.groups():
match_groups = search.groups()
break
finally:
f.close()
except IOError:
pass
elif port:
alt_connect_timeout = math.ceil(_timedelta_total_seconds(end - datetime.datetime.utcnow()))
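# cap each connection attempt at the time remaining in the overall
# window, so the final attempt cannot overshoot 'timeout'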
try:
s = socket.create_connection((host, port), min(connect_timeout, alt_connect_timeout))
except Exception:
# Failed to connect by connect_timeout. wait and try again
pass
else:
# Connected -- are there additional conditions?
if compiled_search_re:
data = ''
matched = False
while datetime.datetime.utcnow() < end:
max_timeout = math.ceil(_timedelta_total_seconds(end - datetime.datetime.utcnow()))
(readable, w, e) = select.select([s], [], [], max_timeout)
if not readable:
# No new data. Probably means our timeout
# expired
continue
response = s.recv(1024)
if not response:
# Server shutdown
break
data += to_native(response, errors='surrogate_or_strict')
if re.search(compiled_search_re, data):
matched = True
break
# Shutdown the client socket
try:
s.shutdown(socket.SHUT_RDWR)
except socket.error as e:
if e.errno != errno.ENOTCONN:
raise
# else, the server broke the connection on its end, assume it's not ready
else:
s.close()
if matched:
# Found our string, success!
break
else:
# Connection established, success!
try:
s.shutdown(socket.SHUT_RDWR)
except socket.error as e:
if e.errno != errno.ENOTCONN:
raise
# else, the server broke the connection on its end, assume it's not ready
else:
s.close()
break
# Conditions not yet met, wait and try again
time.sleep(module.params['sleep'])
else: # while-else
# Timeout expired
elapsed = datetime.datetime.utcnow() - start
if port:
if search_regex:
module.fail_json(msg=msg or "Timeout when waiting for search string %s in %s:%s" % (search_regex, host, port), elapsed=elapsed.seconds)
else:
module.fail_json(msg=msg or "Timeout when waiting for %s:%s" % (host, port), elapsed=elapsed.seconds)
elif path:
if search_regex:
module.fail_json(msg=msg or "Timeout when waiting for search string %s in %s" % (search_regex, path), elapsed=elapsed.seconds)
else:
module.fail_json(msg=msg or "Timeout when waiting for file %s" % (path), elapsed=elapsed.seconds)
elif state == 'drained':
# wait until all active connections are gone
end = start + datetime.timedelta(seconds=timeout)
tcpconns = TCPConnectionInfo(module)
while datetime.datetime.utcnow() < end:
try:
if tcpconns.get_active_connections_count() == 0:
break
except IOError:
pass
# Conditions not yet met, wait and try again
time.sleep(module.params['sleep'])
else:
elapsed = datetime.datetime.utcnow() - start
module.fail_json(msg=msg or "Timeout when waiting for %s:%s to drain" % (host, port), elapsed=elapsed.seconds)
elapsed = datetime.datetime.utcnow() - start
module.exit_json(state=state, port=port, search_regex=search_regex, match_groups=match_groups, match_groupdict=match_groupdict, path=path,
elapsed=elapsed.seconds)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,322 |
ZombieProcess stack trace in wait_for integration test on macOS
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The `wait_for` integration test, when run on macOS hosts in combination with some other tests, results in a stack trace due to a zombie process. This appears in the `devel` nightlies and occasionally in PRs. I am unable to duplicate this issue when running only the `wait_for` test, so it seems to be a test interaction issue. [Example test failure](https://app.shippable.com/github/ansible/ansible/runs/175420/17/tests).
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
`test/integration/targets/wait_for`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
devel
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
Shippable
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Shippable
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Run the nightly integration test.
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Tests pass.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Test fails with a stack trace.
<!--- Paste verbatim command output between quotes -->
```paste below
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/psutil/_psosx.py", line 363, in catch_zombie
yield
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/psutil/_psosx.py", line 532, in connections
rawlist = cext.proc_connections(self.pid, families, types)
ProcessLookupError: [Errno 3] No such process (originated from proc_pidinfo())
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/root/.ansible/tmp/ansible-tmp-1603438181.978554-24193-108782891618547/AnsiballZ_wait_for.py", line 140, in <module>
_ansiballz_main()
File "/var/root/.ansible/tmp/ansible-tmp-1603438181.978554-24193-108782891618547/AnsiballZ_wait_for.py", line 132, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/var/root/.ansible/tmp/ansible-tmp-1603438181.978554-24193-108782891618547/AnsiballZ_wait_for.py", line 81, in invoke_module
runpy.run_module(mod_name='ansible.modules.wait_for', init_globals=None, run_name='__main__', alter_sys=True)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py", line 207, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/tmp/ansible_wait_for_payload_dp2d351v/ansible_wait_for_payload.zip/ansible/modules/wait_for.py", line 667, in <module>
File "/tmp/ansible_wait_for_payload_dp2d351v/ansible_wait_for_payload.zip/ansible/modules/wait_for.py", line 651, in main
File "/tmp/ansible_wait_for_payload_dp2d351v/ansible_wait_for_payload.zip/ansible/modules/wait_for.py", line 295, in get_active_connections_count
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/psutil/__init__.py", line 1182, in connections
return self._proc.connections(kind)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/psutil/_psosx.py", line 342, in wrapper
return fun(self, *args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/psutil/_psosx.py", line 532, in connections
rawlist = cext.proc_connections(self.pid, families, types)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/contextlib.py", line 131, in __exit__
self.gen.throw(type, value, traceback)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/psutil/_psosx.py", line 374, in catch_zombie
raise ZombieProcess(proc.pid, proc._name, proc._ppid)
psutil.ZombieProcess: psutil.ZombieProcess process still exists but it's a zombie (pid=23005)
```
|
https://github.com/ansible/ansible/issues/72322
|
https://github.com/ansible/ansible/pull/72401
|
e73a0b2460b41c27fd22d286dd2f4407f69f12ed
|
fb09fd2a2301db485f5e15f77b59c31e7bc1645a
| 2020-10-23T18:43:36Z |
python
| 2020-10-30T01:40:31Z |
test/integration/targets/wait_for/files/zombie.py
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,322 |
ZombieProcess stack trace in wait_for integration test on macOS
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The `wait_for` integration test, when run on macOS hosts in combination with some other tests, results in a stack trace due to a zombie process. This appears in the `devel` nightlies and occasionally in PRs. I am unable to duplicate this issue when running only the `wait_for` test, so it seems to be a test interaction issue. [Example test failure](https://app.shippable.com/github/ansible/ansible/runs/175420/17/tests).
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
`test/integration/targets/wait_for`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
devel
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
Shippable
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Shippable
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Run the nightly integration test.
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Tests pass.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Test fails with stack trace
<!--- Paste verbatim command output between quotes -->
```paste below
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/psutil/_psosx.py", line 363, in catch_zombie
yield
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/psutil/_psosx.py", line 532, in connections
rawlist = cext.proc_connections(self.pid, families, types)
ProcessLookupError: [Errno 3] No such process (originated from proc_pidinfo())
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/root/.ansible/tmp/ansible-tmp-1603438181.978554-24193-108782891618547/AnsiballZ_wait_for.py", line 140, in <module>
_ansiballz_main()
File "/var/root/.ansible/tmp/ansible-tmp-1603438181.978554-24193-108782891618547/AnsiballZ_wait_for.py", line 132, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/var/root/.ansible/tmp/ansible-tmp-1603438181.978554-24193-108782891618547/AnsiballZ_wait_for.py", line 81, in invoke_module
runpy.run_module(mod_name='ansible.modules.wait_for', init_globals=None, run_name='__main__', alter_sys=True)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py", line 207, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/tmp/ansible_wait_for_payload_dp2d351v/ansible_wait_for_payload.zip/ansible/modules/wait_for.py", line 667, in <module>
File "/tmp/ansible_wait_for_payload_dp2d351v/ansible_wait_for_payload.zip/ansible/modules/wait_for.py", line 651, in main
File "/tmp/ansible_wait_for_payload_dp2d351v/ansible_wait_for_payload.zip/ansible/modules/wait_for.py", line 295, in get_active_connections_count
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/psutil/__init__.py", line 1182, in connections
return self._proc.connections(kind)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/psutil/_psosx.py", line 342, in wrapper
return fun(self, *args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/psutil/_psosx.py", line 532, in connections
rawlist = cext.proc_connections(self.pid, families, types)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/contextlib.py", line 131, in __exit__
self.gen.throw(type, value, traceback)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/psutil/_psosx.py", line 374, in catch_zombie
raise ZombieProcess(proc.pid, proc._name, proc._ppid)
psutil.ZombieProcess: psutil.ZombieProcess process still exists but it's a zombie (pid=23005)
```
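For context, a minimal sketch (an illustration, not the change from the linked PR) of how a psutil connection scan can tolerate processes that exit or turn into zombies mid-iteration:
```python
import psutil

def count_established(port):
    """Count ESTABLISHED connections to a local port, skipping
    processes that vanish or become zombies during the scan."""
    count = 0
    for proc in psutil.process_iter():
        try:
            connections = proc.connections(kind='inet')
        except (psutil.NoSuchProcess, psutil.ZombieProcess, psutil.AccessDenied):
            # the process went away (or is now a zombie) between enumeration
            # and inspection; skip it instead of letting the exception escape
            continue
        for conn in connections:
            if conn.status == psutil.CONN_ESTABLISHED and conn.laddr and conn.laddr.port == port:
                count += 1
    return count
```
`psutil.ZombieProcess` subclasses `psutil.NoSuchProcess`, so catching the latter alone would also cover this traceback; both are listed above for clarity.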
|
https://github.com/ansible/ansible/issues/72322
|
https://github.com/ansible/ansible/pull/72401
|
e73a0b2460b41c27fd22d286dd2f4407f69f12ed
|
fb09fd2a2301db485f5e15f77b59c31e7bc1645a
| 2020-10-23T18:43:36Z |
python
| 2020-10-30T01:40:31Z |
test/integration/targets/wait_for/tasks/main.yml
|
---
- name: test wait_for with delegate_to
wait_for:
timeout: 2
delegate_to: localhost
register: waitfor
- assert:
that:
- waitfor is successful
- waitfor.elapsed >= 2
- name: setup create a directory to serve files from
file:
dest: "{{ files_dir }}"
state: directory
- name: setup webserver
copy:
src: "testserver.py"
dest: "{{ output_dir }}/testserver.py"
- name: setup a path
file:
path: "{{ output_dir }}/wait_for_file"
state: touch
- name: setup remove a file after 3s
shell: sleep 3 && rm {{ output_dir }}/wait_for_file
async: 20
poll: 0
- name: test for absent path
wait_for:
path: "{{ output_dir }}/wait_for_file"
state: absent
timeout: 20
register: waitfor
- name: verify test for absent path
assert:
that:
- waitfor is successful
- waitfor.path == "{{ output_dir | expanduser }}/wait_for_file"
- waitfor.elapsed >= 2
- waitfor.elapsed <= 15
- name: setup create a file after 3s
shell: sleep 3 && touch {{ output_dir }}/wait_for_file
async: 20
poll: 0
- name: test for present path
wait_for:
path: "{{ output_dir }}/wait_for_file"
timeout: 5
register: waitfor
- name: verify test for present path
assert:
that:
- waitfor is successful
- waitfor.path == "{{ output_dir | expanduser }}/wait_for_file"
- waitfor.elapsed >= 2
- waitfor.elapsed <= 15
- name: setup write keyword to file after 3s
shell: sleep 3 && echo completed > {{output_dir}}/wait_for_keyword
async: 20
poll: 0
- name: test wait for keyword in file
wait_for:
path: "{{output_dir}}/wait_for_keyword"
search_regex: completed
timeout: 5
register: waitfor
- name: verify test wait for keyword in file
assert:
that:
- waitfor is successful
- "waitfor.search_regex == 'completed'"
- waitfor.elapsed >= 2
- waitfor.elapsed <= 15
- name: setup write keyword to file after 3s
shell: sleep 3 && echo "completed data 123" > {{output_dir}}/wait_for_keyword
async: 20
poll: 0
- name: test wait for keyword in file with match groups
wait_for:
path: "{{output_dir}}/wait_for_keyword"
search_regex: completed (?P<foo>\w+) ([0-9]+)
timeout: 5
register: waitfor
- name: verify test wait for keyword in file with match groups
assert:
that:
- waitfor is successful
- waitfor.elapsed >= 2
- waitfor.elapsed <= 15
- waitfor['match_groupdict'] | length == 1
- waitfor['match_groupdict']['foo'] == 'data'
- waitfor['match_groups'] == ['data', '123']
- name: test wait for port timeout
wait_for:
port: 12121
timeout: 3
register: waitfor
ignore_errors: true
- name: verify test wait for port timeout
assert:
that:
- waitfor is failed
- waitfor.elapsed == 3
- "waitfor.msg == 'Timeout when waiting for 127.0.0.1:12121'"
- name: test fail with custom msg
wait_for:
port: 12121
msg: fail with custom message
timeout: 3
register: waitfor
ignore_errors: true
- name: verify test fail with custom msg
assert:
that:
- waitfor is failed
- waitfor.elapsed == 3
- "waitfor.msg == 'fail with custom message'"
- name: setup start SimpleHTTPServer
shell: sleep 3 && cd {{ files_dir }} && {{ ansible_python.executable }} {{ output_dir}}/testserver.py {{ http_port }}
async: 120 # this test set can take ~1m to run on FreeBSD (via Shippable)
poll: 0
- name: test wait for port with sleep
wait_for:
port: "{{ http_port }}"
sleep: 3
register: waitfor
- name: verify test wait for port sleep
assert:
that:
- waitfor is successful
- waitfor is not changed
- "waitfor.port == {{ http_port }}"
- name: install psutil using pip (non-Linux only)
pip:
name: psutil
when: ansible_system != 'Linux'
- name: test wait for port drained
wait_for:
port: "{{ http_port }}"
state: drained
register: waitfor
- name: verify test wait for port
assert:
that:
- waitfor is successful
- waitfor is not changed
- "waitfor.port == {{ http_port }}"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,431 |
ansible-test integration on shippable is no longer reporting any test data
|
##### SUMMARY
Neither the overview (see e.g. https://app.shippable.com/github/ansible/ansible/runs/175858/summary/console) nor the detail view for failed tests (https://app.shippable.com/github/ansible/ansible/runs/175858/82/tests) shows anything.
This only happens for ansible-test on `devel` when using the `integration` subcommand; `sanity` and `units` work fine.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
devel
|
https://github.com/ansible/ansible/issues/72431
|
https://github.com/ansible/ansible/pull/72441
|
ccc63abc8ed8b7f7a3e5be436ccde57239a58a1d
|
6b30efa454916341d466778aa358a902f227e401
| 2020-11-01T20:30:19Z |
python
| 2020-11-02T23:04:41Z |
test/integration/targets/collections/runme.sh
|
#!/usr/bin/env bash
set -eux
export ANSIBLE_COLLECTIONS_PATH=$PWD/collection_root_user:$PWD/collection_root_sys
export ANSIBLE_GATHERING=explicit
export ANSIBLE_GATHER_SUBSET=minimal
export ANSIBLE_HOST_PATTERN_MISMATCH=error
export ANSIBLE_COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH=0
# FUTURE: just use INVENTORY_PATH as-is once ansible-test sets the right dir
ipath=../../$(basename "${INVENTORY_PATH:-../../inventory}")
export INVENTORY_PATH="$ipath"
echo "--- validating callbacks"
# validate FQ callbacks in ansible-playbook
ANSIBLE_CALLBACKS_ENABLED=testns.testcoll.usercallback ansible-playbook noop.yml | grep "usercallback says ok"
# use adhoc for the rest of these tests, must force it to load other callbacks
export ANSIBLE_LOAD_CALLBACK_PLUGINS=1
# validate redirected callback
ANSIBLE_CALLBACKS_ENABLED=formerly_core_callback ansible localhost -m debug 2>&1 | grep -- "usercallback says ok"
## validate missing redirected callback
ANSIBLE_CALLBACKS_ENABLED=formerly_core_missing_callback ansible localhost -m debug 2>&1 | grep -- "Skipping callback plugin 'formerly_core_missing_callback'"
## validate redirected + removed callback (fatal)
ANSIBLE_CALLBACKS_ENABLED=formerly_core_removed_callback ansible localhost -m debug 2>&1 | grep -- "testns.testcoll.removedcallback has been removed"
# validate avoiding duplicate loading of callback, even if using diff names
[ "$(ANSIBLE_CALLBACKS_ENABLED=testns.testcoll.usercallback,formerly_core_callback ansible localhost -m debug 2>&1 | grep -c 'usercallback says ok')" = "1" ]
# ensure non existing callback does not crash ansible
ANSIBLE_CALLBACKS_ENABLED=charlie.gomez.notme ansible localhost -m debug 2>&1 | grep -- "Skipping callback plugin 'charlie.gomez.notme'"
unset ANSIBLE_LOAD_CALLBACK_PLUGINS
# adhoc normally shouldn't load non-default plugins- let's be sure
output=$(ANSIBLE_CALLBACK_ENABLED=testns.testcoll.usercallback ansible localhost -m debug)
if [[ "${output}" =~ "usercallback says ok" ]]; then echo fail; exit 1; fi
echo "--- validating docs"
# test documentation
ansible-doc testns.testcoll.testmodule -vvv | grep -- "- normal_doc_frag"
# same with symlink
ln -s "${PWD}/testcoll2" ./collection_root_sys/ansible_collections/testns/testcoll2
ansible-doc testns.testcoll2.testmodule2 -vvv | grep "Test module"
# now test we can list with symlink
ansible-doc -l -vvv| grep "testns.testcoll2.testmodule2"
echo "testing bad doc_fragments (expected ERROR message follows)"
# test documentation failure
ansible-doc testns.testcoll.testmodule_bad_docfrags -vvv 2>&1 | grep -- "unknown doc_fragment"
echo "--- validating default collection"
# test adhoc default collection resolution (use unqualified collection module with playbook dir under its collection)
echo "testing adhoc default collection support with explicit playbook dir"
ANSIBLE_PLAYBOOK_DIR=./collection_root_user/ansible_collections/testns/testcoll ansible localhost -m testmodule
# we need multiple plays, and conditional import_playbook is noisy and causes problems, so choose here which one to use...
if [[ ${INVENTORY_PATH} == *.winrm ]]; then
export TEST_PLAYBOOK=windows.yml
else
export TEST_PLAYBOOK=posix.yml
echo "testing default collection support"
ansible-playbook -i "${INVENTORY_PATH}" collection_root_user/ansible_collections/testns/testcoll/playbooks/default_collection_playbook.yml "$@"
fi
echo "--- validating collections support in playbooks/roles"
# run test playbooks
ansible-playbook -i "${INVENTORY_PATH}" -v "${TEST_PLAYBOOK}" "$@"
if [[ ${INVENTORY_PATH} != *.winrm ]]; then
ansible-playbook -i "${INVENTORY_PATH}" -v invocation_tests.yml "$@"
fi
echo "--- validating bypass_host_loop with collection search"
ansible-playbook -i host1,host2, -v test_bypass_host_loop.yml "$@"
echo "--- validating inventory"
# test collection inventories
ansible-playbook inventory_test.yml -i a.statichost.yml -i redirected.statichost.yml "$@"
if [[ ${INVENTORY_PATH} != *.winrm ]]; then
# base invocation tests
ansible-playbook -i "${INVENTORY_PATH}" -v invocation_tests.yml "$@"
# run playbook from collection, test default again, but with FQCN
ansible-playbook -i "${INVENTORY_PATH}" testns.testcoll.default_collection_playbook.yml "$@"
# run playbook from collection, test default again, but with FQCN and no extension
ansible-playbook -i "${INVENTORY_PATH}" testns.testcoll.default_collection_playbook "$@"
# run playbook that imports from collection
ansible-playbook -i "${INVENTORY_PATH}" import_collection_pb.yml "$@"
fi
# test collection inventories
ansible-playbook inventory_test.yml -i a.statichost.yml -i redirected.statichost.yml "$@"
# test adjacent with --playbook-dir
export ANSIBLE_COLLECTIONS_PATH=''
ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED=1 ansible-inventory --list --export --playbook-dir=. -v "$@"
# use an inventory source with caching enabled
ansible-playbook -i a.statichost.yml -i ./cache.statichost.yml -v check_populated_inventory.yml
# Check that the inventory source with caching enabled was stored
if [[ "$(find ./inventory_cache -type f ! -path "./inventory_cache/.keep" | wc -l)" -ne "1" ]]; then
echo "Failed to find the expected single cache"
exit 1
fi
CACHEFILE="$(find ./inventory_cache -type f ! -path './inventory_cache/.keep')"
if [[ $CACHEFILE != ./inventory_cache/prefix_* ]]; then
echo "Unexpected cache file"
exit 1
fi
# Check the cache for the expected hosts
if [[ "$(grep -wc "cache_host_a" "$CACHEFILE")" -ne "1" ]]; then
echo "Failed to cache host as expected"
exit 1
fi
if [[ "$(grep -wc "dynamic_host_a" "$CACHEFILE")" -ne "0" ]]; then
echo "Cached an incorrect source"
exit 1
fi
./vars_plugin_tests.sh
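# Added note (illustration, not part of the original script): the caching checks above
# assume an inventory source with caching enabled; such a source looks roughly like this
# (the plugin name is a placeholder; the cache_* keys are the standard inventory cache options):
#
#   plugin: testns.testcoll.statichost
#   hostname: cache_host_a
#   cache: true
#   cache_plugin: jsonfile
#   cache_connection: ./inventory_cache
#   cache_prefix: prefix_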
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,431 |
ansible-test integration on shippable is no longer reporting any test data
|
##### SUMMARY
Neither the overview (see e.g. https://app.shippable.com/github/ansible/ansible/runs/175858/summary/console) nor the detail view for failed tests (https://app.shippable.com/github/ansible/ansible/runs/175858/82/tests) shows anything.
This only happens for ansible-test on `devel` when using the `integration` subcommand; `sanity` and `units` work fine.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
devel
|
https://github.com/ansible/ansible/issues/72431
|
https://github.com/ansible/ansible/pull/72441
|
ccc63abc8ed8b7f7a3e5be436ccde57239a58a1d
|
6b30efa454916341d466778aa358a902f227e401
| 2020-11-01T20:30:19Z |
python
| 2020-11-02T23:04:41Z |
test/lib/ansible_test/_internal/executor.py
|
"""Execute Ansible tests."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
import os
import datetime
import re
import time
import textwrap
import functools
import hashlib
import difflib
import filecmp
import random
import string
import shutil
from . import types as t
from .thread import (
WrappedThread,
)
from .core_ci import (
AnsibleCoreCI,
SshKey,
)
from .manage_ci import (
ManageWindowsCI,
ManageNetworkCI,
)
from .cloud import (
cloud_filter,
cloud_init,
get_cloud_environment,
get_cloud_platforms,
CloudEnvironmentConfig,
)
from .io import (
make_dirs,
open_text_file,
read_binary_file,
read_text_file,
write_text_file,
)
from .util import (
ApplicationWarning,
ApplicationError,
SubprocessError,
display,
remove_tree,
find_executable,
raw_command,
get_available_port,
generate_pip_command,
find_python,
cmd_quote,
ANSIBLE_LIB_ROOT,
ANSIBLE_TEST_DATA_ROOT,
ANSIBLE_TEST_CONFIG_ROOT,
get_ansible_version,
tempdir,
open_zipfile,
SUPPORTED_PYTHON_VERSIONS,
str_to_version,
)
from .util_common import (
get_docker_completion,
get_network_settings,
get_remote_completion,
get_python_path,
intercept_command,
named_temporary_file,
run_command,
write_json_test_results,
ResultType,
handle_layout_messages,
)
from .docker_util import (
docker_pull,
docker_run,
docker_available,
docker_rm,
get_docker_container_id,
get_docker_container_ip,
get_docker_hostname,
get_docker_preferred_network_name,
is_docker_user_defined_network,
)
from .ansible_util import (
ansible_environment,
check_pyyaml,
)
from .target import (
IntegrationTarget,
walk_internal_targets,
walk_posix_integration_targets,
walk_network_integration_targets,
walk_windows_integration_targets,
TIntegrationTarget,
)
from .ci import (
get_ci_provider,
)
from .classification import (
categorize_changes,
)
from .config import (
TestConfig,
EnvironmentConfig,
IntegrationConfig,
NetworkIntegrationConfig,
PosixIntegrationConfig,
ShellConfig,
WindowsIntegrationConfig,
TIntegrationConfig,
)
from .metadata import (
ChangeDescription,
)
from .integration import (
integration_test_environment,
integration_test_config_file,
setup_common_temp_dir,
get_inventory_relative_path,
check_inventory,
delegate_inventory,
)
from .data import (
data_context,
)
HTTPTESTER_HOSTS = (
'ansible.http.tests',
'sni1.ansible.http.tests',
'fail.ansible.http.tests',
)
def check_startup():
"""Checks to perform at startup before running commands."""
check_legacy_modules()
def check_legacy_modules():
"""Detect conflicts with legacy core/extras module directories to avoid problems later."""
for directory in 'core', 'extras':
path = 'lib/ansible/modules/%s' % directory
for root, _dir_names, file_names in os.walk(path):
if file_names:
# the directory shouldn't exist, but if it does, it must contain no files
raise ApplicationError('Files prohibited in "%s". '
'These are most likely legacy modules from version 2.2 or earlier.' % root)
def create_shell_command(command):
"""
:type command: list[str]
:rtype: list[str]
"""
optional_vars = (
'TERM',
)
cmd = ['/usr/bin/env']
cmd += ['%s=%s' % (var, os.environ[var]) for var in optional_vars if var in os.environ]
cmd += command
return cmd
def get_setuptools_version(args, python): # type: (EnvironmentConfig, str) -> t.Tuple[int]
"""Return the setuptools version for the given python."""
try:
return str_to_version(raw_command([python, '-c', 'import setuptools; print(setuptools.__version__)'], capture=True)[0])
except SubprocessError:
if args.explain:
return tuple() # ignore errors in explain mode in case setuptools is not already installed
raise
def get_cryptography_requirement(args, python_version): # type: (EnvironmentConfig, str) -> str
"""
Return the correct cryptography requirement for the given python version.
The version of cryptography installed depends on the python version and setuptools version.
"""
python = find_python(python_version)
setuptools_version = get_setuptools_version(args, python)
if setuptools_version >= (18, 5):
if python_version == '2.6':
# cryptography 2.2+ requires python 2.7+
# see https://github.com/pyca/cryptography/blob/master/CHANGELOG.rst#22---2018-03-19
cryptography = 'cryptography < 2.2'
else:
cryptography = 'cryptography'
else:
# cryptography 2.1+ requires setuptools 18.5+
# see https://github.com/pyca/cryptography/blob/62287ae18383447585606b9d0765c0f1b8a9777c/setup.py#L26
cryptography = 'cryptography < 2.1'
return cryptography
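# Illustrative summary of the selection above (added comment, not in the original source):
#   setuptools >= 18.5 and python == 2.6 -> 'cryptography < 2.2'
#   setuptools >= 18.5 and python != 2.6 -> 'cryptography'
#   setuptools <  18.5 (any python)      -> 'cryptography < 2.1'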
def install_command_requirements(args, python_version=None, context=None, enable_pyyaml_check=False):
"""
:type args: EnvironmentConfig
:type python_version: str | None
:type context: str | None
:type enable_pyyaml_check: bool
"""
if not args.explain:
make_dirs(ResultType.COVERAGE.path)
make_dirs(ResultType.DATA.path)
if isinstance(args, ShellConfig):
if args.raw:
return
generate_egg_info(args)
if not args.requirements:
return
if isinstance(args, ShellConfig):
return
packages = []
if isinstance(args, TestConfig):
if args.coverage:
packages.append('coverage')
if args.junit:
packages.append('junit-xml')
if not python_version:
python_version = args.python_version
pip = generate_pip_command(find_python(python_version))
# skip packages which have already been installed for python_version
try:
package_cache = install_command_requirements.package_cache
except AttributeError:
package_cache = install_command_requirements.package_cache = {}
installed_packages = package_cache.setdefault(python_version, set())
skip_packages = [package for package in packages if package in installed_packages]
for package in skip_packages:
packages.remove(package)
installed_packages.update(packages)
if args.command != 'sanity':
install_ansible_test_requirements(args, pip)
# make sure setuptools is available before trying to install cryptography
# the installed version of setuptools affects the version of cryptography to install
run_command(args, generate_pip_install(pip, '', packages=['setuptools']))
# install the latest cryptography version that the current requirements can support
# use a custom constraints file to avoid the normal constraints file overriding the chosen version of cryptography
# if not installed here later install commands may try to install an unsupported version due to the presence of older setuptools
# this is done instead of upgrading setuptools to allow tests to function with older distribution provided versions of setuptools
run_command(args, generate_pip_install(pip, '',
packages=[get_cryptography_requirement(args, python_version)],
constraints=os.path.join(ANSIBLE_TEST_DATA_ROOT, 'cryptography-constraints.txt')))
commands = [generate_pip_install(pip, args.command, packages=packages, context=context)]
if isinstance(args, IntegrationConfig):
for cloud_platform in get_cloud_platforms(args):
commands.append(generate_pip_install(pip, '%s.cloud.%s' % (args.command, cloud_platform)))
commands = [cmd for cmd in commands if cmd]
if not commands:
return # no need to detect changes or run pip check since we are not making any changes
# only look for changes when more than one requirements file is needed
detect_pip_changes = len(commands) > 1
# first pass to install requirements, changes expected unless environment is already set up
install_ansible_test_requirements(args, pip)
changes = run_pip_commands(args, pip, commands, detect_pip_changes)
if changes:
# second pass to check for conflicts in requirements, changes are not expected here
changes = run_pip_commands(args, pip, commands, detect_pip_changes)
if changes:
raise ApplicationError('Conflicts detected in requirements. The following commands reported changes during verification:\n%s' %
'\n'.join((' '.join(cmd_quote(c) for c in cmd) for cmd in changes)))
if args.pip_check:
# ask pip to check for conflicts between installed packages
try:
run_command(args, pip + ['check', '--disable-pip-version-check'], capture=True)
except SubprocessError as ex:
if ex.stderr.strip() == 'ERROR: unknown command "check"':
display.warning('Cannot check pip requirements for conflicts because "pip check" is not supported.')
else:
raise
if enable_pyyaml_check:
# pyyaml may have been one of the requirements that was installed, so perform an optional check for it
check_pyyaml(args, python_version, required=False)
def install_ansible_test_requirements(args, pip): # type: (EnvironmentConfig, t.List[str]) -> None
"""Install requirements for ansible-test for the given pip if not already installed."""
try:
installed = install_command_requirements.installed
except AttributeError:
installed = install_command_requirements.installed = set()
if tuple(pip) in installed:
return
# make sure basic ansible-test requirements are met, including making sure that pip is recent enough to support constraints
# virtualenvs created by older distributions may include very old pip versions, such as those created in the centos6 test container (pip 6.0.8)
run_command(args, generate_pip_install(pip, 'ansible-test', use_constraints=False))
installed.add(tuple(pip))
def run_pip_commands(args, pip, commands, detect_pip_changes=False):
"""
:type args: EnvironmentConfig
:type pip: list[str]
:type commands: list[list[str]]
:type detect_pip_changes: bool
:rtype: list[list[str]]
"""
changes = []
after_list = pip_list(args, pip) if detect_pip_changes else None
for cmd in commands:
if not cmd:
continue
before_list = after_list
run_command(args, cmd)
after_list = pip_list(args, pip) if detect_pip_changes else None
if before_list != after_list:
changes.append(cmd)
return changes
def pip_list(args, pip):
"""
:type args: EnvironmentConfig
:type pip: list[str]
:rtype: str
"""
stdout = run_command(args, pip + ['list'], capture=True)[0]
return stdout
def generate_egg_info(args):
"""
:type args: EnvironmentConfig
"""
if args.explain:
return
ansible_version = get_ansible_version()
# inclusion of the version number in the path is optional
# see: https://setuptools.readthedocs.io/en/latest/formats.html#filename-embedded-metadata
egg_info_path = ANSIBLE_LIB_ROOT + '_base-%s.egg-info' % ansible_version
if os.path.exists(egg_info_path):
return
egg_info_path = ANSIBLE_LIB_ROOT + '_base.egg-info'
if os.path.exists(egg_info_path):
return
# minimal PKG-INFO stub following the format defined in PEP 241
# required for older setuptools versions to avoid a traceback when importing pkg_resources from packages like cryptography
# newer setuptools versions are happy with an empty directory
# including a stub here means we don't need to locate the existing file or have setup.py generate it when running from source
pkg_info = '''
Metadata-Version: 1.0
Name: ansible
Version: %s
Platform: UNKNOWN
Summary: Radically simple IT automation
Author-email: [email protected]
License: GPLv3+
''' % get_ansible_version()
pkg_info_path = os.path.join(egg_info_path, 'PKG-INFO')
write_text_file(pkg_info_path, pkg_info.lstrip(), create_directories=True)
def generate_pip_install(pip, command, packages=None, constraints=None, use_constraints=True, context=None):
"""
:type pip: list[str]
:type command: str
:type packages: list[str] | None
:type constraints: str | None
:type use_constraints: bool
:type context: str | None
:rtype: list[str] | None
"""
constraints = constraints or os.path.join(ANSIBLE_TEST_DATA_ROOT, 'requirements', 'constraints.txt')
requirements = os.path.join(ANSIBLE_TEST_DATA_ROOT, 'requirements', '%s.txt' % ('%s.%s' % (command, context) if context else command))
content_constraints = None
options = []
if os.path.exists(requirements) and os.path.getsize(requirements):
options += ['-r', requirements]
if command == 'sanity' and data_context().content.is_ansible:
requirements = os.path.join(data_context().content.sanity_path, 'code-smell', '%s.requirements.txt' % context)
if os.path.exists(requirements) and os.path.getsize(requirements):
options += ['-r', requirements]
if command == 'units':
requirements = os.path.join(data_context().content.unit_path, 'requirements.txt')
if os.path.exists(requirements) and os.path.getsize(requirements):
options += ['-r', requirements]
content_constraints = os.path.join(data_context().content.unit_path, 'constraints.txt')
if command in ('integration', 'windows-integration', 'network-integration'):
requirements = os.path.join(data_context().content.integration_path, 'requirements.txt')
if os.path.exists(requirements) and os.path.getsize(requirements):
options += ['-r', requirements]
requirements = os.path.join(data_context().content.integration_path, '%s.requirements.txt' % command)
if os.path.exists(requirements) and os.path.getsize(requirements):
options += ['-r', requirements]
content_constraints = os.path.join(data_context().content.integration_path, 'constraints.txt')
if command.startswith('integration.cloud.'):
content_constraints = os.path.join(data_context().content.integration_path, 'constraints.txt')
if packages:
options += packages
if not options:
return None
if use_constraints:
if content_constraints and os.path.exists(content_constraints) and os.path.getsize(content_constraints):
# listing content constraints first gives them priority over constraints provided by ansible-test
options.extend(['-c', content_constraints])
options.extend(['-c', constraints])
return pip + ['install', '--disable-pip-version-check'] + options
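# Illustrative (added comment): when requirements files exist, this returns a command like
#   pip install --disable-pip-version-check -r <...>/requirements/<command>.txt <packages> \
#       [-c <content constraints>] -c <...>/requirements/constraints.txt
# with content constraints listed first so they take priority, and None when there is
# nothing to install.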
def command_shell(args):
"""
:type args: ShellConfig
"""
if args.delegate:
raise Delegate()
install_command_requirements(args)
if args.inject_httptester:
inject_httptester(args)
cmd = create_shell_command(['bash', '-i'])
run_command(args, cmd)
def command_posix_integration(args):
"""
:type args: PosixIntegrationConfig
"""
handle_layout_messages(data_context().content.integration_messages)
inventory_relative_path = get_inventory_relative_path(args)
inventory_path = os.path.join(ANSIBLE_TEST_DATA_ROOT, os.path.basename(inventory_relative_path))
all_targets = tuple(walk_posix_integration_targets(include_hidden=True))
internal_targets = command_integration_filter(args, all_targets)
command_integration_filtered(args, internal_targets, all_targets, inventory_path)
def command_network_integration(args):
"""
:type args: NetworkIntegrationConfig
"""
handle_layout_messages(data_context().content.integration_messages)
inventory_relative_path = get_inventory_relative_path(args)
template_path = os.path.join(ANSIBLE_TEST_CONFIG_ROOT, os.path.basename(inventory_relative_path)) + '.template'
if args.inventory:
inventory_path = os.path.join(data_context().content.root, data_context().content.integration_path, args.inventory)
else:
inventory_path = os.path.join(data_context().content.root, inventory_relative_path)
if args.no_temp_workdir:
# temporary solution to keep DCI tests working
inventory_exists = os.path.exists(inventory_path)
else:
inventory_exists = os.path.isfile(inventory_path)
if not args.explain and not args.platform and not inventory_exists:
raise ApplicationError(
'Inventory not found: %s\n'
'Use --inventory to specify the inventory path.\n'
'Use --platform to provision resources and generate an inventory file.\n'
'See also inventory template: %s' % (inventory_path, template_path)
)
check_inventory(args, inventory_path)
delegate_inventory(args, inventory_path)
all_targets = tuple(walk_network_integration_targets(include_hidden=True))
internal_targets = command_integration_filter(args, all_targets, init_callback=network_init)
instances = [] # type: t.List[WrappedThread]
if args.platform:
get_python_path(args, args.python_executable) # initialize before starting threads
configs = dict((config['platform_version'], config) for config in args.metadata.instance_config)
for platform_version in args.platform:
platform, version = platform_version.split('/', 1)
config = configs.get(platform_version)
if not config:
continue
instance = WrappedThread(functools.partial(network_run, args, platform, version, config))
instance.daemon = True
instance.start()
instances.append(instance)
while any(instance.is_alive() for instance in instances):
time.sleep(1)
remotes = [instance.wait_for_result() for instance in instances]
inventory = network_inventory(remotes)
display.info('>>> Inventory: %s\n%s' % (inventory_path, inventory.strip()), verbosity=3)
if not args.explain:
write_text_file(inventory_path, inventory)
success = False
try:
command_integration_filtered(args, internal_targets, all_targets, inventory_path)
success = True
finally:
if args.remote_terminate == 'always' or (args.remote_terminate == 'success' and success):
for instance in instances:
instance.result.stop()
def network_init(args, internal_targets): # type: (NetworkIntegrationConfig, t.Tuple[IntegrationTarget, ...]) -> None
"""Initialize platforms for network integration tests."""
if not args.platform:
return
if args.metadata.instance_config is not None:
return
platform_targets = set(a for target in internal_targets for a in target.aliases if a.startswith('network/'))
instances = [] # type: t.List[WrappedThread]
# generate an ssh key (if needed) up front once, instead of for each instance
SshKey(args)
for platform_version in args.platform:
platform, version = platform_version.split('/', 1)
platform_target = 'network/%s/' % platform
if platform_target not in platform_targets:
display.warning('Skipping "%s" because selected tests do not target the "%s" platform.' % (
platform_version, platform))
continue
instance = WrappedThread(functools.partial(network_start, args, platform, version))
instance.daemon = True
instance.start()
instances.append(instance)
while any(instance.is_alive() for instance in instances):
time.sleep(1)
args.metadata.instance_config = [instance.wait_for_result() for instance in instances]
def network_start(args, platform, version):
"""
:type args: NetworkIntegrationConfig
:type platform: str
:type version: str
:rtype: AnsibleCoreCI
"""
core_ci = AnsibleCoreCI(args, platform, version, stage=args.remote_stage, provider=args.remote_provider)
core_ci.start()
return core_ci.save()
def network_run(args, platform, version, config):
"""
:type args: NetworkIntegrationConfig
:type platform: str
:type version: str
:type config: dict[str, str]
:rtype: AnsibleCoreCI
"""
core_ci = AnsibleCoreCI(args, platform, version, stage=args.remote_stage, provider=args.remote_provider, load=False)
core_ci.load(config)
core_ci.wait()
manage = ManageNetworkCI(core_ci)
manage.wait()
return core_ci
def network_inventory(remotes):
"""
:type remotes: list[AnsibleCoreCI]
:rtype: str
"""
groups = dict([(remote.platform, []) for remote in remotes])
net = []
for remote in remotes:
options = dict(
ansible_host=remote.connection.hostname,
ansible_user=remote.connection.username,
ansible_ssh_private_key_file=os.path.abspath(remote.ssh_key.key),
)
settings = get_network_settings(remote.args, remote.platform, remote.version)
options.update(settings.inventory_vars)
groups[remote.platform].append(
'%s %s' % (
remote.name.replace('.', '-'),
' '.join('%s="%s"' % (k, options[k]) for k in sorted(options)),
)
)
net.append(remote.platform)
groups['net:children'] = net
template = ''
for group in groups:
hosts = '\n'.join(groups[group])
template += textwrap.dedent("""
[%s]
%s
""") % (group, hosts)
inventory = template
return inventory
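# Illustrative sample of the generated inventory (added comment; platform and host
# names are made up):
#   [vyos]
#   vyos-1-14 ansible_host="203.0.113.10" ansible_ssh_private_key_file="..." ansible_user="ansible"
#
#   [net:children]
#   vyos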
def command_windows_integration(args):
"""
:type args: WindowsIntegrationConfig
"""
handle_layout_messages(data_context().content.integration_messages)
inventory_relative_path = get_inventory_relative_path(args)
template_path = os.path.join(ANSIBLE_TEST_CONFIG_ROOT, os.path.basename(inventory_relative_path)) + '.template'
if args.inventory:
inventory_path = os.path.join(data_context().content.root, data_context().content.integration_path, args.inventory)
else:
inventory_path = os.path.join(data_context().content.root, inventory_relative_path)
if not args.explain and not args.windows and not os.path.isfile(inventory_path):
raise ApplicationError(
'Inventory not found: %s\n'
'Use --inventory to specify the inventory path.\n'
'Use --windows to provision resources and generate an inventory file.\n'
'See also inventory template: %s' % (inventory_path, template_path)
)
check_inventory(args, inventory_path)
delegate_inventory(args, inventory_path)
all_targets = tuple(walk_windows_integration_targets(include_hidden=True))
internal_targets = command_integration_filter(args, all_targets, init_callback=windows_init)
instances = [] # type: t.List[WrappedThread]
pre_target = None
post_target = None
httptester_id = None
if args.windows:
get_python_path(args, args.python_executable) # initialize before starting threads
configs = dict((config['platform_version'], config) for config in args.metadata.instance_config)
for version in args.windows:
config = configs['windows/%s' % version]
instance = WrappedThread(functools.partial(windows_run, args, version, config))
instance.daemon = True
instance.start()
instances.append(instance)
while any(instance.is_alive() for instance in instances):
time.sleep(1)
remotes = [instance.wait_for_result() for instance in instances]
inventory = windows_inventory(remotes)
display.info('>>> Inventory: %s\n%s' % (inventory_path, inventory.strip()), verbosity=3)
if not args.explain:
write_text_file(inventory_path, inventory)
use_httptester = args.httptester and any('needs/httptester/' in target.aliases for target in internal_targets)
# if running under Docker delegation, the httptester may have already been started
docker_httptester = bool(os.environ.get("HTTPTESTER", False))
if use_httptester and not docker_available() and not docker_httptester:
display.warning('Assuming --disable-httptester since `docker` is not available.')
elif use_httptester:
if docker_httptester:
# we are running in a Docker container that is linked to the httptester container, we just need to
# forward these requests to the linked hostname
first_host = HTTPTESTER_HOSTS[0]
ssh_options = ["-R", "8080:%s:80" % first_host, "-R", "8443:%s:443" % first_host]
else:
# we are running directly and need to start the httptester container ourselves and forward the ports
# from there manually; set inject_httptester so the HTTPTESTER env var is set during the run
args.inject_httptester = True
httptester_id, ssh_options = start_httptester(args)
# to get this SSH command to run in the background we need to set to run in background (-f) and disable
# the pty allocation (-T)
ssh_options.insert(0, "-fT")
# create a script that will continue to run in the background until the script is deleted, this will
# cleanup and close the connection
def forward_ssh_ports(target):
"""
:type target: IntegrationTarget
"""
if 'needs/httptester/' not in target.aliases:
return
for remote in [r for r in remotes if r.version != '2008']:
manage = ManageWindowsCI(remote)
manage.upload(os.path.join(ANSIBLE_TEST_DATA_ROOT, 'setup', 'windows-httptester.ps1'), watcher_path)
# We cannot pass an array of strings with -File, so we just use a delimiter for multiple values
script = "powershell.exe -NoProfile -ExecutionPolicy Bypass -File .\\%s -Hosts \"%s\"" \
% (watcher_path, "|".join(HTTPTESTER_HOSTS))
if args.verbosity > 3:
script += " -Verbose"
manage.ssh(script, options=ssh_options, force_pty=False)
def cleanup_ssh_ports(target):
"""
:type target: IntegrationTarget
"""
if 'needs/httptester/' not in target.aliases:
return
for remote in [r for r in remotes if r.version != '2008']:
# delete the tmp file that keeps the http-tester alive
manage = ManageWindowsCI(remote)
manage.ssh("cmd.exe /c \"del %s /F /Q\"" % watcher_path, force_pty=False)
watcher_path = "ansible-test-http-watcher-%s.ps1" % time.time()
pre_target = forward_ssh_ports
post_target = cleanup_ssh_ports
def run_playbook(playbook, run_playbook_vars): # type: (str, t.Dict[str, t.Any]) -> None
playbook_path = os.path.join(ANSIBLE_TEST_DATA_ROOT, 'playbooks', playbook)
command = ['ansible-playbook', '-i', inventory_path, playbook_path, '-e', json.dumps(run_playbook_vars)]
if args.verbosity:
command.append('-%s' % ('v' * args.verbosity))
env = ansible_environment(args)
intercept_command(args, command, '', env, disable_coverage=True)
remote_temp_path = None
if args.coverage and not args.coverage_check:
# Create the remote directory that is writable by everyone. Use Ansible to talk to the remote host.
remote_temp_path = 'C:\\ansible_test_coverage_%s' % time.time()
playbook_vars = {'remote_temp_path': remote_temp_path}
run_playbook('windows_coverage_setup.yml', playbook_vars)
success = False
try:
command_integration_filtered(args, internal_targets, all_targets, inventory_path, pre_target=pre_target,
post_target=post_target, remote_temp_path=remote_temp_path)
success = True
finally:
if httptester_id:
docker_rm(args, httptester_id)
if remote_temp_path:
# Zip up the coverage files that were generated and fetch it back to localhost.
with tempdir() as local_temp_path:
playbook_vars = {'remote_temp_path': remote_temp_path, 'local_temp_path': local_temp_path}
run_playbook('windows_coverage_teardown.yml', playbook_vars)
for filename in os.listdir(local_temp_path):
with open_zipfile(os.path.join(local_temp_path, filename)) as coverage_zip:
coverage_zip.extractall(ResultType.COVERAGE.path)
if args.remote_terminate == 'always' or (args.remote_terminate == 'success' and success):
for instance in instances:
instance.result.stop()
# noinspection PyUnusedLocal
def windows_init(args, internal_targets): # pylint: disable=locally-disabled, unused-argument
"""
:type args: WindowsIntegrationConfig
:type internal_targets: tuple[IntegrationTarget]
"""
if not args.windows:
return
if args.metadata.instance_config is not None:
return
instances = [] # type: t.List[WrappedThread]
for version in args.windows:
instance = WrappedThread(functools.partial(windows_start, args, version))
instance.daemon = True
instance.start()
instances.append(instance)
while any(instance.is_alive() for instance in instances):
time.sleep(1)
args.metadata.instance_config = [instance.wait_for_result() for instance in instances]
def windows_start(args, version):
"""
:type args: WindowsIntegrationConfig
:type version: str
:rtype: AnsibleCoreCI
"""
core_ci = AnsibleCoreCI(args, 'windows', version, stage=args.remote_stage, provider=args.remote_provider)
core_ci.start()
return core_ci.save()
def windows_run(args, version, config):
"""
:type args: WindowsIntegrationConfig
:type version: str
:type config: dict[str, str]
:rtype: AnsibleCoreCI
"""
core_ci = AnsibleCoreCI(args, 'windows', version, stage=args.remote_stage, provider=args.remote_provider, load=False)
core_ci.load(config)
core_ci.wait()
manage = ManageWindowsCI(core_ci)
manage.wait()
return core_ci
def windows_inventory(remotes):
"""
:type remotes: list[AnsibleCoreCI]
:rtype: str
"""
hosts = []
for remote in remotes:
options = dict(
ansible_host=remote.connection.hostname,
ansible_user=remote.connection.username,
ansible_password=remote.connection.password,
ansible_port=remote.connection.port,
)
# used for the connection_windows_ssh test target
if remote.ssh_key:
options["ansible_ssh_private_key_file"] = os.path.abspath(remote.ssh_key.key)
if remote.name == 'windows-2008':
options.update(
# force 2008 to use PSRP for the connection plugin
ansible_connection='psrp',
ansible_psrp_auth='basic',
ansible_psrp_cert_validation='ignore',
)
elif remote.name == 'windows-2016':
options.update(
# force 2016 to use NTLM + HTTP message encryption
ansible_connection='winrm',
ansible_winrm_server_cert_validation='ignore',
ansible_winrm_transport='ntlm',
ansible_winrm_scheme='http',
ansible_port='5985',
)
else:
options.update(
ansible_connection='winrm',
ansible_winrm_server_cert_validation='ignore',
)
hosts.append(
'%s %s' % (
remote.name.replace('/', '_'),
' '.join('%s="%s"' % (k, options[k]) for k in sorted(options)),
)
)
template = """
[windows]
%s
# support winrm binary module tests (temporary solution)
[testhost:children]
windows
"""
template = textwrap.dedent(template)
inventory = template % ('\n'.join(hosts))
return inventory
def command_integration_filter(args, # type: TIntegrationConfig
targets, # type: t.Iterable[TIntegrationTarget]
init_callback=None, # type: t.Callable[[TIntegrationConfig, t.Tuple[TIntegrationTarget, ...]], None]
): # type: (...) -> t.Tuple[TIntegrationTarget, ...]
"""Filter the given integration test targets."""
targets = tuple(target for target in targets if 'hidden/' not in target.aliases)
changes = get_changes_filter(args)
# special behavior when the --changed-all-target target is selected based on changes
if args.changed_all_target in changes:
# act as though the --changed-all-target target was in the include list
if args.changed_all_mode == 'include' and args.changed_all_target not in args.include:
args.include.append(args.changed_all_target)
args.delegate_args += ['--include', args.changed_all_target]
# act as though the --changed-all-target target was in the exclude list
elif args.changed_all_mode == 'exclude' and args.changed_all_target not in args.exclude:
args.exclude.append(args.changed_all_target)
require = args.require + changes
exclude = args.exclude
internal_targets = walk_internal_targets(targets, args.include, exclude, require)
environment_exclude = get_integration_filter(args, internal_targets)
environment_exclude += cloud_filter(args, internal_targets)
if environment_exclude:
exclude += environment_exclude
internal_targets = walk_internal_targets(targets, args.include, exclude, require)
if not internal_targets:
raise AllTargetsSkipped()
if args.start_at and not any(target.name == args.start_at for target in internal_targets):
raise ApplicationError('Start at target matches nothing: %s' % args.start_at)
if init_callback:
init_callback(args, internal_targets)
cloud_init(args, internal_targets)
vars_file_src = os.path.join(data_context().content.root, data_context().content.integration_vars_path)
if os.path.exists(vars_file_src):
def integration_config_callback(files): # type: (t.List[t.Tuple[str, str]]) -> None
"""
Add the integration config vars file to the payload file list.
This will preserve the file during delegation even if the file is ignored by source control.
"""
files.append((vars_file_src, data_context().content.integration_vars_path))
data_context().register_payload_callback(integration_config_callback)
if args.delegate:
raise Delegate(require=require, exclude=exclude, integration_targets=internal_targets)
install_command_requirements(args)
return internal_targets
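# Added note: when --delegate is in effect, the Delegate exception raised above unwinds
# to the ansible-test entry point, which re-invokes the same command inside the selected
# environment (docker/remote/venv) with the computed require/exclude lists preserved.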
def command_integration_filtered(args, targets, all_targets, inventory_path, pre_target=None, post_target=None,
remote_temp_path=None):
"""
:type args: IntegrationConfig
:type targets: tuple[IntegrationTarget]
:type all_targets: tuple[IntegrationTarget]
:type inventory_path: str
:type pre_target: (IntegrationTarget) -> None | None
:type post_target: (IntegrationTarget) -> None | None
:type remote_temp_path: str | None
"""
found = False
passed = []
failed = []
targets_iter = iter(targets)
all_targets_dict = dict((target.name, target) for target in all_targets)
setup_errors = []
setup_targets_executed = set()
for target in all_targets:
for setup_target in target.setup_once + target.setup_always:
if setup_target not in all_targets_dict:
setup_errors.append('Target "%s" contains invalid setup target: %s' % (target.name, setup_target))
if setup_errors:
raise ApplicationError('Found %d invalid setup aliases:\n%s' % (len(setup_errors), '\n'.join(setup_errors)))
check_pyyaml(args, args.python_version)
test_dir = os.path.join(ResultType.TMP.path, 'output_dir')
if not args.explain and any('needs/ssh/' in target.aliases for target in targets):
max_tries = 20
display.info('SSH service required for tests. Checking to make sure we can connect.')
for i in range(1, max_tries + 1):
try:
run_command(args, ['ssh', '-o', 'BatchMode=yes', 'localhost', 'id'], capture=True)
display.info('SSH service responded.')
break
except SubprocessError:
if i == max_tries:
raise
seconds = 3
display.warning('SSH service not responding. Waiting %d second(s) before checking again.' % seconds)
time.sleep(seconds)
# Windows is different as Ansible execution is done locally but the host is remote
if args.inject_httptester and not isinstance(args, WindowsIntegrationConfig):
inject_httptester(args)
start_at_task = args.start_at_task
results = {}
current_environment = None # type: t.Optional[EnvironmentDescription]
# common temporary directory path that will be valid on both the controller and the remote
# it must be common because it will be referenced in environment variables that are shared across multiple hosts
common_temp_path = '/tmp/ansible-test-%s' % ''.join(random.choice(string.ascii_letters + string.digits) for _idx in range(8))
setup_common_temp_dir(args, common_temp_path)
try:
for target in targets_iter:
if args.start_at and not found:
found = target.name == args.start_at
if not found:
continue
if args.list_targets:
print(target.name)
continue
tries = 2 if args.retry_on_error else 1
verbosity = args.verbosity
cloud_environment = get_cloud_environment(args, target)
original_environment = current_environment if current_environment else EnvironmentDescription(args)
current_environment = None
display.info('>>> Environment Description\n%s' % original_environment, verbosity=3)
try:
while tries:
tries -= 1
try:
if cloud_environment:
cloud_environment.setup_once()
run_setup_targets(args, test_dir, target.setup_once, all_targets_dict, setup_targets_executed, inventory_path, common_temp_path, False)
start_time = time.time()
run_setup_targets(args, test_dir, target.setup_always, all_targets_dict, setup_targets_executed, inventory_path, common_temp_path, True)
if not args.explain:
# create a fresh test directory for each test target
remove_tree(test_dir)
make_dirs(test_dir)
if pre_target:
pre_target(target)
try:
if target.script_path:
command_integration_script(args, target, test_dir, inventory_path, common_temp_path,
remote_temp_path=remote_temp_path)
else:
command_integration_role(args, target, start_at_task, test_dir, inventory_path,
common_temp_path, remote_temp_path=remote_temp_path)
start_at_task = None
finally:
if post_target:
post_target(target)
end_time = time.time()
results[target.name] = dict(
name=target.name,
type=target.type,
aliases=target.aliases,
modules=target.modules,
run_time_seconds=int(end_time - start_time),
setup_once=target.setup_once,
setup_always=target.setup_always,
coverage=args.coverage,
coverage_label=args.coverage_label,
python_version=args.python_version,
)
break
except SubprocessError:
if cloud_environment:
cloud_environment.on_failure(target, tries)
if not original_environment.validate(target.name, throw=False):
raise
if not tries:
raise
display.warning('Retrying test target "%s" with maximum verbosity.' % target.name)
display.verbosity = args.verbosity = 6
start_time = time.time()
current_environment = EnvironmentDescription(args)
end_time = time.time()
EnvironmentDescription.check(original_environment, current_environment, target.name, throw=True)
results[target.name]['validation_seconds'] = int(end_time - start_time)
passed.append(target)
except Exception as ex:
failed.append(target)
if args.continue_on_error:
display.error(ex)
continue
display.notice('To resume at this test target, use the option: --start-at %s' % target.name)
next_target = next(targets_iter, None)
if next_target:
display.notice('To resume after this test target, use the option: --start-at %s' % next_target.name)
raise
finally:
display.verbosity = args.verbosity = verbosity
finally:
if not args.explain:
if args.coverage:
coverage_temp_path = os.path.join(common_temp_path, ResultType.COVERAGE.name)
coverage_save_path = ResultType.COVERAGE.path
for filename in os.listdir(coverage_temp_path):
shutil.copy(os.path.join(coverage_temp_path, filename), os.path.join(coverage_save_path, filename))
remove_tree(common_temp_path)
result_name = '%s-%s.json' % (
args.command, re.sub(r'[^0-9]', '-', str(datetime.datetime.utcnow().replace(microsecond=0))))
data = dict(
targets=results,
)
write_json_test_results(ResultType.DATA, result_name, data)
if failed:
raise ApplicationError('The %d integration test(s) listed below (out of %d) failed. See error output above for details:\n%s' % (
len(failed), len(passed) + len(failed), '\n'.join(target.name for target in failed)))
def start_httptester(args):
"""
:type args: EnvironmentConfig
:rtype: str, list[str]
"""
# map ports from remote -> localhost -> container
# passing through localhost is only used when ansible-test is not already running inside a docker container
ports = [
dict(
remote=8080,
container=80,
),
dict(
remote=8088,
container=88,
),
dict(
remote=8443,
container=443,
),
dict(
remote=8749,
container=749,
),
]
container_id = get_docker_container_id()
if not container_id:
for item in ports:
item['localhost'] = get_available_port()
docker_pull(args, args.httptester)
httptester_id = run_httptester(args, dict((port['localhost'], port['container']) for port in ports if 'localhost' in port))
if container_id:
container_host = get_docker_container_ip(args, httptester_id)
display.info('Found httptester container address: %s' % container_host, verbosity=1)
else:
container_host = get_docker_hostname()
ssh_options = []
for port in ports:
ssh_options += ['-R', '%d:%s:%d' % (port['remote'], container_host, port.get('localhost', port['container']))]
return httptester_id, ssh_options
def run_httptester(args, ports=None):
"""
:type args: EnvironmentConfig
:type ports: dict[int, int] | None
:rtype: str
"""
options = [
'--detach',
'--env', 'KRB5_PASSWORD=%s' % args.httptester_krb5_password,
]
if ports:
for localhost_port, container_port in ports.items():
options += ['-p', '%d:%d' % (localhost_port, container_port)]
network = get_docker_preferred_network_name(args)
if is_docker_user_defined_network(network):
# network-scoped aliases are only supported for containers in user defined networks
for alias in HTTPTESTER_HOSTS:
options.extend(['--network-alias', alias])
httptester_id = docker_run(args, args.httptester, options=options)[0]
if args.explain:
httptester_id = 'httptester_id'
else:
httptester_id = httptester_id.strip()
return httptester_id
def inject_httptester(args):
"""
:type args: CommonConfig
"""
comment = ' # ansible-test httptester\n'
append_lines = ['127.0.0.1 %s%s' % (host, comment) for host in HTTPTESTER_HOSTS]
hosts_path = '/etc/hosts'
original_lines = read_text_file(hosts_path).splitlines(True)
if not any(line.endswith(comment) for line in original_lines):
write_text_file(hosts_path, ''.join(original_lines + append_lines))
# determine which forwarding mechanism to use
pfctl = find_executable('pfctl', required=False)
iptables = find_executable('iptables', required=False)
if pfctl:
kldload = find_executable('kldload', required=False)
if kldload:
try:
run_command(args, ['kldload', 'pf'], capture=True)
except SubprocessError:
pass # already loaded
rules = '''
rdr pass inet proto tcp from any to any port 80 -> 127.0.0.1 port 8080
rdr pass inet proto tcp from any to any port 88 -> 127.0.0.1 port 8088
rdr pass inet proto tcp from any to any port 443 -> 127.0.0.1 port 8443
rdr pass inet proto tcp from any to any port 749 -> 127.0.0.1 port 8749
'''
cmd = ['pfctl', '-ef', '-']
try:
run_command(args, cmd, capture=True, data=rules)
except SubprocessError:
pass # non-zero exit status on success
elif iptables:
ports = [
(80, 8080),
(88, 8088),
(443, 8443),
(749, 8749),
]
for src, dst in ports:
rule = ['-o', 'lo', '-p', 'tcp', '--dport', str(src), '-j', 'REDIRECT', '--to-port', str(dst)]
try:
# check for existing rule
cmd = ['iptables', '-t', 'nat', '-C', 'OUTPUT'] + rule
run_command(args, cmd, capture=True)
except SubprocessError:
# append rule when it does not exist
cmd = ['iptables', '-t', 'nat', '-A', 'OUTPUT'] + rule
run_command(args, cmd, capture=True)
else:
raise ApplicationError('No supported port forwarding mechanism detected.')
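# Illustrative effect (added comment): after inject_httptester runs, /etc/hosts contains
# one line per HTTPTESTER_HOSTS entry, e.g.
#   127.0.0.1 ansible.http.tests # ansible-test httptester
# and local traffic to ports 80/88/443/749 is redirected to 8080/8088/8443/8749
# via pfctl (macOS/BSD) or iptables (Linux).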
def run_setup_targets(args, test_dir, target_names, targets_dict, targets_executed, inventory_path, temp_path, always):
"""
:type args: IntegrationConfig
:type test_dir: str
:type target_names: list[str]
:type targets_dict: dict[str, IntegrationTarget]
:type targets_executed: set[str]
:type inventory_path: str
:type temp_path: str
:type always: bool
"""
for target_name in target_names:
if not always and target_name in targets_executed:
continue
target = targets_dict[target_name]
if not args.explain:
# create a fresh test directory for each test target
remove_tree(test_dir)
make_dirs(test_dir)
if target.script_path:
command_integration_script(args, target, test_dir, inventory_path, temp_path)
else:
command_integration_role(args, target, None, test_dir, inventory_path, temp_path)
targets_executed.add(target_name)
def integration_environment(args, target, test_dir, inventory_path, ansible_config, env_config):
"""
:type args: IntegrationConfig
:type target: IntegrationTarget
:type test_dir: str
:type inventory_path: str
:type ansible_config: str | None
:type env_config: CloudEnvironmentConfig | None
:rtype: dict[str, str]
"""
env = ansible_environment(args, ansible_config=ansible_config)
if args.inject_httptester:
env.update(dict(
HTTPTESTER='1',
KRB5_PASSWORD=args.httptester_krb5_password,
))
callback_plugins = ['junit'] + (env_config.callback_plugins or [] if env_config else [])
integration = dict(
JUNIT_OUTPUT_DIR=ResultType.JUNIT.path,
ANSIBLE_CALLBACK_ENABLED=','.join(sorted(set(callback_plugins))),
ANSIBLE_TEST_CI=args.metadata.ci_provider or get_ci_provider().code,
ANSIBLE_TEST_COVERAGE='check' if args.coverage_check else ('yes' if args.coverage else ''),
OUTPUT_DIR=test_dir,
INVENTORY_PATH=os.path.abspath(inventory_path),
)
if args.debug_strategy:
env.update(dict(ANSIBLE_STRATEGY='debug'))
if 'non_local/' in target.aliases:
if args.coverage:
display.warning('Skipping coverage reporting on Ansible modules for non-local test: %s' % target.name)
env.update(dict(ANSIBLE_TEST_REMOTE_INTERPRETER=''))
env.update(integration)
return env
def command_integration_script(args, target, test_dir, inventory_path, temp_path, remote_temp_path=None):
"""
:type args: IntegrationConfig
:type target: IntegrationTarget
:type test_dir: str
:type inventory_path: str
:type temp_path: str
:type remote_temp_path: str | None
"""
display.info('Running %s integration test script' % target.name)
env_config = None
if isinstance(args, PosixIntegrationConfig):
cloud_environment = get_cloud_environment(args, target)
if cloud_environment:
env_config = cloud_environment.get_environment_config()
with integration_test_environment(args, target, inventory_path) as test_env:
cmd = ['./%s' % os.path.basename(target.script_path)]
if args.verbosity:
cmd.append('-' + ('v' * args.verbosity))
env = integration_environment(args, target, test_dir, test_env.inventory_path, test_env.ansible_config, env_config)
cwd = os.path.join(test_env.targets_dir, target.relative_path)
env.update(dict(
# support use of adhoc ansible commands in collections without specifying the fully qualified collection name
ANSIBLE_PLAYBOOK_DIR=cwd,
))
if env_config and env_config.env_vars:
env.update(env_config.env_vars)
with integration_test_config_file(args, env_config, test_env.integration_dir) as config_path:
if config_path:
cmd += ['-e', '@%s' % config_path]
module_coverage = 'non_local/' not in target.aliases
intercept_command(args, cmd, target_name=target.name, env=env, cwd=cwd, temp_path=temp_path,
remote_temp_path=remote_temp_path, module_coverage=module_coverage)
def command_integration_role(args, target, start_at_task, test_dir, inventory_path, temp_path, remote_temp_path=None):
"""
:type args: IntegrationConfig
:type target: IntegrationTarget
:type start_at_task: str | None
:type test_dir: str
:type inventory_path: str
:type temp_path: str
:type remote_temp_path: str | None
"""
display.info('Running %s integration test role' % target.name)
env_config = None
vars_files = []
variables = dict(
output_dir=test_dir,
)
if isinstance(args, WindowsIntegrationConfig):
hosts = 'windows'
gather_facts = False
variables.update(dict(
win_output_dir=r'C:\ansible_testing',
))
elif isinstance(args, NetworkIntegrationConfig):
hosts = target.network_platform
gather_facts = False
else:
hosts = 'testhost'
gather_facts = True
cloud_environment = get_cloud_environment(args, target)
if cloud_environment:
env_config = cloud_environment.get_environment_config()
with integration_test_environment(args, target, inventory_path) as test_env:
if os.path.exists(test_env.vars_file):
vars_files.append(os.path.relpath(test_env.vars_file, test_env.integration_dir))
play = dict(
hosts=hosts,
gather_facts=gather_facts,
vars_files=vars_files,
vars=variables,
roles=[
target.name,
],
)
if env_config:
if env_config.ansible_vars:
variables.update(env_config.ansible_vars)
play.update(dict(
environment=env_config.env_vars,
module_defaults=env_config.module_defaults,
))
playbook = json.dumps([play], indent=4, sort_keys=True)
with named_temporary_file(args=args, directory=test_env.integration_dir, prefix='%s-' % target.name, suffix='.yml', content=playbook) as playbook_path:
filename = os.path.basename(playbook_path)
display.info('>>> Playbook: %s\n%s' % (filename, playbook.strip()), verbosity=3)
cmd = ['ansible-playbook', filename, '-i', os.path.relpath(test_env.inventory_path, test_env.integration_dir)]
if start_at_task:
cmd += ['--start-at-task', start_at_task]
if args.tags:
cmd += ['--tags', args.tags]
if args.skip_tags:
cmd += ['--skip-tags', args.skip_tags]
if args.diff:
cmd += ['--diff']
if isinstance(args, NetworkIntegrationConfig):
if args.testcase:
cmd += ['-e', 'testcase=%s' % args.testcase]
if args.verbosity:
cmd.append('-' + ('v' * args.verbosity))
env = integration_environment(args, target, test_dir, test_env.inventory_path, test_env.ansible_config, env_config)
cwd = test_env.integration_dir
env.update(dict(
# support use of adhoc ansible commands in collections without specifying the fully qualified collection name
ANSIBLE_PLAYBOOK_DIR=cwd,
))
env['ANSIBLE_ROLES_PATH'] = test_env.targets_dir
module_coverage = 'non_local/' not in target.aliases
intercept_command(args, cmd, target_name=target.name, env=env, cwd=cwd, temp_path=temp_path,
remote_temp_path=remote_temp_path, module_coverage=module_coverage)
def get_changes_filter(args):
"""
:type args: TestConfig
:rtype: list[str]
"""
paths = detect_changes(args)
if not args.metadata.change_description:
if paths:
changes = categorize_changes(args, paths, args.command)
else:
changes = ChangeDescription()
args.metadata.change_description = changes
if paths is None:
return [] # change detection not enabled, do not filter targets
if not paths:
raise NoChangesDetected()
if args.metadata.change_description.targets is None:
raise NoTestsForChanges()
return args.metadata.change_description.targets
def detect_changes(args):
"""
:type args: TestConfig
:rtype: list[str] | None
"""
if args.changed:
paths = get_ci_provider().detect_changes(args)
elif args.changed_from or args.changed_path:
paths = args.changed_path or []
if args.changed_from:
paths += read_text_file(args.changed_from).splitlines()
else:
return None # change detection not enabled
if paths is None:
return None # act as though change detection not enabled, do not filter targets
display.info('Detected changes in %d file(s).' % len(paths))
for path in paths:
display.info(path, verbosity=1)
return paths
def get_integration_filter(args, targets):
"""
:type args: IntegrationConfig
:type targets: tuple[IntegrationTarget]
:rtype: list[str]
"""
if args.docker:
return get_integration_docker_filter(args, targets)
if args.remote:
return get_integration_remote_filter(args, targets)
return get_integration_local_filter(args, targets)
def common_integration_filter(args, targets, exclude):
"""
:type args: IntegrationConfig
:type targets: tuple[IntegrationTarget]
:type exclude: list[str]
"""
override_disabled = set(target for target in args.include if target.startswith('disabled/'))
if not args.allow_disabled:
skip = 'disabled/'
override = [target.name for target in targets if override_disabled & set(target.aliases)]
skipped = [target.name for target in targets if skip in target.aliases and target.name not in override]
if skipped:
exclude.extend(skipped)
display.warning('Excluding tests marked "%s" which require --allow-disabled or prefixing with "disabled/": %s'
% (skip.rstrip('/'), ', '.join(skipped)))
override_unsupported = set(target for target in args.include if target.startswith('unsupported/'))
if not args.allow_unsupported:
skip = 'unsupported/'
override = [target.name for target in targets if override_unsupported & set(target.aliases)]
skipped = [target.name for target in targets if skip in target.aliases and target.name not in override]
if skipped:
exclude.extend(skipped)
display.warning('Excluding tests marked "%s" which require --allow-unsupported or prefixing with "unsupported/": %s'
% (skip.rstrip('/'), ', '.join(skipped)))
override_unstable = set(target for target in args.include if target.startswith('unstable/'))
if args.allow_unstable_changed:
override_unstable |= set(args.metadata.change_description.focused_targets or [])
if not args.allow_unstable:
skip = 'unstable/'
override = [target.name for target in targets if override_unstable & set(target.aliases)]
skipped = [target.name for target in targets if skip in target.aliases and target.name not in override]
if skipped:
exclude.extend(skipped)
display.warning('Excluding tests marked "%s" which require --allow-unstable or prefixing with "unstable/": %s'
% (skip.rstrip('/'), ', '.join(skipped)))
# only skip a Windows test if using --windows and all the --windows versions are defined in the aliases as skip/windows/%s
if isinstance(args, WindowsIntegrationConfig) and args.windows:
all_skipped = []
not_skipped = []
for target in targets:
if "skip/windows/" not in target.aliases:
continue
skip_valid = []
skip_missing = []
for version in args.windows:
if "skip/windows/%s/" % version in target.aliases:
skip_valid.append(version)
else:
skip_missing.append(version)
if skip_missing and skip_valid:
not_skipped.append((target.name, skip_valid, skip_missing))
elif skip_valid:
all_skipped.append(target.name)
if all_skipped:
exclude.extend(all_skipped)
skip_aliases = ["skip/windows/%s/" % w for w in args.windows]
display.warning('Excluding tests marked "%s" which are set to skip with --windows %s: %s'
% ('", "'.join(skip_aliases), ', '.join(args.windows), ', '.join(all_skipped)))
if not_skipped:
for target, skip_valid, skip_missing in not_skipped:
# warn when failing to skip due to lack of support for skipping only some versions
display.warning('Including test "%s" which was marked to skip for --windows %s but not %s.'
% (target, ', '.join(skip_valid), ', '.join(skip_missing)))
def get_integration_local_filter(args, targets):
"""
:type args: IntegrationConfig
:type targets: tuple[IntegrationTarget]
:rtype: list[str]
"""
exclude = []
common_integration_filter(args, targets, exclude)
if not args.allow_root and os.getuid() != 0:
skip = 'needs/root/'
skipped = [target.name for target in targets if skip in target.aliases]
if skipped:
exclude.append(skip)
display.warning('Excluding tests marked "%s" which require --allow-root or running as root: %s'
% (skip.rstrip('/'), ', '.join(skipped)))
override_destructive = set(target for target in args.include if target.startswith('destructive/'))
if not args.allow_destructive:
skip = 'destructive/'
override = [target.name for target in targets if override_destructive & set(target.aliases)]
skipped = [target.name for target in targets if skip in target.aliases and target.name not in override]
if skipped:
exclude.extend(skipped)
display.warning('Excluding tests marked "%s" which require --allow-destructive or prefixing with "destructive/" to run locally: %s'
% (skip.rstrip('/'), ', '.join(skipped)))
exclude_targets_by_python_version(targets, args.python_version, exclude)
return exclude
def get_integration_docker_filter(args, targets):
"""
:type args: IntegrationConfig
:type targets: tuple[IntegrationTarget]
:rtype: list[str]
"""
exclude = []
common_integration_filter(args, targets, exclude)
skip = 'skip/docker/'
skipped = [target.name for target in targets if skip in target.aliases]
if skipped:
exclude.append(skip)
display.warning('Excluding tests marked "%s" which cannot run under docker: %s'
% (skip.rstrip('/'), ', '.join(skipped)))
if not args.docker_privileged:
skip = 'needs/privileged/'
skipped = [target.name for target in targets if skip in target.aliases]
if skipped:
exclude.append(skip)
display.warning('Excluding tests marked "%s" which require --docker-privileged to run under docker: %s'
% (skip.rstrip('/'), ', '.join(skipped)))
python_version = get_python_version(args, get_docker_completion(), args.docker_raw)
exclude_targets_by_python_version(targets, python_version, exclude)
return exclude
def get_integration_remote_filter(args, targets):
"""
:type args: IntegrationConfig
:type targets: tuple[IntegrationTarget]
:rtype: list[str]
"""
remote = args.parsed_remote
exclude = []
common_integration_filter(args, targets, exclude)
skips = {
'skip/%s' % remote.platform: remote.platform,
'skip/%s/%s' % (remote.platform, remote.version): '%s %s' % (remote.platform, remote.version),
'skip/%s%s' % (remote.platform, remote.version): '%s %s' % (remote.platform, remote.version), # legacy syntax, use above format
}
if remote.arch:
skips.update({
'skip/%s/%s' % (remote.arch, remote.platform): '%s on %s' % (remote.platform, remote.arch),
'skip/%s/%s/%s' % (remote.arch, remote.platform, remote.version): '%s %s on %s' % (remote.platform, remote.version, remote.arch),
})
for skip, description in skips.items():
skipped = [target.name for target in targets if skip in target.skips]
if skipped:
exclude.append(skip + '/')
display.warning('Excluding tests marked "%s" which are not supported on %s: %s' % (skip, description, ', '.join(skipped)))
python_version = get_python_version(args, get_remote_completion(), args.remote)
exclude_targets_by_python_version(targets, python_version, exclude)
return exclude
def exclude_targets_by_python_version(targets, python_version, exclude):
"""
:type targets: tuple[IntegrationTarget]
:type python_version: str
:type exclude: list[str]
"""
if not python_version:
display.warning('Python version unknown. Unable to skip tests based on Python version.')
return
python_major_version = python_version.split('.')[0]
skip = 'skip/python%s/' % python_version
skipped = [target.name for target in targets if skip in target.aliases]
if skipped:
exclude.append(skip)
display.warning('Excluding tests marked "%s" which are not supported on python %s: %s'
% (skip.rstrip('/'), python_version, ', '.join(skipped)))
skip = 'skip/python%s/' % python_major_version
skipped = [target.name for target in targets if skip in target.aliases]
if skipped:
exclude.append(skip)
display.warning('Excluding tests marked "%s" which are not supported on python %s: %s'
% (skip.rstrip('/'), python_version, ', '.join(skipped)))
def get_python_version(args, configs, name):
"""
:type args: EnvironmentConfig
:type configs: dict[str, dict[str, str]]
:type name: str
:rtype: str
"""
config = configs.get(name, {})
config_python = config.get('python')
if not config or not config_python:
if args.python:
return args.python
display.warning('No Python version specified. '
'Use completion config or the --python option to specify one.', unique=True)
return '' # failure to provide a version may result in failures or reduced functionality later
supported_python_versions = config_python.split(',')
default_python_version = supported_python_versions[0]
if args.python and args.python not in supported_python_versions:
raise ApplicationError('Python %s is not supported by %s. Supported Python version(s) are: %s' % (
args.python, name, ', '.join(sorted(supported_python_versions))))
python_version = args.python or default_python_version
return python_version
def get_python_interpreter(args, configs, name):
"""
:type args: EnvironmentConfig
:type configs: dict[str, dict[str, str]]
:type name: str
:rtype: str
"""
if args.python_interpreter:
return args.python_interpreter
config = configs.get(name, {})
if not config:
if args.python:
guess = 'python%s' % args.python
else:
guess = 'python'
display.warning('Using "%s" as the Python interpreter. '
'Use completion config or the --python-interpreter option to specify the path.' % guess, unique=True)
return guess
python_version = get_python_version(args, configs, name)
python_dir = config.get('python_dir', '/usr/bin')
python_interpreter = os.path.join(python_dir, 'python%s' % python_version)
python_interpreter = config.get('python%s' % python_version, python_interpreter)
return python_interpreter
class EnvironmentDescription:
"""Description of current running environment."""
def __init__(self, args):
"""Initialize snapshot of environment configuration.
:type args: IntegrationConfig
"""
self.args = args
if self.args.explain:
self.data = {}
return
warnings = []
versions = ['']
versions += SUPPORTED_PYTHON_VERSIONS
versions += list(set(v.split('.')[0] for v in SUPPORTED_PYTHON_VERSIONS))
version_check = os.path.join(ANSIBLE_TEST_DATA_ROOT, 'versions.py')
python_paths = dict((v, find_executable('python%s' % v, required=False)) for v in sorted(versions))
pip_paths = dict((v, find_executable('pip%s' % v, required=False)) for v in sorted(versions))
program_versions = dict((v, self.get_version([python_paths[v], version_check], warnings)) for v in sorted(python_paths) if python_paths[v])
pip_interpreters = dict((v, self.get_shebang(pip_paths[v])) for v in sorted(pip_paths) if pip_paths[v])
known_hosts_hash = self.get_hash(os.path.expanduser('~/.ssh/known_hosts'))
for version in sorted(versions):
self.check_python_pip_association(version, python_paths, pip_paths, pip_interpreters, warnings)
for warning in warnings:
display.warning(warning, unique=True)
self.data = dict(
python_paths=python_paths,
pip_paths=pip_paths,
program_versions=program_versions,
pip_interpreters=pip_interpreters,
known_hosts_hash=known_hosts_hash,
warnings=warnings,
)
@staticmethod
def check_python_pip_association(version, python_paths, pip_paths, pip_interpreters, warnings):
"""
:type version: str
:type python_paths: dict[str, str]
:type pip_paths: dict[str, str]
:type pip_interpreters: dict[str, str]
:type warnings: list[str]
"""
python_label = 'Python%s' % (' %s' % version if version else '')
pip_path = pip_paths.get(version)
python_path = python_paths.get(version)
if not python_path and not pip_path:
# neither python nor pip is present for this version
return
if not python_path:
warnings.append('A %s interpreter was not found, yet a matching pip was found at "%s".' % (python_label, pip_path))
return
if not pip_path:
warnings.append('A %s interpreter was found at "%s", yet a matching pip was not found.' % (python_label, python_path))
return
pip_shebang = pip_interpreters.get(version)
match = re.search(r'#!\s*(?P<command>[^\s]+)', pip_shebang)
if not match:
warnings.append('A %s pip was found at "%s", but it does not have a valid shebang: %s' % (python_label, pip_path, pip_shebang))
return
pip_interpreter = os.path.realpath(match.group('command'))
python_interpreter = os.path.realpath(python_path)
if pip_interpreter == python_interpreter:
return
try:
identical = filecmp.cmp(pip_interpreter, python_interpreter)
except OSError:
identical = False
if identical:
return
warnings.append('A %s pip was found at "%s", but it uses interpreter "%s" instead of "%s".' % (
python_label, pip_path, pip_interpreter, python_interpreter))
def __str__(self):
"""
:rtype: str
"""
return json.dumps(self.data, sort_keys=True, indent=4)
def validate(self, target_name, throw):
"""
:type target_name: str
:type throw: bool
:rtype: bool
"""
current = EnvironmentDescription(self.args)
return self.check(self, current, target_name, throw)
@staticmethod
def check(original, current, target_name, throw):
"""
:type original: EnvironmentDescription
:type current: EnvironmentDescription
:type target_name: str
:type throw: bool
:rtype: bool
"""
original_json = str(original)
current_json = str(current)
if original_json == current_json:
return True
unified_diff = '\n'.join(difflib.unified_diff(
a=original_json.splitlines(),
b=current_json.splitlines(),
fromfile='original.json',
tofile='current.json',
lineterm='',
))
message = ('Test target "%s" has changed the test environment!\n'
'If these changes are necessary, they must be reverted before the test finishes.\n'
'>>> Original Environment\n'
'%s\n'
'>>> Current Environment\n'
'%s\n'
'>>> Environment Diff\n'
'%s'
% (target_name, original_json, current_json, unified_diff))
if throw:
raise ApplicationError(message)
display.error(message)
return False
@staticmethod
def get_version(command, warnings):
"""
:type command: list[str]
:type warnings: list[str]
:rtype: list[str] | None
"""
try:
stdout, stderr = raw_command(command, capture=True, cmd_verbosity=2)
except SubprocessError as ex:
warnings.append(u'%s' % ex)
return None # all failures are equal, we don't care why it failed, only that it did
return [line.strip() for line in ((stdout or '').strip() + (stderr or '').strip()).splitlines()]
@staticmethod
def get_shebang(path):
"""
:type path: str
:rtype: str
"""
with open_text_file(path) as script_fd:
return script_fd.readline().strip()
@staticmethod
def get_hash(path):
"""
:type path: str
:rtype: str | None
"""
if not os.path.exists(path):
return None
file_hash = hashlib.md5()
file_hash.update(read_binary_file(path))
return file_hash.hexdigest()
class NoChangesDetected(ApplicationWarning):
"""Exception when change detection was performed, but no changes were found."""
def __init__(self):
super(NoChangesDetected, self).__init__('No changes detected.')
class NoTestsForChanges(ApplicationWarning):
"""Exception when changes detected, but no tests trigger as a result."""
def __init__(self):
super(NoTestsForChanges, self).__init__('No tests found for detected changes.')
class Delegate(Exception):
"""Trigger command delegation."""
def __init__(self, exclude=None, require=None, integration_targets=None):
"""
:type exclude: list[str] | None
:type require: list[str] | None
:type integration_targets: tuple[IntegrationTarget] | None
"""
super(Delegate, self).__init__()
self.exclude = exclude or []
self.require = require or []
self.integration_targets = integration_targets or tuple()
class AllTargetsSkipped(ApplicationWarning):
"""All targets skipped."""
def __init__(self):
super(AllTargetsSkipped, self).__init__('All targets skipped.')
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 70,024 |
Add example of how to use the required_if and other parts of argspec in modules
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below, add suggestions to wording or structure -->
Based on a request in IRC:
Currently, the only documentation about `required_if` exists in the Windows module development doc:
https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_general_windows.html
There may be other parts of the argspec we don't currently document clearly as well.
Steps to fix this include:
- [ ] add that detail to https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_general.html#developing-modules-general
- [ ] Add examples as well. This can be taken from the source code - https://github.com/ansible/ansible/blob/devel/lib/ansible/module_utils/common/validation.py#L198-L237
- [ ] Verify ^^^ that page is still accurate, especially the example code.
- [ ] Add relevant details to this page as well https://docs.ansible.com/ansible/latest/dev_guide/developing_program_flow_modules.html#ansiblemodule
<!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? -->
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
<!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure -->
docs.ansible.com
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. OS version, browser, etc. -->
##### ADDITIONAL INFORMATION
<!--- Describe how this improves the documentation, e.g. before/after situation or screenshots -->
<!--- HINT: You can paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/70024
|
https://github.com/ansible/ansible/pull/72335
|
569d937df8cfc2bd0e4c9f62b5b25021ae5e5cc1
|
01d207a3e344ab8e2c79085eb1a9a1efd8f65f80
| 2020-06-11T19:09:10Z |
python
| 2020-11-04T17:17:46Z |
docs/docsite/rst/dev_guide/developing_modules_general.rst
|
.. _developing_modules_general:
.. _module_dev_tutorial_sample:
**************************
Developing Ansible modules
**************************
A module is a reusable, standalone script that Ansible runs on your behalf, either locally or remotely. Modules interact with your local machine, an API, or a remote system to perform specific tasks like changing a database password or spinning up a cloud instance. Each module can be used by the Ansible API, or by the :command:`ansible` or :command:`ansible-playbook` programs. A module provides a defined interface, accepts arguments, and returns information to Ansible by printing a JSON string to stdout before exiting.
If you need functionality that is not available in any of the thousands of Ansible modules found in collections, you can easily write your own custom module. When you write a module for local use, you can choose any programming language and follow your own rules. Use this topic to learn how to create an Ansible module in Python. After you create a module, you must add it locally to the appropriate directory so that Ansible can find and execute it. For details about adding a module locally, see :ref:`developing_locally`.
.. contents::
:local:
.. _environment_setup:
Preparing an environment for developing Ansible modules
=======================================================
Installing prerequisites via apt (Ubuntu)
-----------------------------------------
Due to dependencies (for example ansible -> paramiko -> pynacl -> libffi):
.. code:: bash
sudo apt update
sudo apt install build-essential libssl-dev libffi-dev python-dev
Creating a development environment (platform-agnostic steps)
------------------------------------------------------------
1. Clone the Ansible repository:
``$ git clone https://github.com/ansible/ansible.git``
2. Change into the repository root directory: ``$ cd ansible``
3. Create a virtual environment: ``$ python3 -m venv venv`` (or, for
Python 2, ``$ virtualenv venv``; note that this requires you to install
the virtualenv package: ``$ pip install virtualenv``)
4. Activate the virtual environment: ``$ . venv/bin/activate``
5. Install development requirements:
``$ pip install -r requirements.txt``
6. Run the environment setup script for each new dev shell process:
``$ . hacking/env-setup``
.. note:: After the initial setup above, every time you are ready to start
developing Ansible you should be able to just run the following from the
root of the Ansible repo:
``$ . venv/bin/activate && . hacking/env-setup``
Creating an info or a facts module
==================================
Ansible gathers information about the target machines using facts modules, and gathers information on other objects or files using info modules.
If you find yourself trying to add ``state: info`` or ``state: list`` to an existing module, that is often a sign that a new dedicated ``_facts`` or ``_info`` module is needed.
In Ansible 2.8 and onwards, there are two types of information modules: ``*_info`` and ``*_facts``.
If a module is named ``<something>_facts``, it should be because its main purpose is returning ``ansible_facts``. Do not name modules that do not do this with ``_facts``.
Only use ``ansible_facts`` for information that is specific to the host machine, for example network interfaces and their configuration, which operating system and which programs are installed.
Modules that query/return general information (and not ``ansible_facts``) should be named ``_info``.
General information is non-host specific information, for example information on online/cloud services (you can access different accounts for the same online service from the same host), or information on VMs and containers accessible from the machine, or information on individual files or programs.
Info and facts modules are just like any other Ansible module, with a few minor requirements:
1. They MUST be named ``<something>_info`` or ``<something>_facts``, where <something> is singular.
2. Info ``*_info`` modules MUST return in the form of the :ref:`result dictionary<common_return_values>` so other modules can access them.
3. Fact ``*_facts`` modules MUST return in the ``ansible_facts`` field of the :ref:`result dictionary<common_return_values>` so other modules can access them.
4. They MUST support :ref:`check_mode <check_mode_dry>`.
5. They MUST NOT make any changes to the system.
6. They MUST document the :ref:`return fields<return_block>` and :ref:`examples<examples_block>`.
To create an info module:
1. Navigate to the correct directory for your new module: ``$ cd lib/ansible/modules/``. If you are developing a module in a collection, ``$ cd plugins/modules/`` inside your collection development tree.
2. Create your new module file: ``$ touch my_test_info.py``.
3. Paste the content below into your new info module file. It includes the :ref:`required Ansible format and documentation <developing_modules_documenting>` and some example code.
4. Modify and extend the code to do what you want your new info module to do. See the :ref:`programming tips <developing_modules_best_practices>` and :ref:`Python 3 compatibility <developing_python_3>` pages for pointers on writing clean and concise module code.
.. literalinclude:: ../../../../examples/scripts/my_test_info.py
:language: python
Use the same process to create a facts module.
.. literalinclude:: ../../../../examples/scripts/my_test_facts.py
:language: python
Creating a module
=================
To create a module:
1. Navigate to the correct directory for your new module: ``$ cd lib/ansible/modules/``. If you are developing a module in a :ref:`collection <developing_collections>`, ``$ cd plugins/modules/`` inside your collection development tree.
2. Create your new module file: ``$ touch my_test.py``.
3. Paste the content below into your new module file. It includes the :ref:`required Ansible format and documentation <developing_modules_documenting>` and some example code.
4. Modify and extend the code to do what you want your new module to do. See the :ref:`programming tips <developing_modules_best_practices>` and :ref:`Python 3 compatibility <developing_python_3>` pages for pointers on writing clean and concise module code.
.. literalinclude:: ../../../../examples/scripts/my_test.py
:language: python
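The ``AnsibleModule`` constructor also accepts argument-spec restrictions such as ``required_if``, ``required_together``, and ``mutually_exclusive``, which are useful when one parameter is only required for particular values of another. A minimal sketch of ``required_if`` (the parameter names here are illustrative, not part of any real module):

.. code-block:: python

   from ansible.module_utils.basic import AnsibleModule

   module = AnsibleModule(
       argument_spec=dict(
           state=dict(type='str', choices=['present', 'absent'], default='present'),
           path=dict(type='path'),
       ),
       # path is only required when state is 'present'
       required_if=[
           ('state', 'present', ('path',)),
       ],
   )

   module.exit_json(changed=False)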
Verifying your module code
==========================
After you modify the sample code above to do what you want, you can try out your module.
Our :ref:`debugging tips <debugging_modules>` will help if you run into bugs as you verify your module code.
Verifying your module code locally
----------------------------------
If your module does not need to target a remote host, you can quickly and easily exercise your code locally like this:
- Create an arguments file, a basic JSON config file that passes parameters to your module so that you can run it. Name the arguments file ``/tmp/args.json`` and add the following content:
.. code:: json
{
"ANSIBLE_MODULE_ARGS": {
"name": "hello",
"new": true
}
}
- If you are using a virtual environment (which is highly recommended for
development) activate it: ``$ . venv/bin/activate``
- Set up the environment for development: ``$ . hacking/env-setup``
- Run your test module locally and directly:
``$ python -m ansible.modules.my_test /tmp/args.json``
This should return output like this:
.. code:: json
{"changed": true, "state": {"original_message": "hello", "new_message": "goodbye"}, "invocation": {"module_args": {"name": "hello", "new": true}}}
Verifying your module code in a playbook
----------------------------------------
The next step in verifying your new module is to consume it with an Ansible playbook.
- Create a playbook in any directory: ``$ touch testmod.yml``
- Add the following to the new playbook file::
- name: test my new module
hosts: localhost
tasks:
- name: run the new module
my_test:
name: 'hello'
new: true
register: testout
- name: dump test output
debug:
msg: '{{ testout }}'
- Run the playbook and analyze the output: ``$ ansible-playbook ./testmod.yml``
Testing your newly-created module
=================================
The following two examples will get you started with testing your module code. Please review our :ref:`testing <developing_testing>` section for more detailed
information, including instructions for :ref:`testing module documentation <testing_module_documentation>`, adding :ref:`integration tests <testing_integration>`, and more.
.. note::
Every new module and plugin should have integration tests, even if the tests cannot be run on Ansible CI infrastructure.
In this case, the tests should be marked with the ``unsupported`` alias in `aliases file <https://docs.ansible.com/ansible/latest/dev_guide/testing/sanity/integration-aliases.html>`_.
Performing sanity tests
-----------------------
You can run through Ansible's sanity checks in a container:
``$ ansible-test sanity -v --docker --python 2.7 MODULE_NAME``
.. note::
Note that this example requires Docker to be installed and running. If you'd rather not use a container for this, you can choose to use ``--venv`` instead of ``--docker``.
Adding unit tests
-----------------
You can add unit tests for your module in ``./test/units/modules``. You must first set up your testing environment. In this example, we're using Python 3.5.
- Install the requirements (outside of your virtual environment): ``$ pip3 install -r ./test/lib/ansible_test/_data/requirements/units.txt``
- Run ``. hacking/env-setup``
- To run all tests do the following: ``$ ansible-test units --python 3.5``. If you are using a CI environment, these tests will run automatically.
.. note:: Ansible uses pytest for unit testing.
To run pytest against a single test module, you can run the following command. Ensure that you are providing the correct path of the test module:
``$ pytest -r a --cov=. --cov-report=html --fulltrace --color yes test/units/modules/.../test/my_test.py``
Contributing back to Ansible
============================
If you would like to contribute to ``ansible-base`` by adding a new feature or fixing a bug, `create a fork <https://help.github.com/articles/fork-a-repo/>`_ of the ansible/ansible repository and develop against a new feature branch using the ``devel`` branch as a starting point. When you have a good working code change, you can submit a pull request to the Ansible repository by selecting your feature branch as a source and the Ansible devel branch as a target.
If you want to contribute a module to an :ref:`Ansible collection <contributing_maintained_collections>`, review our :ref:`submission checklist <developing_modules_checklist>`, :ref:`programming tips <developing_modules_best_practices>`, and :ref:`strategy for maintaining Python 2 and Python 3 compatibility <developing_python_3>`, as well as information about :ref:`testing <developing_testing>` before you open a pull request.
The :ref:`Community Guide <ansible_community_guide>` covers how to open a pull request and what happens next.
Communication and development support
=====================================
Join the IRC channel ``#ansible-devel`` on freenode for discussions
surrounding Ansible development.
For questions and discussions pertaining to using the Ansible product,
use the ``#ansible`` channel.
For more specific IRC channels look at :ref:`Community Guide, Communicating <communication_irc>`.
Credit
======
Thank you to Thomas Stringer (`@trstringer <https://github.com/trstringer>`_) for contributing source
material for this topic.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 70,024 |
Add example of how to use the required_if and other parts of argspec in modules
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below, add suggestions to wording or structure -->
Based on a request in IRC:
Currently, the only documentation about `required_if` exists in the Windows module development doc:
https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_general_windows.html
There may be other parts of the argspec we don't currently document clearly as well.
Steps to fix this include:
- [ ] add that detail to https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_general.html#developing-modules-general
- [ ] Add examples as well. This can be taken from the source code - https://github.com/ansible/ansible/blob/devel/lib/ansible/module_utils/common/validation.py#L198-L237
- [ ] Verify ^^^ that page is still accurate, especially the example code.
- [ ] Add relevant details to this page as well https://docs.ansible.com/ansible/latest/dev_guide/developing_program_flow_modules.html#ansiblemodule
<!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? -->
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
<!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure -->
docs.ansible.com
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. OS version, browser, etc. -->
##### ADDITIONAL INFORMATION
<!--- Describe how this improves the documentation, e.g. before/after situation or screenshots -->
<!--- HINT: You can paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/70024
|
https://github.com/ansible/ansible/pull/72335
|
569d937df8cfc2bd0e4c9f62b5b25021ae5e5cc1
|
01d207a3e344ab8e2c79085eb1a9a1efd8f65f80
| 2020-06-11T19:09:10Z |
python
| 2020-11-04T17:17:46Z |
docs/docsite/rst/dev_guide/developing_program_flow_modules.rst
|
.. _flow_modules:
.. _developing_program_flow_modules:
***************************
Ansible module architecture
***************************
If you are working on the ``ansible-base`` code, writing an Ansible module, or developing an action plugin, you may need to understand how Ansible's program flow executes. If you are just using Ansible Modules in playbooks, you can skip this section.
.. contents::
:local:
.. _flow_types_of_modules:
Types of modules
================
Ansible supports several different types of modules in its code base. Some of
these are for backwards compatibility and others are to enable flexibility.
.. _flow_action_plugins:
Action plugins
--------------
Action plugins look like modules to anyone writing a playbook. Usage documentation for most action plugins lives inside a module of the same name. Some action plugins do all the work, with the module providing only documentation. Some action plugins execute modules. The ``normal`` action plugin executes modules that don't have special action plugins. Action plugins always execute on the controller.
Some action plugins do all their work on the controller. For
example, the :ref:`debug <debug_module>` action plugin (which prints text for
the user to see) and the :ref:`assert <assert_module>` action plugin (which
tests whether values in a playbook satisfy certain criteria) execute entirely on the controller.
Most action plugins set up some values on the controller, then invoke an
actual module on the managed node that does something with these values. For example, the :ref:`template <template_module>` action plugin takes values from
the user to construct a file in a temporary location on the controller using
variables from the playbook environment. It then transfers the temporary file
to a temporary file on the remote system. After that, it invokes the
:ref:`copy module <copy_module>` which operates on the remote system to move the file
into its final location, sets file permissions, and so on.
.. _flow_new_style_modules:
New-style modules
-----------------
All of the modules that ship with Ansible fall into this category. While you can write modules in any language, all official modules (shipped with Ansible) use either Python or PowerShell.
New-style modules have the arguments to the module embedded inside of them in
some manner. Old-style modules must copy a separate file over to the
managed node, which is less efficient as it requires two over-the-wire
connections instead of only one.
.. _flow_python_modules:
Python
^^^^^^
New-style Python modules use the :ref:`Ansiballz` framework for constructing
modules. These modules use imports from :code:`ansible.module_utils` to pull in
boilerplate module code, such as argument parsing, formatting of return
values as :term:`JSON`, and various file operations.
.. note:: In Ansible, up to version 2.0.x, the official Python modules used the
:ref:`module_replacer` framework. For module authors, :ref:`Ansiballz` is
largely a superset of :ref:`module_replacer` functionality, so you usually
do not need to understand the differences between them.
.. _flow_powershell_modules:
PowerShell
^^^^^^^^^^
New-style PowerShell modules use the :ref:`module_replacer` framework for
constructing modules. These modules get a library of PowerShell code embedded
in them before being sent to the managed node.
.. _flow_jsonargs_modules:
JSONARGS modules
----------------
These modules are scripts that include the string
``<<INCLUDE_ANSIBLE_MODULE_JSON_ARGS>>`` in their body.
This string is replaced with the JSON-formatted argument string. These modules typically set a variable to that value like this:
.. code-block:: python
json_arguments = """<<INCLUDE_ANSIBLE_MODULE_JSON_ARGS>>"""
Which is expanded as:
.. code-block:: python
json_arguments = """{"param1": "test's quotes", "param2": "\"To be or not to be\" - Hamlet"}"""
.. note:: Ansible outputs a :term:`JSON` string with bare quotes. Double quotes are
used to quote string values, double quotes inside of string values are
backslash escaped, and single quotes may appear unescaped inside of
a string value. To use JSONARGS, your scripting language must have a way
to handle this type of string. The example uses Python's triple quoted
strings to do this. Other scripting languages may have a similar quote
character that won't be confused by any quotes in the JSON or it may
allow you to define your own start-of-quote and end-of-quote characters.
If the language doesn't give you any of these then you'll need to write
a :ref:`non-native JSON module <flow_want_json_modules>` or
:ref:`Old-style module <flow_old_style_modules>` instead.
These modules typically parse the contents of ``json_arguments`` using a JSON
library and then use them as native variables throughout the code.
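As a minimal sketch of that step, continuing the Python example above (``param1`` is just the illustrative key from the expanded string):

.. code-block:: python

   import json

   json_arguments = """<<INCLUDE_ANSIBLE_MODULE_JSON_ARGS>>"""

   # After Ansible's substitution, json_arguments holds the JSON-encoded
   # module parameters, which can be parsed into a native dictionary.
   params = json.loads(json_arguments)
   param1 = params.get('param1')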
.. _flow_want_json_modules:
Non-native want JSON modules
----------------------------
If a module has the string ``WANT_JSON`` in it anywhere, Ansible treats
it as a non-native module that accepts a filename as its only command line
parameter. The filename is for a temporary file containing a :term:`JSON`
string containing the module's parameters. The module needs to open the file,
read and parse the parameters, operate on the data, and print its return data
as a JSON encoded dictionary to stdout before exiting.
These types of modules are self-contained entities. As of Ansible 2.1, Ansible
only modifies them to change a shebang line if present.
.. seealso:: Examples of non-native modules written in Ruby are in the `Ansible
for Rubyists <https://github.com/ansible/ansible-for-rubyists>`_ repository.
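A minimal sketch of such a module in Python (any language with a JSON library would do; the ``msg`` field is just an example return value):

.. code-block:: python

   #!/usr/bin/python
   # WANT_JSON - this marker tells Ansible to pass a JSON args file.

   import json
   import sys

   # The only command line argument is the path to a temporary JSON file.
   with open(sys.argv[1]) as args_file:
       params = json.load(args_file)

   # ... operate on the parameters here ...

   # Return data is printed to stdout as a JSON encoded dictionary.
   print(json.dumps(dict(changed=False, msg='received %d parameter(s)' % len(params))))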
.. _flow_binary_modules:
Binary modules
--------------
From Ansible 2.2 onwards, modules may also be small binary programs. Ansible
doesn't perform any magic to make these portable to different systems so they
may be specific to the system on which they were compiled or require other
binary runtime dependencies. Despite these drawbacks, you may have
to compile a custom module against a specific binary
library if that's the only way to get access to certain resources.
Binary modules take their arguments and return data to Ansible in the same
way as :ref:`want JSON modules <flow_want_json_modules>`.
.. seealso:: One example of a `binary module
<https://github.com/ansible/ansible/blob/devel/test/integration/targets/binary_modules/library/helloworld.go>`_
written in Go.
.. _flow_old_style_modules:
Old-style modules
-----------------
Old-style modules are similar to
:ref:`want JSON modules <flow_want_json_modules>`, except that the file that
they take contains ``key=value`` pairs for their parameters instead of
:term:`JSON`. Ansible decides that a module is old-style when it doesn't have
any of the markers that would show that it is one of the other types.
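A rough sketch of the extra parsing an old-style module must do itself (assuming ``shlex``-compatible quoting in the args file):

.. code-block:: python

   #!/usr/bin/python

   import json
   import shlex
   import sys

   # The args file contains key=value pairs instead of JSON.
   with open(sys.argv[1]) as args_file:
       args_data = args_file.read()

   params = {}
   for item in shlex.split(args_data):
       if '=' in item:
           key, value = item.split('=', 1)
           params[key] = value

   print(json.dumps(dict(changed=False, params=sorted(params))))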
.. _flow_how_modules_are_executed:
How modules are executed
========================
When a user uses :program:`ansible` or :program:`ansible-playbook`, they
specify a task to execute. The task is usually the name of a module along
with several parameters to be passed to the module. Ansible takes these
values and processes them in various ways before they are finally executed on
the remote machine.
.. _flow_executor_task_executor:
Executor/task_executor
----------------------
The TaskExecutor receives the module name and parameters that were parsed from
the :term:`playbook <playbooks>` (or from the command line in the case of
:command:`/usr/bin/ansible`). It uses the name to decide whether it's looking
at a module or an :ref:`Action Plugin <flow_action_plugins>`. If it's
a module, it loads the :ref:`Normal Action Plugin <flow_normal_action_plugin>`
and passes the name, variables, and other information about the task and play
to that Action Plugin for further processing.
.. _flow_normal_action_plugin:
The ``normal`` action plugin
----------------------------
The ``normal`` action plugin executes the module on the remote host. It is
the primary coordinator of much of the work to actually execute the module on
the managed machine.
* It loads the appropriate connection plugin for the task, which then transfers
or executes as needed to create a connection to that host.
* It adds any internal Ansible properties to the module's parameters (for
instance, the ones that pass along ``no_log`` to the module).
* It works with other plugins (connection, shell, become, other action plugins)
to create any temporary files on the remote machine and
cleans up afterwards.
* It pushes the module and module parameters to the
remote host, although the :ref:`module_common <flow_executor_module_common>`
code described in the next section decides which format
those will take.
* It handles any special cases regarding modules (for instance, async
execution, or complications around Windows modules that must have the same names as Python modules, so that internal calling of modules from other Action Plugins works.)
Much of this functionality comes from the `ActionBase` class,
which lives in :file:`plugins/action/__init__.py`. It uses the
``Connection`` and ``Shell`` objects to do its work.
.. note::
When :term:`tasks <tasks>` are run with the ``async:`` parameter, Ansible
uses the ``async`` Action Plugin instead of the ``normal`` Action Plugin
to invoke it. That program flow is currently not documented. Read the
source for information on how that works.
.. _flow_executor_module_common:
Executor/module_common.py
-------------------------
Code in :file:`executor/module_common.py` assembles the module
to be shipped to the managed node. The module is first read in, then examined
to determine its type:
* :ref:`PowerShell <flow_powershell_modules>` and :ref:`JSON-args modules <flow_jsonargs_modules>` are passed through :ref:`Module Replacer <module_replacer>`.
* New-style :ref:`Python modules <flow_python_modules>` are assembled by :ref:`Ansiballz`.
* :ref:`Non-native-want-JSON <flow_want_json_modules>`, :ref:`Binary modules <flow_binary_modules>`, and :ref:`Old-Style modules <flow_old_style_modules>` aren't touched by either of these and pass through unchanged.
After the assembling step, one final
modification is made to all modules that have a shebang line. Ansible checks
whether the interpreter in the shebang line has a specific path configured via
an ``ansible_$X_interpreter`` inventory variable. If it does, Ansible
substitutes that path for the interpreter path given in the module. After
this, Ansible returns the complete module data and the module type to the
:ref:`Normal Action <flow_normal_action_plugin>` which continues execution of
the module.
Assembler frameworks
--------------------
Ansible supports two assembler frameworks: Ansiballz and the older Module Replacer.
.. _module_replacer:
Module Replacer framework
^^^^^^^^^^^^^^^^^^^^^^^^^
The Module Replacer framework is the original framework implementing new-style
modules, and is still used for PowerShell modules. It is essentially a preprocessor (like the C Preprocessor for those
familiar with that programming language). It does straight substitutions of
specific substring patterns in the module file. There are two types of
substitutions:
* Replacements that only happen in the module file. These are public
replacement strings that modules can utilize to get helpful boilerplate or
access to arguments.
- :code:`from ansible.module_utils.MOD_LIB_NAME import *` is replaced with the
contents of the :file:`ansible/module_utils/MOD_LIB_NAME.py`. These should
only be used with :ref:`new-style Python modules <flow_python_modules>`.
- :code:`#<<INCLUDE_ANSIBLE_MODULE_COMMON>>` is equivalent to
:code:`from ansible.module_utils.basic import *` and should also only apply
to new-style Python modules.
- :code:`# POWERSHELL_COMMON` substitutes the contents of
:file:`ansible/module_utils/powershell.ps1`. It should only be used with
:ref:`new-style Powershell modules <flow_powershell_modules>`.
* Replacements that are used by ``ansible.module_utils`` code. These are internal replacement patterns. They may be used internally, in the above public replacements, but shouldn't be used directly by modules.
- :code:`"<<ANSIBLE_VERSION>>"` is substituted with the Ansible version. In
:ref:`new-style Python modules <flow_python_modules>` under the
:ref:`Ansiballz` framework the proper way is to instead instantiate an
`AnsibleModule` and then access the version from
:attr:`AnsibleModule.ansible_version`.
- :code:`"<<INCLUDE_ANSIBLE_MODULE_COMPLEX_ARGS>>"` is substituted with
a string which is the Python ``repr`` of the :term:`JSON` encoded module
parameters. Using ``repr`` on the JSON string makes it safe to embed in
a Python file. In new-style Python modules under the Ansiballz framework
this is better accessed by instantiating an `AnsibleModule` and
then using :attr:`AnsibleModule.params`.
- :code:`<<SELINUX_SPECIAL_FILESYSTEMS>>` substitutes a string which is
a comma separated list of file systems which have a file system dependent
security context in SELinux. In new-style Python modules, if you really
need this you should instantiate an `AnsibleModule` and then use
:attr:`AnsibleModule._selinux_special_fs`. The variable has also changed
from a comma separated string of file system names to an actual python
list of filesystem names.
- :code:`<<INCLUDE_ANSIBLE_MODULE_JSON_ARGS>>` substitutes the module
parameters as a JSON string. Care must be taken to properly quote the
string as JSON data may contain quotes. This pattern is not substituted
in new-style Python modules as they can get the module parameters another
way.
- The string :code:`syslog.LOG_USER` is replaced wherever it occurs with the
``syslog_facility`` which was named in :file:`ansible.cfg` or any
``ansible_syslog_facility`` inventory variable that applies to this host. In
new-style Python modules this has changed slightly. If you really need to
access it, you should instantiate an `AnsibleModule` and then use
:attr:`AnsibleModule._syslog_facility` to access it. It is no longer the
actual syslog facility and is now the name of the syslog facility. See
the :ref:`documentation on internal arguments <flow_internal_arguments>`
for details.
.. _Ansiballz:
Ansiballz framework
^^^^^^^^^^^^^^^^^^^
The Ansiballz framework was adopted in Ansible 2.1 and is used for all new-style Python modules. Unlike the Module Replacer, Ansiballz uses real Python imports of things in
:file:`ansible/module_utils` instead of merely preprocessing the module. It
does this by constructing a zipfile -- which includes the module file, files
in :file:`ansible/module_utils` that are imported by the module, and some
boilerplate to pass in the module's parameters. The zipfile is then Base64
encoded and wrapped in a small Python script which decodes the Base64 encoding
and places the zipfile into a temp directory on the managed node. It then
extracts just the Ansible module script from the zip file and places that in
the temporary directory as well. Then it sets the PYTHONPATH to find Python
modules inside of the zip file and imports the Ansible module as the special name, ``__main__``.
Importing it as ``__main__`` causes Python to think that it is executing a script rather than simply
importing a module. This lets Ansible run both the wrapper script and the module code in a single copy of Python on the remote machine.
.. note::
* Ansible wraps the zipfile in the Python script for two reasons:
* for compatibility with Python 2.6 which has a less
functional version of Python's ``-m`` command line switch.
* so that pipelining will function properly. Pipelining needs to pipe the
Python module into the Python interpreter on the remote node. Python
understands scripts on stdin but does not understand zip files.
* Prior to Ansible 2.7, the module was executed via a second Python interpreter instead of being
executed inside of the same process. This change was made once Python-2.4 support was dropped
to speed up module execution.
In Ansiballz, any imports of Python modules from the
:py:mod:`ansible.module_utils` package trigger inclusion of that Python file
into the zipfile. Instances of :code:`#<<INCLUDE_ANSIBLE_MODULE_COMMON>>` in
the module are turned into :code:`from ansible.module_utils.basic import *`
and :file:`ansible/module-utils/basic.py` is then included in the zipfile.
Files that are included from :file:`module_utils` are themselves scanned for
imports of other Python modules from :file:`module_utils` to be included in
the zipfile as well.
.. warning::
At present, the Ansiballz Framework cannot determine whether an import
should be included if it is a relative import. Always use an absolute
import that has :py:mod:`ansible.module_utils` in it to allow Ansiballz to
determine that the file should be included.
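For example (``common_helpers`` is a hypothetical file in ``module_utils``):

.. code-block:: python

   # Not detected by Ansiballz; common_helpers.py would be missing
   # from the zipfile on the managed node.
   from ..module_utils import common_helpers

   # Detected by Ansiballz; the file is included in the zipfile.
   from ansible.module_utils import common_helpers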
.. _flow_passing_module_args:
Passing args
------------
Arguments are passed differently by the two frameworks:
* In :ref:`module_replacer`, module arguments are turned into a JSON-ified string and substituted into the combined module file.
* In :ref:`Ansiballz`, the JSON-ified string is part of the script which wraps the zipfile. Just before the wrapper script imports the Ansible module as ``__main__``, it monkey-patches the private, ``_ANSIBLE_ARGS`` variable in ``basic.py`` with the variable values. When a :class:`ansible.module_utils.basic.AnsibleModule` is instantiated, it parses this string and places the args into :attr:`AnsibleModule.params` where it can be accessed by the module's other code.
.. warning::
If you are writing modules, remember that the way we pass arguments is an internal implementation detail: it has changed in the past and will change again as soon as changes to the common module_utils
code allow Ansible modules to forgo using :class:`ansible.module_utils.basic.AnsibleModule`. Do not rely on the internal global ``_ANSIBLE_ARGS`` variable.
Very dynamic custom modules which need to parse arguments before they
instantiate an ``AnsibleModule`` may use ``_load_params`` to retrieve those parameters.
Although ``_load_params`` may change in breaking ways if necessary to support
changes in the code, it is likely to be more stable than either the way we pass parameters or the internal global variable.
.. note::
Prior to Ansible 2.7, the Ansible module was invoked in a second Python interpreter and the
arguments were then passed to the script over the script's stdin.
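A minimal sketch of the ``_load_params`` pattern described above (the parameter names are hypothetical; remember that ``_load_params`` is private and may change):

.. code-block:: python

   from ansible.module_utils.basic import AnsibleModule, _load_params

   # Peek at the raw parameters before the argument spec exists.
   raw_params = _load_params()

   argument_spec = dict(mode=dict(type='str', default='simple'))

   if raw_params.get('mode') == 'extended':
       # Only expose the extra option when it is actually needed.
       argument_spec['extra'] = dict(type='dict')

   module = AnsibleModule(argument_spec=argument_spec)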
.. _flow_internal_arguments:
Internal arguments
------------------
Both :ref:`module_replacer` and :ref:`Ansiballz` send additional arguments to
the module beyond those which the user specified in the playbook. These
additional arguments are internal parameters that help implement global
Ansible features. Modules often do not need to know about these explicitly as
the features are implemented in :py:mod:`ansible.module_utils.basic` but certain
features need support from the module so it's good to know about them.
The internal arguments listed here are global. If you need to add a local internal argument to a custom module, create an action plugin for that specific module - see ``_original_basename`` in the `copy action plugin <https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/action/copy.py#L329>`_ for an example.
_ansible_no_log
^^^^^^^^^^^^^^^
Boolean. Set to True whenever a parameter in a task or play specifies ``no_log``. Any module that calls :py:meth:`AnsibleModule.log` handles this automatically. If a module implements its own logging then
it needs to check this value. To access in a module, instantiate an
``AnsibleModule`` and then check the value of :attr:`AnsibleModule.no_log`.
.. note::
``no_log`` specified in a module's argument_spec is handled by a different mechanism.
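A minimal sketch of a module that does its own logging and honors the flag:

.. code-block:: python

   import logging

   from ansible.module_utils.basic import AnsibleModule

   module = AnsibleModule(argument_spec=dict(secret=dict(type='str', required=True, no_log=True)))

   if not module.no_log:
       # Skip custom logging entirely when no_log is in effect.
       logging.getLogger(__name__).info('secret is %d characters long',
                                        len(module.params['secret']))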
_ansible_debug
^^^^^^^^^^^^^^^
Boolean. Turns more verbose logging on or off and turns on logging of
external commands that the module executes. If a module uses
:py:meth:`AnsibleModule.debug` rather than :py:meth:`AnsibleModule.log` then
the messages are only logged if ``_ansible_debug`` is set to ``True``.
To set, add ``debug: True`` to :file:`ansible.cfg` or set the environment
variable :envvar:`ANSIBLE_DEBUG`. To access in a module, instantiate an
``AnsibleModule`` and access :attr:`AnsibleModule._debug`.
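A short sketch of the difference between the two logging calls:

.. code-block:: python

   from ansible.module_utils.basic import AnsibleModule

   module = AnsibleModule(argument_spec=dict())

   # Always logged (subject to no_log handling).
   module.log('starting the interesting work')

   # Only logged when _ansible_debug is true.
   module.debug('intermediate state that is only useful when debugging')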
_ansible_diff
^^^^^^^^^^^^^^^
Boolean. If a module supports it, tells the module to show a unified diff of
changes to be made to templated files. To set, pass the ``--diff`` command line
option. To access in a module, instantiate an `AnsibleModule` and access
:attr:`AnsibleModule._diff`.
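A module that supports diff mode might populate its result like this (a sketch; ``old_content`` stands in for whatever the module read from the target):

.. code-block:: python

    module = AnsibleModule(argument_spec=dict(content=dict(type='str')),
                           supports_check_mode=True)

    result = dict(changed=True)
    if module._diff:
        # old_content is a placeholder for the current content on the target
        result['diff'] = dict(before=old_content,
                              after=module.params['content'])
    module.exit_json(**result)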
_ansible_verbosity
^^^^^^^^^^^^^^^^^^
Unused. This value could be used for finer-grained control over logging.
_ansible_selinux_special_fs
^^^^^^^^^^^^^^^^^^^^^^^^^^^
List. Names of filesystems which should have a special SELinux
context. They are used by the ``AnsibleModule`` methods which operate on
files (changing attributes, moving, and copying). To set, add a comma separated string of filesystem names in :file:`ansible.cfg`::
# ansible.cfg
[selinux]
special_context_filesystems=nfs,vboxsf,fuse,ramfs,vfat
Most modules can use the built-in ``AnsibleModule`` methods to manipulate
files. To access in a module that needs to know about these special context filesystems, instantiate an ``AnsibleModule`` and examine the list in
:attr:`AnsibleModule._selinux_special_fs`.
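A sketch of such a check; ``fs_type`` would come from inspecting the mount point of the path and is assumed here:

.. code-block:: python

    module = AnsibleModule(argument_spec=dict(path=dict(type='path')))

    # fs_type is a hypothetical value derived from the path's mount point
    if fs_type in module._selinux_special_fs:
        module.warn("%s is on a filesystem with a special SELinux context"
                    % module.params['path'])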
This replaces :attr:`ansible.module_utils.basic.SELINUX_SPECIAL_FS` from
:ref:`module_replacer`. In module replacer it was a comma separated string of
filesystem names. Under Ansiballz it's an actual list.
.. versionadded:: 2.1
_ansible_syslog_facility
^^^^^^^^^^^^^^^^^^^^^^^^
This parameter controls which syslog facility Ansible module logs to. To set, change the ``syslog_facility`` value in :file:`ansible.cfg`. Most
modules should just use :meth:`AnsibleModule.log` which will then make use of
this. If a module has to use this on its own, it should instantiate an
``AnsibleModule`` and then retrieve the name of the syslog facility from
:attr:`AnsibleModule._syslog_facility`. The Ansiballz code is less hacky than the old :ref:`module_replacer` code:
.. code-block:: python
# Old module_replacer way
import syslog
syslog.openlog(NAME, 0, syslog.LOG_USER)
# New Ansiballz way
import syslog
facility_name = module._syslog_facility
facility = getattr(syslog, facility_name, syslog.LOG_USER)
syslog.openlog(NAME, 0, facility)
.. versionadded:: 2.1
_ansible_version
^^^^^^^^^^^^^^^^
This parameter passes the version of Ansible that runs the module. To access
it, a module should instantiate an ``AnsibleModule`` and then retrieve it
from :attr:`AnsibleModule.ansible_version`. This replaces
:attr:`ansible.module_utils.basic.ANSIBLE_VERSION` from
:ref:`module_replacer`.
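For example:

.. code-block:: python

    module = AnsibleModule(argument_spec={})
    module.warn("Module running under Ansible %s" % module.ansible_version)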
.. versionadded:: 2.1
.. _flow_module_return_values:
Module return values & Unsafe strings
-------------------------------------
At the end of a module's execution, it formats the data that it wants to return as a JSON string and prints the string to its stdout. The normal action plugin receives the JSON string, parses it into a Python dictionary, and returns it to the executor.
If Ansible templated every string return value, it would be vulnerable to an attack from users with access to managed nodes. If an unscrupulous user disguised malicious code as Ansible return value strings, and if those strings were then templated on the controller, Ansible could execute arbitrary code. To prevent this scenario, Ansible marks all strings inside returned data as ``Unsafe``, emitting any Jinja2 templates in the strings verbatim, not expanded by Jinja2.
Strings returned by invoking a module through ``ActionPlugin._execute_module()`` are automatically marked as ``Unsafe`` by the normal action plugin. If another action plugin retrieves information from a module through some other means, it must mark its return data as ``Unsafe`` on its own.
In case a poorly-coded action plugin fails to mark its results as "Unsafe," Ansible audits the results again when they are returned to the executor,
marking all strings as ``Unsafe``. The normal action plugin protects itself and any other code that it calls with the result data as a parameter. The check inside the executor protects the output of all other action plugins, ensuring that subsequent tasks run by Ansible will not template anything from those results either.
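An action plugin that collects strings outside of ``ActionPlugin._execute_module()`` might mark them along these lines (a minimal sketch; ``fetched_string`` is assumed to have come from the managed node by some other means):

.. code-block:: python

    from ansible.utils.unsafe_proxy import wrap_var

    # fetched_string is a placeholder for data gathered from the remote host
    results = dict(raw_output=fetched_string)
    # wrap_var recursively marks all contained strings as Unsafe
    results = wrap_var(results)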
.. _flow_special_considerations:
Special considerations
----------------------
.. _flow_pipelining:
Pipelining
^^^^^^^^^^
Ansible can transfer a module to a remote machine in one of two ways:
* it can write out the module to a temporary file on the remote host and then
use a second connection to the remote host to execute it with the
interpreter that the module needs
* or it can use what's known as pipelining to execute the module by piping it
into the remote interpreter's stdin.
Pipelining only works with modules written in Python at this time because
Ansible only knows that Python supports this mode of operation. Supporting
pipelining means that whatever format the module payload takes before being
sent over the wire must be executable by Python via stdin.
.. _flow_args_over_stdin:
Why pass args over stdin?
^^^^^^^^^^^^^^^^^^^^^^^^^
Passing arguments via stdin was chosen for the following reasons:
* When combined with :ref:`ANSIBLE_PIPELINING`, this keeps the module's arguments from
temporarily being saved onto disk on the remote machine. This makes it
harder (but not impossible) for a malicious user on the remote machine to
steal any sensitive information that may be present in the arguments.
* Command line arguments would be insecure as most systems allow unprivileged
users to read the full commandline of a process.
* Environment variables are usually more secure than the commandline but some
systems limit the total size of the environment. This could lead to
truncation of the parameters if we hit that limit.
.. _flow_ansiblemodule:
AnsibleModule
-------------
.. _argument_spec:
Argument spec
^^^^^^^^^^^^^
The ``argument_spec`` provided to ``AnsibleModule`` defines the supported arguments for a module, as well as their type, defaults and more.
Example ``argument_spec``:
.. code-block:: python
module = AnsibleModule(argument_spec=dict(
top_level=dict(
type='dict',
options=dict(
second_level=dict(
default=True,
type='bool',
)
)
)
))
This section will discuss the behavioral attributes for arguments:
type
""""
``type`` allows you to define the type of the value accepted for the argument. The default value for ``type`` is ``str``. Possible values are:
* str
* list
* dict
* bool
* int
* float
* path
* raw
* jsonarg
* json
* bytes
* bits
The ``raw`` type performs no type validation or type casting, and maintains the type of the passed value.
elements
""""""""
``elements`` works in combination with ``type`` when ``type='list'``. ``elements`` can then be defined as ``elements='int'`` or any other type, indicating that each element of the specified list should be of that type.
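Example::

    ports=dict(type='list', elements='int')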
default
"""""""
The ``default`` option sets a default value for the argument in the scenario where the argument is not provided to the module. When not specified, the default value is ``None``.
fallback
""""""""
``fallback`` accepts a ``tuple`` where the first argument is a callable (function) that will be used to perform the lookup, based on the second argument. The second argument is a list of values to be accepted by the callable.
The most common callable used is ``env_fallback`` which will allow an argument to optionally use an environment variable when the argument is not supplied.
Example::
username=dict(fallback=(env_fallback, ['ANSIBLE_NET_USERNAME']))
choices
"""""""
``choices`` accepts a list of choices that the argument will accept. The types of ``choices`` should match the ``type``.
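Example::

    state=dict(type='str', choices=['present', 'absent'])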
required
""""""""
``required`` accepts a boolean, either ``True`` or ``False``, that indicates whether the argument is required. When not specified, ``required`` defaults to ``False``. This should not be used in combination with ``default``.
no_log
""""""
``no_log`` accepts a boolean, either ``True`` or ``False``, that indicates explicitly whether or not the argument value should be masked in logs and output.
.. note::
In the absence of ``no_log``, if the parameter name appears to indicate that the argument value is a password or passphrase (such as "admin_password"), a warning will be shown and the value will be masked in logs but **not** output. To disable the warning and masking for parameters that do not contain sensitive information, set ``no_log`` to ``False``.
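Example, using an illustrative argument name::

    api_password=dict(type='str', required=True, no_log=True)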
aliases
"""""""
``aliases`` accepts a list of alternative argument names for the argument, such as the case where the argument is ``name`` but the module accepts ``aliases=['pkg']`` to allow ``pkg`` to be used interchangeably with ``name``.
options
"""""""
``options`` implements the ability to create a sub-argument_spec, where the sub options of the top level argument are also validated using the attributes discussed in this section. The example at the top of this section demonstrates use of ``options``. ``type`` or ``elements`` should be ``dict`` in this case.
apply_defaults
""""""""""""""
``apply_defaults`` works alongside ``options`` and allows the ``default`` of the sub-options to be applied even when the top-level argument is not supplied.
In the example of the ``argument_spec`` at the top of this section, it would allow ``module.params['top_level']['second_level']`` to be defined, even if the user does not provide ``top_level`` when calling the module.
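Example::

    top_level=dict(
        type='dict',
        apply_defaults=True,
        options=dict(
            second_level=dict(
                default=True,
                type='bool',
            )
        )
    )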
removed_in_version
""""""""""""""""""
``removed_in_version`` indicates which version of Ansible a deprecated argument will be removed in.
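Example, marking a hypothetical deprecated argument::

    old_option=dict(type='str', removed_in_version='2.14')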
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,316 |
Ansible dnf module is finding wrong dependencies
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Executing a dnf update using the module against RHEL8 machines, returns different output than using dnf command. From the module its returning wrong dependencies and architecture. Note that the changes for dnf.py discussed in https://github.com/ansible/ansible/pull/71726/files#diff-13b7173e93a1dc0b4f6b8e798f858a1f are already performed.
After applying the dnf.py patch, from the same host and using "upgrade-minimal --security --bugfix " flags, the module is reporting two problems in this specific case.
**`Problem 1: cannot install both net-snmp-libs-1:5.8-12.el8_1.2.x86_64 and net-snmp-libs-1:5.8-12.el8_1.x86_64`**
```
- package net-snmp-1:5.8-12.el8_1.2.x86_64 requires net-snmp-libs(x86-64) = 1:5.8-12.el8_1.2, but none of the providers can be installed
- cannot install the best update candidate for package net-snmp-libs-1:5.8-12.el8_1.x86_64
- cannot install the best update candidate for package net-snmp-1:5.8-12.el8_1.x86_64
```
**`Problem 2: systemd-libs-239-18.el8_1.7.i686 has inferior architecture`**
```
- package systemd-239-18.el8_1.7.x86_64 requires systemd-libs = 239-18.el8_1.7, but none of the providers can be installed
- cannot install both systemd-libs-239-18.el8_1.7.x86_64 and systemd-libs-239-18.el8_1.4.x86_64
- cannot install the best update candidate for package systemd-libs-239-18.el8_1.4.x86_64
- cannot install the best update candidate for package systemd-239-18.el8_1.4.x86_64
```
Executing dnf from the command line and same server it returns:
```
dnf upgrade-minimal --disablerepo=* --enablerepo=rhel-8-for-x86_64-appstream-eus-rpms,rhel-8-for-x86_64-baseos-eus-rpms,satellite-tools-6.7-for-rhel-8-x86_64-rpms
dnf upgrade --security --bugfix --disablerepo=* --enablerepo=rhel-8-for-x86_64-appstream-eus-rpms,rhel-8-for-x86_64-baseos-eus-rpms,satellite-tools-6.7-for-rhel-8-x86_64-rpms
dnf upgrade-minimal --security --bugfix --disablerepo=* --enablerepo=rhel-8-for-x86_64-appstream-eus-rpms,rhel-8-for-x86_64-baseos-eus-rpms,satellite-tools-6.7-for-rhel-8-x86_64-rpms
Transaction Summary
================================================================================================================================================
Install 3 Packages
Upgrade 38 Packages
Remove 3 Packages
Total download size: 144 M
Is this ok [y/N]: N
Operation aborted.
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
DNF module
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```ansible-playbook 2.9.7
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/var/lib/awx/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.5 (default, Sep 26 2019, 13:23:47) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
The following is the list of the enabled repositories.
```
Updating Subscription Management repositories.
Red Hat Satellite Tools 6.7 for RHEL 8 x86_64 (RPMs) 58 kB/s | 2.1 kB 00:00
Red Hat Enterprise Linux 8 for x86_64 - AppStream - Extended Update Support (RPMs) 79 kB/s | 2.8 kB 00:00
Red Hat Enterprise Linux 8 for x86_64 - BaseOS - Extended Update Support (RPMs) 64 kB/s | 2.4 kB 00:00
repo id repo name status
rhel-8-for-x86_64-appstream-eus-rpms Red Hat Enterprise Linux 8 for x86_64 - AppStream - Extended Update Sup 8,977
rhel-8-for-x86_64-baseos-eus-rpms Red Hat Enterprise Linux 8 for x86_64 - BaseOS - Extended Update Suppor 4,002
satellite-tools-6.7-for-rhel-8-x86_64-rpms Red Hat Satellite Tools 6.7 for RHEL 8 x86_64 (RPMs) 19
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
The problem appears updating RHEL8 servers
<!--- Paste example playbooks or commands between quotes below -->
The ansible code used in the task is as follows:
```
- name: Perform security update if not updating to newer minor release
dnf:
name: "*"
disablerepo: "*"
enablerepo: "{{ sat_repos }}"
security: yes
bugfix: yes
state: latest
update_only: yes
register: yum_update
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Packages should be updated in the same way of dnf command.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
<eugbd-lapp0207> (1, '\\r\\n{"msg": "Depsolve Error occured: \\\\n Problem 1: cannot install both net-snmp-libs-1:5.8-12.el8_1.2.x86_64 and net-snmp-libs-1:5.8-12.el8_1.x86_64\\\\n - package net-snmp-1:5.8-12.el8_1.2.x86_64 requires net-snmp-libs(x86-64) = 1:5.8-12.el8_1.2, but none of the providers can be installed\\\\n - cannot install the best update candidate for package net-snmp-libs-1:5.8-12.el8_1.x86_64\\\\n - cannot install the best update candidate for package net-snmp-1:5.8-12.el8_1.x86_64\\\\n Problem 2: systemd-libs-239-18.el8_1.7.i686 has inferior architecture\\\\n - package systemd-239-18.el8_1.7.x86_64 requires systemd-libs = 239-18.el8_1.7, but none of the providers can be installed\\\\n - cannot install both systemd-libs-239-18.el8_1.7.x86_64 and systemd-libs-239-18.el8_1.4.x86_64\\\\n - cannot install the best update candidate for package systemd-libs-239-18.el8_1.4.x86_64\\\\n - cannot install the best update candidate for package systemd-239-18.el8_1.4.x86_64", "failures": [], "results": [], "rc": 1, "failed": true, "exception": " File \\\\"/tmp/ansible_dnf_payload_ef23nwrs/ansible_dnf_payload.zip/ansible/modules/dnf.py\\\\", line 1158, in ensure\\\\n File \\\\"/usr/lib/python3.6/site-packages/dnf/base.py\\\\", line 780, in resolve\\\\n raise exc\\\\n", "invocation": {"module_args": {"update_cache": true, "state": "latest", "disablerepo": ["*"], "name": ["*"], "bugfix": true, "update_only": true, "enablerepo": ["rhel-8-for-x86_64-appstream-eus-rpms", "rhel-8-for-x86_64-baseos-eus-rpms", "satellite-tools-6.7-for-rhel-8-x86_64-rpms"], "security": true, "allow_downgrade": false, "autoremove": false, "disable_gpg_check": false, "disable_plugin": [], "download_only": false, "enable_plugin": [], "exclude": [], "installroot": "/", "install_repoquery": true, "install_weak_deps": true, "skip_broken": false, "validate_certs": true, "lock_timeout": 30, "allowerasing": false, "nobest": false, "conf_file": null, "disable_excludes": null, "download_dir": null, "list": null, "releasever": null}}}\\r\\n', 'Shared connection to eugbd-lapp0207 closed.\\r\\n')
```
|
https://github.com/ansible/ansible/issues/72316
|
https://github.com/ansible/ansible/pull/72483
|
5654de6fceeabb190111d5fb5d3e092a7e5d7f3b
|
d8c637da37ccf8b07b74c1cfb22adff36188a0fd
| 2020-10-23T11:00:49Z |
python
| 2020-11-04T20:13:55Z |
lib/ansible/modules/dnf.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright 2015 Cristian van Ee <cristian at cvee.org>
# Copyright 2015 Igor Gnatenko <[email protected]>
# Copyright 2018 Adam Miller <[email protected]>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: dnf
version_added: 1.9
short_description: Manages packages with the I(dnf) package manager
description:
- Installs, upgrades, removes, and lists packages and groups with the I(dnf) package manager.
options:
name:
description:
- "A package name or package specifier with version, like C(name-1.0).
When using state=latest, this can be '*' which means run: dnf -y update.
You can also pass a url or a local path to a rpm file.
To operate on several packages this can accept a comma separated string of packages or a list of packages."
required: true
aliases:
- pkg
type: list
elements: str
list:
description:
- Various (non-idempotent) commands for usage with C(/usr/bin/ansible) and I(not) playbooks. See examples.
type: str
state:
description:
- Whether to install (C(present), C(latest)), or remove (C(absent)) a package.
- Default is C(None), however in effect the default action is C(present) unless the C(autoremove) option is
enabled for this module, then C(absent) is inferred.
choices: ['absent', 'present', 'installed', 'removed', 'latest']
type: str
enablerepo:
description:
- I(Repoid) of repositories to enable for the install/update operation.
These repos will not persist beyond the transaction.
When specifying multiple repos, separate them with a ",".
type: list
elements: str
disablerepo:
description:
- I(Repoid) of repositories to disable for the install/update operation.
These repos will not persist beyond the transaction.
When specifying multiple repos, separate them with a ",".
type: list
elements: str
conf_file:
description:
- The remote dnf configuration file to use for the transaction.
type: str
disable_gpg_check:
description:
- Whether to disable the GPG checking of signatures of packages being
installed. Has an effect only if state is I(present) or I(latest).
- This setting affects packages installed from a repository as well as
"local" packages installed from the filesystem or a URL.
type: bool
default: 'no'
installroot:
description:
- Specifies an alternative installroot, relative to which all packages
will be installed.
version_added: "2.3"
default: "/"
type: str
releasever:
description:
- Specifies an alternative release from which all packages will be
installed.
version_added: "2.6"
type: str
autoremove:
description:
- If C(yes), removes all "leaf" packages from the system that were originally
installed as dependencies of user-installed packages but which are no longer
required by any such package. Should be used alone or when state is I(absent)
type: bool
default: "no"
version_added: "2.4"
exclude:
description:
- Package name(s) to exclude when state=present, or latest. This can be a
list or a comma separated string.
version_added: "2.7"
type: list
elements: str
skip_broken:
description:
- Skip packages with broken dependencies (depsolve) that are causing problems.
type: bool
default: "no"
version_added: "2.7"
update_cache:
description:
- Force dnf to check if cache is out of date and redownload if needed.
Has an effect only if state is I(present) or I(latest).
type: bool
default: "no"
aliases: [ expire-cache ]
version_added: "2.7"
update_only:
description:
- When using latest, only update installed packages. Do not install packages.
- Has an effect only if state is I(latest)
default: "no"
type: bool
version_added: "2.7"
security:
description:
- If set to C(yes), and C(state=latest) then only installs updates that have been marked security related.
type: bool
default: "no"
version_added: "2.7"
bugfix:
description:
- If set to C(yes), and C(state=latest) then only installs updates that have been marked bugfix related.
default: "no"
type: bool
version_added: "2.7"
enable_plugin:
description:
- I(Plugin) name to enable for the install/update operation.
The enabled plugin will not persist beyond the transaction.
version_added: "2.7"
type: list
elements: str
disable_plugin:
description:
- I(Plugin) name to disable for the install/update operation.
The disabled plugins will not persist beyond the transaction.
version_added: "2.7"
type: list
elements: str
disable_excludes:
description:
- Disable the excludes defined in DNF config files.
- If set to C(all), disables all excludes.
- If set to C(main), disable excludes defined in [main] in dnf.conf.
- If set to C(repoid), disable excludes defined for given repo id.
version_added: "2.7"
type: str
validate_certs:
description:
- This only applies if using a https url as the source of the rpm. e.g. for localinstall. If set to C(no), the SSL certificates will not be validated.
- This should only be set to C(no) on personally controlled sites using self-signed certificates, as it avoids verifying the source site.
type: bool
default: "yes"
version_added: "2.7"
allow_downgrade:
description:
- Specify if the named package and version is allowed to downgrade
a maybe already installed higher version of that package.
Note that setting allow_downgrade=True can make this module
behave in a non-idempotent way. The task could end up with a set
of packages that does not match the complete list of specified
packages to install (because dependencies between the downgraded
package and others can cause changes to the packages which were
in the earlier transaction).
type: bool
default: "no"
version_added: "2.7"
install_repoquery:
description:
- This is effectively a no-op in DNF as it is not needed with DNF, but is an accepted parameter for feature
parity/compatibility with the I(yum) module.
type: bool
default: "yes"
version_added: "2.7"
download_only:
description:
- Only download the packages, do not install them.
default: "no"
type: bool
version_added: "2.7"
lock_timeout:
description:
- Amount of time to wait for the dnf lockfile to be freed.
required: false
default: 30
type: int
version_added: "2.8"
install_weak_deps:
description:
- Will also install all packages linked by a weak dependency relation.
type: bool
default: "yes"
version_added: "2.8"
download_dir:
description:
- Specifies an alternate directory to store packages.
- Has an effect only if I(download_only) is specified.
type: str
version_added: "2.8"
allowerasing:
description:
- If C(yes) it allows erasing of installed packages to resolve dependencies.
required: false
type: bool
default: "no"
version_added: "2.10"
nobest:
description:
- Set best option to False, so that transactions are not limited to best candidates only.
required: false
type: bool
default: "no"
version_added: "2.11"
notes:
- When used with a `loop:` each package will be processed individually, it is much more efficient to pass the list directly to the `name` option.
- Group removal doesn't work if the group was installed with Ansible because
upstream dnf's API doesn't properly mark groups as installed, therefore upon
removal the module is unable to detect that the group is installed
(https://bugzilla.redhat.com/show_bug.cgi?id=1620324)
requirements:
- "python >= 2.6"
- python-dnf
- "for the autoremove option you need dnf >= 2.0.1"
author:
- Igor Gnatenko (@ignatenkobrain) <[email protected]>
- Cristian van Ee (@DJMuggs) <cristian at cvee.org>
- Berend De Schouwer (@berenddeschouwer)
- Adam Miller (@maxamillion) <[email protected]>
'''
EXAMPLES = '''
- name: Install the latest version of Apache
dnf:
name: httpd
state: latest
- name: Install the latest version of Apache and MariaDB
dnf:
name:
- httpd
- mariadb-server
state: latest
- name: Remove the Apache package
dnf:
name: httpd
state: absent
- name: Install the latest version of Apache from the testing repo
dnf:
name: httpd
enablerepo: testing
state: present
- name: Upgrade all packages
dnf:
name: "*"
state: latest
- name: Install the nginx rpm from a remote repo
dnf:
name: 'http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm'
state: present
- name: Install nginx rpm from a local file
dnf:
name: /usr/local/src/nginx-release-centos-6-0.el6.ngx.noarch.rpm
state: present
- name: Install the 'Development tools' package group
dnf:
name: '@Development tools'
state: present
- name: Autoremove unneeded packages installed as dependencies
dnf:
autoremove: yes
- name: Uninstall httpd but keep its dependencies
dnf:
name: httpd
state: absent
autoremove: no
- name: Install a modularity appstream with defined stream and profile
dnf:
name: '@postgresql:9.6/client'
state: present
- name: Install a modularity appstream with defined stream
dnf:
name: '@postgresql:9.6'
state: present
- name: Install a modularity appstream with defined profile
dnf:
name: '@postgresql/client'
state: present
'''
import os
import re
import sys
try:
import dnf
import dnf.cli
import dnf.const
import dnf.exceptions
import dnf.subject
import dnf.util
HAS_DNF = True
except ImportError:
HAS_DNF = False
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.urls import fetch_file
from ansible.module_utils.six import PY2, text_type
from distutils.version import LooseVersion
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.yumdnf import YumDnf, yumdnf_argument_spec
class DnfModule(YumDnf):
"""
DNF Ansible module back-end implementation
"""
def __init__(self, module):
# This populates instance vars for all argument spec params
super(DnfModule, self).__init__(module)
self._ensure_dnf()
self.lockfile = "/var/cache/dnf/*_lock.pid"
self.pkg_mgr_name = "dnf"
try:
self.with_modules = dnf.base.WITH_MODULES
except AttributeError:
self.with_modules = False
# DNF specific args that are not part of YumDnf
self.allowerasing = self.module.params['allowerasing']
self.nobest = self.module.params['nobest']
def is_lockfile_pid_valid(self):
# FIXME? it looks like DNF takes care of invalid lock files itself?
# https://github.com/ansible/ansible/issues/57189
return True
def _sanitize_dnf_error_msg_install(self, spec, error):
"""
For unhandled dnf.exceptions.Error scenarios, there are certain error
messages we want to filter in an install scenario. Do that here.
"""
if (
to_text("no package matched") in to_text(error) or
to_text("No match for argument:") in to_text(error)
):
return "No package {0} available.".format(spec)
return error
def _sanitize_dnf_error_msg_remove(self, spec, error):
"""
For unhandled dnf.exceptions.Error scenarios, there are certain error
messages we want to ignore in a removal scenario as known benign
failures. Do that here.
"""
if (
'no package matched' in to_native(error) or
'No match for argument:' in to_native(error)
):
return (False, "{0} is not installed".format(spec))
# Return value is tuple of:
# ("Is this actually a failure?", "Error Message")
return (True, error)
def _package_dict(self, package):
"""Return a dictionary of information for the package."""
# NOTE: This no longer contains the 'dnfstate' field because it is
# already known based on the query type.
result = {
'name': package.name,
'arch': package.arch,
'epoch': str(package.epoch),
'release': package.release,
'version': package.version,
'repo': package.repoid}
result['nevra'] = '{epoch}:{name}-{version}-{release}.{arch}'.format(
**result)
if package.installtime == 0:
result['yumstate'] = 'available'
else:
result['yumstate'] = 'installed'
return result
def _packagename_dict(self, packagename):
"""
Return a dictionary of information for a package name string or None
if the package name doesn't contain at least all NVR elements
"""
if packagename[-4:] == '.rpm':
packagename = packagename[:-4]
# This list was auto generated on a Fedora 28 system with the following one-liner
# printf '[ '; for arch in $(ls /usr/lib/rpm/platform); do printf '"%s", ' ${arch%-linux}; done; printf ']\n'
redhat_rpm_arches = [
"aarch64", "alphaev56", "alphaev5", "alphaev67", "alphaev6", "alpha",
"alphapca56", "amd64", "armv3l", "armv4b", "armv4l", "armv5tejl", "armv5tel",
"armv5tl", "armv6hl", "armv6l", "armv7hl", "armv7hnl", "armv7l", "athlon",
"geode", "i386", "i486", "i586", "i686", "ia32e", "ia64", "m68k", "mips64el",
"mips64", "mips64r6el", "mips64r6", "mipsel", "mips", "mipsr6el", "mipsr6",
"noarch", "pentium3", "pentium4", "ppc32dy4", "ppc64iseries", "ppc64le", "ppc64",
"ppc64p7", "ppc64pseries", "ppc8260", "ppc8560", "ppciseries", "ppc", "ppcpseries",
"riscv64", "s390", "s390x", "sh3", "sh4a", "sh4", "sh", "sparc64", "sparc64v",
"sparc", "sparcv8", "sparcv9", "sparcv9v", "x86_64"
]
rpm_arch_re = re.compile(r'(.*)\.(.*)')
rpm_nevr_re = re.compile(r'(\S+)-(?:(\d*):)?(.*)-(~?\w+[\w.+]*)')
try:
arch = None
rpm_arch_match = rpm_arch_re.match(packagename)
if rpm_arch_match:
nevr, arch = rpm_arch_match.groups()
if arch in redhat_rpm_arches:
packagename = nevr
rpm_nevr_match = rpm_nevr_re.match(packagename)
if rpm_nevr_match:
name, epoch, version, release = rpm_nevr_re.match(packagename).groups()
if not version or not version.split('.')[0].isdigit():
return None
else:
return None
except AttributeError as e:
self.module.fail_json(
msg='Error attempting to parse package: %s, %s' % (packagename, to_native(e)),
rc=1,
results=[]
)
if not epoch:
epoch = "0"
if ':' in name:
epoch_name = name.split(":")
epoch = epoch_name[0]
name = ''.join(epoch_name[1:])
result = {
'name': name,
'epoch': epoch,
'release': release,
'version': version,
}
return result
# Original implementation from yum.rpmUtils.miscutils (GPLv2+)
# http://yum.baseurl.org/gitweb?p=yum.git;a=blob;f=rpmUtils/miscutils.py
def _compare_evr(self, e1, v1, r1, e2, v2, r2):
# return 1: a is newer than b
# 0: a and b are the same version
# -1: b is newer than a
if e1 is None:
e1 = '0'
else:
e1 = str(e1)
v1 = str(v1)
r1 = str(r1)
if e2 is None:
e2 = '0'
else:
e2 = str(e2)
v2 = str(v2)
r2 = str(r2)
# print '%s, %s, %s vs %s, %s, %s' % (e1, v1, r1, e2, v2, r2)
rc = dnf.rpm.rpm.labelCompare((e1, v1, r1), (e2, v2, r2))
# print '%s, %s, %s vs %s, %s, %s = %s' % (e1, v1, r1, e2, v2, r2, rc)
return rc
def _ensure_dnf(self):
if not HAS_DNF:
if PY2:
package = 'python2-dnf'
else:
package = 'python3-dnf'
if self.module.check_mode:
self.module.fail_json(
msg="`{0}` is not installed, but it is required "
"for the Ansible dnf module.".format(package),
results=[],
)
rc, stdout, stderr = self.module.run_command(['dnf', 'install', '-y', package])
global dnf
try:
import dnf
import dnf.cli
import dnf.const
import dnf.exceptions
import dnf.subject
import dnf.util
except ImportError:
self.module.fail_json(
msg="Could not import the dnf python module using {0} ({1}). "
"Please install `{2}` package or ensure you have specified the "
"correct ansible_python_interpreter.".format(sys.executable, sys.version.replace('\n', ''),
package),
results=[],
cmd='dnf install -y {0}'.format(package),
rc=rc,
stdout=stdout,
stderr=stderr,
)
def _configure_base(self, base, conf_file, disable_gpg_check, installroot='/'):
"""Configure the dnf Base object."""
conf = base.conf
# Change the configuration file path if provided, this must be done before conf.read() is called
if conf_file:
# Fail if we can't read the configuration file.
if not os.access(conf_file, os.R_OK):
self.module.fail_json(
msg="cannot read configuration file", conf_file=conf_file,
results=[],
)
else:
conf.config_file_path = conf_file
# Read the configuration file
conf.read()
# Turn off debug messages in the output
conf.debuglevel = 0
# Set whether to check gpg signatures
conf.gpgcheck = not disable_gpg_check
conf.localpkg_gpgcheck = not disable_gpg_check
# Don't prompt for user confirmations
conf.assumeyes = True
# Set installroot
conf.installroot = installroot
# Load substitutions from the filesystem
conf.substitutions.update_from_etc(installroot)
# Handle different DNF versions immutable mutable datatypes and
# dnf v1/v2/v3
#
# In DNF < 3.0 are lists, and modifying them works
# In DNF >= 3.0 < 3.6 are lists, but modifying them doesn't work
# In DNF >= 3.6 have been turned into tuples, to communicate that modifying them doesn't work
#
# https://www.happyassassin.net/2018/06/27/adams-debugging-adventures-the-immutable-mutable-object/
#
# Set excludes
if self.exclude:
_excludes = list(conf.exclude)
_excludes.extend(self.exclude)
conf.exclude = _excludes
# Set disable_excludes
if self.disable_excludes:
_disable_excludes = list(conf.disable_excludes)
if self.disable_excludes not in _disable_excludes:
_disable_excludes.append(self.disable_excludes)
conf.disable_excludes = _disable_excludes
# Set releasever
if self.releasever is not None:
conf.substitutions['releasever'] = self.releasever
# Set skip_broken (in dnf this is strict=0)
if self.skip_broken:
conf.strict = 0
# Set best
if self.nobest:
conf.best = 0
if self.download_only:
conf.downloadonly = True
if self.download_dir:
conf.destdir = self.download_dir
# Default in dnf upstream is true
conf.clean_requirements_on_remove = self.autoremove
# Default in dnf (and module default) is True
conf.install_weak_deps = self.install_weak_deps
def _specify_repositories(self, base, disablerepo, enablerepo):
"""Enable and disable repositories matching the provided patterns."""
base.read_all_repos()
repos = base.repos
# Disable repositories
for repo_pattern in disablerepo:
if repo_pattern:
for repo in repos.get_matching(repo_pattern):
repo.disable()
# Enable repositories
for repo_pattern in enablerepo:
if repo_pattern:
for repo in repos.get_matching(repo_pattern):
repo.enable()
def _base(self, conf_file, disable_gpg_check, disablerepo, enablerepo, installroot):
"""Return a fully configured dnf Base object."""
base = dnf.Base()
self._configure_base(base, conf_file, disable_gpg_check, installroot)
try:
# this method has been supported in dnf-4.2.17-6 or later
# https://bugzilla.redhat.com/show_bug.cgi?id=1788212
base.setup_loggers()
except AttributeError:
pass
try:
base.init_plugins(set(self.disable_plugin), set(self.enable_plugin))
base.pre_configure_plugins()
except AttributeError:
pass # older versions of dnf didn't require this and don't have these methods
self._specify_repositories(base, disablerepo, enablerepo)
try:
base.configure_plugins()
except AttributeError:
pass # older versions of dnf didn't require this and don't have these methods
try:
if self.update_cache:
try:
base.update_cache()
except dnf.exceptions.RepoError as e:
self.module.fail_json(
msg="{0}".format(to_text(e)),
results=[],
rc=1
)
base.fill_sack(load_system_repo='auto')
except dnf.exceptions.RepoError as e:
self.module.fail_json(
msg="{0}".format(to_text(e)),
results=[],
rc=1
)
filters = []
if self.bugfix:
key = {'advisory_type__eq': 'bugfix'}
filters.append(base.sack.query().filter(**key))
if self.security:
key = {'advisory_type__eq': 'security'}
filters.append(base.sack.query().filter(**key))
if filters:
base._update_security_filters = filters
return base
def list_items(self, command):
"""List package info based on the command."""
# Rename updates to upgrades
if command == 'updates':
command = 'upgrades'
# Return the corresponding packages
if command in ['installed', 'upgrades', 'available']:
results = [
self._package_dict(package)
for package in getattr(self.base.sack.query(), command)()]
# Return the enabled repository ids
elif command in ['repos', 'repositories']:
results = [
{'repoid': repo.id, 'state': 'enabled'}
for repo in self.base.repos.iter_enabled()]
# Return any matching packages
else:
packages = dnf.subject.Subject(command).get_best_query(self.base.sack)
results = [self._package_dict(package) for package in packages]
self.module.exit_json(msg="", results=results)
def _is_installed(self, pkg):
installed = self.base.sack.query().installed()
if installed.filter(name=pkg):
return True
else:
return False
def _is_newer_version_installed(self, pkg_name):
candidate_pkg = self._packagename_dict(pkg_name)
if not candidate_pkg:
# The user didn't provide a versioned rpm, so version checking is
# not required
return False
installed = self.base.sack.query().installed()
installed_pkg = installed.filter(name=candidate_pkg['name']).run()
if installed_pkg:
installed_pkg = installed_pkg[0]
# this looks weird but one is a dict and the other is a dnf.Package
evr_cmp = self._compare_evr(
installed_pkg.epoch, installed_pkg.version, installed_pkg.release,
candidate_pkg['epoch'], candidate_pkg['version'], candidate_pkg['release'],
)
if evr_cmp == 1:
return True
else:
return False
else:
return False
def _mark_package_install(self, pkg_spec, upgrade=False):
"""Mark the package for install."""
is_newer_version_installed = self._is_newer_version_installed(pkg_spec)
is_installed = self._is_installed(pkg_spec)
try:
if is_newer_version_installed:
if self.allow_downgrade:
# dnf only does allow_downgrade, we have to handle this ourselves
# because it allows a possibility for non-idempotent transactions
# on a system's package set (pending the yum repo has many old
# NVRs indexed)
if upgrade:
if is_installed:
self.base.upgrade(pkg_spec)
else:
self.base.install(pkg_spec)
else:
self.base.install(pkg_spec)
else: # Nothing to do, report back
pass
elif is_installed: # A potentially older (or same) version is installed
if upgrade:
self.base.upgrade(pkg_spec)
else: # Nothing to do, report back
pass
else: # The package is not installed, simply install it
self.base.install(pkg_spec)
return {'failed': False, 'msg': '', 'failure': '', 'rc': 0}
except dnf.exceptions.MarkingError as e:
return {
'failed': True,
'msg': "No package {0} available.".format(pkg_spec),
'failure': " ".join((pkg_spec, to_native(e))),
'rc': 1,
"results": []
}
except dnf.exceptions.DepsolveError as e:
return {
'failed': True,
'msg': "Depsolve Error occured for package {0}.".format(pkg_spec),
'failure': " ".join((pkg_spec, to_native(e))),
'rc': 1,
"results": []
}
except dnf.exceptions.Error as e:
if to_text("already installed") in to_text(e):
return {'failed': False, 'msg': '', 'failure': ''}
else:
return {
'failed': True,
'msg': "Unknown Error occured for package {0}.".format(pkg_spec),
'failure': " ".join((pkg_spec, to_native(e))),
'rc': 1,
"results": []
}
def _whatprovides(self, filepath):
available = self.base.sack.query().available()
pkg_spec = available.filter(provides=filepath).run()
if pkg_spec:
return pkg_spec[0].name
def _parse_spec_group_file(self):
pkg_specs, grp_specs, module_specs, filenames = [], [], [], []
already_loaded_comps = False # Only load this if necessary, it's slow
for name in self.names:
if '://' in name:
name = fetch_file(self.module, name)
filenames.append(name)
elif name.endswith(".rpm"):
filenames.append(name)
elif name.startswith("@") or ('/' in name):
# like "dnf install /usr/bin/vi"
if '/' in name:
pkg_spec = self._whatprovides(name)
if pkg_spec:
pkg_specs.append(pkg_spec)
continue
if not already_loaded_comps:
self.base.read_comps()
already_loaded_comps = True
grp_env_mdl_candidate = name[1:].strip()
if self.with_modules:
mdl = self.module_base._get_modules(grp_env_mdl_candidate)
if mdl[0]:
module_specs.append(grp_env_mdl_candidate)
else:
grp_specs.append(grp_env_mdl_candidate)
else:
grp_specs.append(grp_env_mdl_candidate)
else:
pkg_specs.append(name)
return pkg_specs, grp_specs, module_specs, filenames
def _update_only(self, pkgs):
not_installed = []
for pkg in pkgs:
if self._is_installed(pkg):
try:
if isinstance(to_text(pkg), text_type):
self.base.upgrade(pkg)
else:
self.base.package_upgrade(pkg)
except Exception as e:
self.module.fail_json(
msg="Error occured attempting update_only operation: {0}".format(to_native(e)),
results=[],
rc=1,
)
else:
not_installed.append(pkg)
return not_installed
def _install_remote_rpms(self, filenames):
if int(dnf.__version__.split(".")[0]) >= 2:
pkgs = list(sorted(self.base.add_remote_rpms(list(filenames)), reverse=True))
else:
pkgs = []
try:
for filename in filenames:
pkgs.append(self.base.add_remote_rpm(filename))
except IOError as e:
if to_text("Can not load RPM file") in to_text(e):
self.module.fail_json(
msg="Error occured attempting remote rpm install of package: {0}. {1}".format(filename, to_native(e)),
results=[],
rc=1,
)
if self.update_only:
self._update_only(pkgs)
else:
for pkg in pkgs:
try:
if self._is_newer_version_installed(self._package_dict(pkg)['nevra']):
if self.allow_downgrade:
self.base.package_install(pkg)
else:
self.base.package_install(pkg)
except Exception as e:
self.module.fail_json(
msg="Error occured attempting remote rpm operation: {0}".format(to_native(e)),
results=[],
rc=1,
)
def _is_module_installed(self, module_spec):
if self.with_modules:
module_spec = module_spec.strip()
module_list, nsv = self.module_base._get_modules(module_spec)
enabled_streams = self.base._moduleContainer.getEnabledStream(nsv.name)
if enabled_streams:
if nsv.stream:
if nsv.stream in enabled_streams:
return True # The provided stream was found
else:
return False # The provided stream was not found
else:
return True # No stream provided, but module found
return False # seems like a sane default
def ensure(self):
response = {
'msg': "",
'changed': False,
'results': [],
'rc': 0
}
# Accumulate failures. Package management modules install what they can
# and fail with a message about what they can't.
failure_response = {
'msg': "",
'failures': [],
'results': [],
'rc': 1
}
# Autoremove is called alone
# Jump to remove path where base.autoremove() is run
if not self.names and self.autoremove:
self.names = []
self.state = 'absent'
if self.names == ['*'] and self.state == 'latest':
try:
self.base.upgrade_all()
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occured attempting to upgrade all packages"
self.module.fail_json(**failure_response)
else:
pkg_specs, group_specs, module_specs, filenames = self._parse_spec_group_file()
pkg_specs = [p.strip() for p in pkg_specs]
filenames = [f.strip() for f in filenames]
groups = []
environments = []
for group_spec in (g.strip() for g in group_specs):
group = self.base.comps.group_by_pattern(group_spec)
if group:
groups.append(group.id)
else:
environment = self.base.comps.environment_by_pattern(group_spec)
if environment:
environments.append(environment.id)
else:
self.module.fail_json(
msg="No group {0} available.".format(group_spec),
results=[],
)
if self.state in ['installed', 'present']:
# Install files.
self._install_remote_rpms(filenames)
for filename in filenames:
response['results'].append("Installed {0}".format(filename))
# Install modules
if module_specs and self.with_modules:
for module in module_specs:
try:
if not self._is_module_installed(module):
response['results'].append("Module {0} installed.".format(module))
self.module_base.install([module])
self.module_base.enable([module])
except dnf.exceptions.MarkingErrors as e:
failure_response['failures'].append(' '.join((module, to_native(e))))
# Install groups.
for group in groups:
try:
group_pkg_count_installed = self.base.group_install(group, dnf.const.GROUP_PACKAGE_TYPES)
if group_pkg_count_installed == 0:
response['results'].append("Group {0} already installed.".format(group))
else:
response['results'].append("Group {0} installed.".format(group))
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occured attempting to install group: {0}".format(group)
self.module.fail_json(**failure_response)
except dnf.exceptions.Error as e:
# In dnf 2.0 if all the mandatory packages in a group do
# not install, an error is raised. We want to capture
# this but still install as much as possible.
failure_response['failures'].append(" ".join((group, to_native(e))))
for environment in environments:
try:
self.base.environment_install(environment, dnf.const.GROUP_PACKAGE_TYPES)
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occured attempting to install environment: {0}".format(environment)
self.module.fail_json(**failure_response)
except dnf.exceptions.Error as e:
failure_response['failures'].append(" ".join((environment, to_native(e))))
if module_specs and not self.with_modules:
# This means that the group or env wasn't found in comps
self.module.fail_json(
msg="No group {0} available.".format(module_specs[0]),
results=[],
)
# Install packages.
if self.update_only:
not_installed = self._update_only(pkg_specs)
for spec in not_installed:
response['results'].append("Packages providing %s not installed due to update_only specified" % spec)
else:
for pkg_spec in pkg_specs:
install_result = self._mark_package_install(pkg_spec)
if install_result['failed']:
if install_result['msg']:
failure_response['msg'] += install_result['msg']
failure_response['failures'].append(self._sanitize_dnf_error_msg_install(pkg_spec, install_result['failure']))
else:
if install_result['msg']:
response['results'].append(install_result['msg'])
elif self.state == 'latest':
# "latest" is same as "installed" for filenames.
self._install_remote_rpms(filenames)
for filename in filenames:
response['results'].append("Installed {0}".format(filename))
# Upgrade modules
if module_specs and self.with_modules:
for module in module_specs:
try:
if self._is_module_installed(module):
response['results'].append("Module {0} upgraded.".format(module))
self.module_base.upgrade([module])
except dnf.exceptions.MarkingErrors as e:
failure_response['failures'].append(' '.join((module, to_native(e))))
for group in groups:
try:
try:
self.base.group_upgrade(group)
response['results'].append("Group {0} upgraded.".format(group))
except dnf.exceptions.CompsError:
if not self.update_only:
# If not already installed, try to install.
group_pkg_count_installed = self.base.group_install(group, dnf.const.GROUP_PACKAGE_TYPES)
if group_pkg_count_installed == 0:
response['results'].append("Group {0} already installed.".format(group))
else:
response['results'].append("Group {0} installed.".format(group))
except dnf.exceptions.Error as e:
failure_response['failures'].append(" ".join((group, to_native(e))))
for environment in environments:
try:
try:
self.base.environment_upgrade(environment)
except dnf.exceptions.CompsError:
# If not already installed, try to install.
self.base.environment_install(environment, dnf.const.GROUP_PACKAGE_TYPES)
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occured attempting to install environment: {0}".format(environment)
except dnf.exceptions.Error as e:
failure_response['failures'].append(" ".join((environment, to_native(e))))
if self.update_only:
not_installed = self._update_only(pkg_specs)
for spec in not_installed:
response['results'].append("Packages providing %s not installed due to update_only specified" % spec)
else:
for pkg_spec in pkg_specs:
# best-effort mode causes the latest package to be installed,
# even if it was not previously installed
self.base.conf.best = True
install_result = self._mark_package_install(pkg_spec, upgrade=True)
if install_result['failed']:
if install_result['msg']:
failure_response['msg'] += install_result['msg']
failure_response['failures'].append(self._sanitize_dnf_error_msg_install(pkg_spec, install_result['failure']))
else:
if install_result['msg']:
response['results'].append(install_result['msg'])
else:
# state == absent
if filenames:
self.module.fail_json(
msg="Cannot remove paths -- please specify package name.",
results=[],
)
# Remove modules
if module_specs and self.with_modules:
for module in module_specs:
try:
if self._is_module_installed(module):
response['results'].append("Module {0} removed.".format(module))
self.module_base.remove([module])
self.module_base.disable([module])
self.module_base.reset([module])
except dnf.exceptions.MarkingErrors as e:
failure_response['failures'].append(' '.join((module, to_native(e))))
for group in groups:
try:
self.base.group_remove(group)
except dnf.exceptions.CompsError:
# Group is already uninstalled.
pass
except AttributeError:
# Group either isn't installed or wasn't marked installed at install time
# because of DNF bug
#
# This is necessary until the upstream dnf API bug is fixed where installing
# a group via the dnf API doesn't actually mark the group as installed
# https://bugzilla.redhat.com/show_bug.cgi?id=1620324
pass
for environment in environments:
try:
self.base.environment_remove(environment)
except dnf.exceptions.CompsError:
# Environment is already uninstalled.
pass
installed = self.base.sack.query().installed()
for pkg_spec in pkg_specs:
# short-circuit installed check for wildcard matching
if '*' in pkg_spec:
try:
self.base.remove(pkg_spec)
except dnf.exceptions.MarkingError as e:
is_failure, handled_remove_error = self._sanitize_dnf_error_msg_remove(pkg_spec, to_native(e))
if is_failure:
failure_response['failures'].append('{0} - {1}'.format(pkg_spec, to_native(e)))
else:
response['results'].append(handled_remove_error)
continue
installed_pkg = list(map(str, installed.filter(name=pkg_spec).run()))
if installed_pkg:
candidate_pkg = self._packagename_dict(installed_pkg[0])
installed_pkg = installed.filter(name=candidate_pkg['name']).run()
else:
candidate_pkg = self._packagename_dict(pkg_spec)
installed_pkg = installed.filter(nevra=pkg_spec).run()
if installed_pkg:
installed_pkg = installed_pkg[0]
evr_cmp = self._compare_evr(
installed_pkg.epoch, installed_pkg.version, installed_pkg.release,
candidate_pkg['epoch'], candidate_pkg['version'], candidate_pkg['release'],
)
if evr_cmp == 0:
self.base.remove(pkg_spec)
# Like the dnf CLI we want to allow recursive removal of dependent
# packages
self.allowerasing = True
if self.autoremove:
self.base.autoremove()
try:
if not self.base.resolve(allow_erasing=self.allowerasing):
if failure_response['failures']:
failure_response['msg'] = 'Failed to install some of the specified packages'
self.module.fail_json(**failure_response)
response['msg'] = "Nothing to do"
self.module.exit_json(**response)
else:
response['changed'] = True
# If packages got installed/removed, add them to the results.
# We do this early so we can use it for both check_mode and not.
if self.download_only:
install_action = 'Downloaded'
else:
install_action = 'Installed'
for package in self.base.transaction.install_set:
response['results'].append("{0}: {1}".format(install_action, package))
for package in self.base.transaction.remove_set:
response['results'].append("Removed: {0}".format(package))
if failure_response['failures']:
failure_response['msg'] = 'Failed to install some of the specified packages'
self.module.fail_json(**failure_response)
if self.module.check_mode:
response['msg'] = "Check mode: No changes made, but would have if not in check mode"
self.module.exit_json(**response)
try:
if self.download_only and self.download_dir and self.base.conf.destdir:
dnf.util.ensure_dir(self.base.conf.destdir)
self.base.repos.all().pkgdir = self.base.conf.destdir
self.base.download_packages(self.base.transaction.install_set)
except dnf.exceptions.DownloadError as e:
self.module.fail_json(
msg="Failed to download packages: {0}".format(to_text(e)),
results=[],
)
# Validate GPG. This is NOT done in dnf.Base (it's done in the
# upstream CLI subclass of dnf.Base)
if not self.disable_gpg_check:
for package in self.base.transaction.install_set:
fail = False
gpgres, gpgerr = self.base._sig_check_pkg(package)
if gpgres == 0: # validated successfully
continue
elif gpgres == 1: # validation failed, install cert?
try:
self.base._get_key_for_package(package)
except dnf.exceptions.Error as e:
fail = True
else: # fatal error
fail = True
if fail:
msg = 'Failed to validate GPG signature for {0}'.format(package)
self.module.fail_json(msg)
if self.download_only:
# No further work left to do, and the results were already updated above.
# Just return them.
self.module.exit_json(**response)
else:
self.base.do_transaction()
if failure_response['failures']:
failure_response['msg'] = 'Failed to install some of the specified packages'
self.module.exit_json(**response)
self.module.exit_json(**response)
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occured: {0}".format(to_native(e))
self.module.fail_json(**failure_response)
except dnf.exceptions.Error as e:
if to_text("already installed") in to_text(e):
response['changed'] = False
response['results'].append("Package already installed: {0}".format(to_native(e)))
self.module.exit_json(**response)
else:
failure_response['msg'] = "Unknown Error occured: {0}".format(to_native(e))
self.module.fail_json(**failure_response)
@staticmethod
def has_dnf():
return HAS_DNF
def run(self):
"""The main function."""
# Check if autoremove is called correctly
if self.autoremove:
if LooseVersion(dnf.__version__) < LooseVersion('2.0.1'):
self.module.fail_json(
msg="Autoremove requires dnf>=2.0.1. Current dnf version is %s" % dnf.__version__,
results=[],
)
# Check if download_dir is called correctly
if self.download_dir:
if LooseVersion(dnf.__version__) < LooseVersion('2.6.2'):
self.module.fail_json(
msg="download_dir requires dnf>=2.6.2. Current dnf version is %s" % dnf.__version__,
results=[],
)
if self.update_cache and not self.names and not self.list:
self.base = self._base(
self.conf_file, self.disable_gpg_check, self.disablerepo,
self.enablerepo, self.installroot
)
self.module.exit_json(
msg="Cache updated",
changed=False,
results=[],
rc=0
)
# Set state as installed by default
# This is not set in AnsibleModule() because the following shouldn't happen
# - dnf: autoremove=yes state=installed
if self.state is None:
self.state = 'installed'
if self.list:
self.base = self._base(
self.conf_file, self.disable_gpg_check, self.disablerepo,
self.enablerepo, self.installroot
)
self.list_items(self.list)
else:
# Note: base takes a long time to run so we want to check for failure
# before running it.
if not dnf.util.am_i_root():
self.module.fail_json(
msg="This command has to be run under the root user.",
results=[],
)
self.base = self._base(
self.conf_file, self.disable_gpg_check, self.disablerepo,
self.enablerepo, self.installroot
)
if self.with_modules:
self.module_base = dnf.module.module_base.ModuleBase(self.base)
self.ensure()
def main():
# state=installed name=pkgspec
# state=removed name=pkgspec
# state=latest name=pkgspec
#
# informational commands:
# list=installed
# list=updates
# list=available
# list=repos
# list=pkgspec
# Extend yumdnf_argument_spec with dnf-specific features that will never be
# backported to yum because yum is now in "maintenance mode" upstream
yumdnf_argument_spec['argument_spec']['allowerasing'] = dict(default=False, type='bool')
yumdnf_argument_spec['argument_spec']['nobest'] = dict(default=False, type='bool')
module = AnsibleModule(
**yumdnf_argument_spec
)
module_implementation = DnfModule(module)
try:
module_implementation.run()
except dnf.exceptions.RepoError as de:
module.fail_json(
msg="Failed to synchronize repodata: {0}".format(to_native(de)),
rc=1,
results=[],
changed=False
)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,316 |
Ansible dnf module is finding wrong dependencies
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Executing a dnf update using the module against RHEL8 machines, returns different output than using dnf command. From the module its returning wrong dependencies and architecture. Note that the changes for dnf.py discussed in https://github.com/ansible/ansible/pull/71726/files#diff-13b7173e93a1dc0b4f6b8e798f858a1f are already performed.
After applying the dnf.py patch, from the same host and using "upgrade-minimal --security --bugfix " flags, the module is reporting two problems in this specific case.
**`Problem 1: cannot install both net-snmp-libs-1:5.8-12.el8_1.2.x86_64 and net-snmp-libs-1:5.8-12.el8_1.x86_64`**
```
- package net-snmp-1:5.8-12.el8_1.2.x86_64 requires net-snmp-libs(x86-64) = 1:5.8-12.el8_1.2, but none of the providers can be installed
- cannot install the best update candidate for package net-snmp-libs-1:5.8-12.el8_1.x86_64
- cannot install the best update candidate for package net-snmp-1:5.8-12.el8_1.x86_64
```
**`Problem 2: systemd-libs-239-18.el8_1.7.i686 has inferior architecture`**
```
- package systemd-239-18.el8_1.7.x86_64 requires systemd-libs = 239-18.el8_1.7, but none of the providers can be installed
- cannot install both systemd-libs-239-18.el8_1.7.x86_64 and systemd-libs-239-18.el8_1.4.x86_64
- cannot install the best update candidate for package systemd-libs-239-18.el8_1.4.x86_64
- cannot install the best update candidate for package systemd-239-18.el8_1.4.x86_64
```
Executing dnf from the command line on the same server returns:
```
dnf upgrade-minimal --disablerepo=* --enablerepo=rhel-8-for-x86_64-appstream-eus-rpms,rhel-8-for-x86_64-baseos-eus-rpms,satellite-tools-6.7-for-rhel-8-x86_64-rpms
dnf upgrade --security --bugfix --disablerepo=* --enablerepo=rhel-8-for-x86_64-appstream-eus-rpms,rhel-8-for-x86_64-baseos-eus-rpms,satellite-tools-6.7-for-rhel-8-x86_64-rpms
dnf upgrade-minimal --security --bugfix --disablerepo=* --enablerepo=rhel-8-for-x86_64-appstream-eus-rpms,rhel-8-for-x86_64-baseos-eus-rpms,satellite-tools-6.7-for-rhel-8-x86_64-rpms
Transaction Summary
================================================================================================================================================
Install 3 Packages
Upgrade 38 Packages
Remove 3 Packages
Total download size: 144 M
Is this ok [y/N]: N
Operation aborted.
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
DNF module
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```ansible-playbook 2.9.7
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/var/lib/awx/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.5 (default, Sep 26 2019, 13:23:47) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
The following is the list of the enabled repositories.
```
Updating Subscription Management repositories.
Red Hat Satellite Tools 6.7 for RHEL 8 x86_64 (RPMs) 58 kB/s | 2.1 kB 00:00
Red Hat Enterprise Linux 8 for x86_64 - AppStream - Extended Update Support (RPMs) 79 kB/s | 2.8 kB 00:00
Red Hat Enterprise Linux 8 for x86_64 - BaseOS - Extended Update Support (RPMs) 64 kB/s | 2.4 kB 00:00
repo id repo name status
rhel-8-for-x86_64-appstream-eus-rpms Red Hat Enterprise Linux 8 for x86_64 - AppStream - Extended Update Sup 8,977
rhel-8-for-x86_64-baseos-eus-rpms Red Hat Enterprise Linux 8 for x86_64 - BaseOS - Extended Update Suppor 4,002
satellite-tools-6.7-for-rhel-8-x86_64-rpms Red Hat Satellite Tools 6.7 for RHEL 8 x86_64 (RPMs) 19
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
The problem appears when updating RHEL8 servers.
<!--- Paste example playbooks or commands between quotes below -->
The ansible code used in the task is as follows:
```
- name: Perform security update if not updating to newer minor release
dnf:
name: "*"
disablerepo: "*"
enablerepo: "{{ sat_repos }}"
security: yes
bugfix: yes
state: latest
update_only: yes
register: yum_update
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Packages should be updated in the same way as the dnf command.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
<eugbd-lapp0207> (1, '\\r\\n{"msg": "Depsolve Error occured: \\\\n Problem 1: cannot install both net-snmp-libs-1:5.8-12.el8_1.2.x86_64 and net-snmp-libs-1:5.8-12.el8_1.x86_64\\\\n - package net-snmp-1:5.8-12.el8_1.2.x86_64 requires net-snmp-libs(x86-64) = 1:5.8-12.el8_1.2, but none of the providers can be installed\\\\n - cannot install the best update candidate for package net-snmp-libs-1:5.8-12.el8_1.x86_64\\\\n - cannot install the best update candidate for package net-snmp-1:5.8-12.el8_1.x86_64\\\\n Problem 2: systemd-libs-239-18.el8_1.7.i686 has inferior architecture\\\\n - package systemd-239-18.el8_1.7.x86_64 requires systemd-libs = 239-18.el8_1.7, but none of the providers can be installed\\\\n - cannot install both systemd-libs-239-18.el8_1.7.x86_64 and systemd-libs-239-18.el8_1.4.x86_64\\\\n - cannot install the best update candidate for package systemd-libs-239-18.el8_1.4.x86_64\\\\n - cannot install the best update candidate for package systemd-239-18.el8_1.4.x86_64", "failures": [], "results": [], "rc": 1, "failed": true, "exception": " File \\\\"/tmp/ansible_dnf_payload_ef23nwrs/ansible_dnf_payload.zip/ansible/modules/dnf.py\\\\", line 1158, in ensure\\\\n File \\\\"/usr/lib/python3.6/site-packages/dnf/base.py\\\\", line 780, in resolve\\\\n raise exc\\\\n", "invocation": {"module_args": {"update_cache": true, "state": "latest", "disablerepo": ["*"], "name": ["*"], "bugfix": true, "update_only": true, "enablerepo": ["rhel-8-for-x86_64-appstream-eus-rpms", "rhel-8-for-x86_64-baseos-eus-rpms", "satellite-tools-6.7-for-rhel-8-x86_64-rpms"], "security": true, "allow_downgrade": false, "autoremove": false, "disable_gpg_check": false, "disable_plugin": [], "download_only": false, "enable_plugin": [], "exclude": [], "installroot": "/", "install_repoquery": true, "install_weak_deps": true, "skip_broken": false, "validate_certs": true, "lock_timeout": 30, "allowerasing": false, "nobest": false, "conf_file": null, "disable_excludes": null, "download_dir": null, "list": null, "releasever": null}}}\\r\\n', 'Shared connection to eugbd-lapp0207 closed.\\r\\n')
```
|
https://github.com/ansible/ansible/issues/72316
|
https://github.com/ansible/ansible/pull/72483
|
5654de6fceeabb190111d5fb5d3e092a7e5d7f3b
|
d8c637da37ccf8b07b74c1cfb22adff36188a0fd
| 2020-10-23T11:00:49Z |
python
| 2020-11-04T20:13:55Z |
test/integration/targets/dnf/tasks/filters_check_mode.yml
|
# We have a test repo set up with a valid updateinfo.xml which is referenced
# from its repomd.xml.
- block:
- set_fact:
updateinfo_repo: https://ansible-ci-files.s3.amazonaws.com/test/integration/targets/setup_rpm_repo/repo-with-updateinfo
- name: Install the test repo
yum_repository:
name: test-repo-with-updateinfo
description: test-repo-with-updateinfo
baseurl: "{{ updateinfo_repo }}"
gpgcheck: no
- name: Install old versions of toaster and oven
dnf:
name:
- "{{ updateinfo_repo }}/toaster-1.2.3.4-1.el8.noarch.rpm"
- "{{ updateinfo_repo }}/oven-1.2.3.4-1.el8.noarch.rpm"
disable_gpg_check: true
- name: Ask for pending updates (check_mode)
dnf:
name: '*'
state: latest
update_only: true
disable_gpg_check: true
check_mode: true
register: update_no_filter
- assert:
that:
- update_no_filter is changed
- '"would have if not in check mode" in update_no_filter.msg'
- '"Installed: toaster-1.2.3.5-1.el8.noarch" in update_no_filter.results'
- '"Installed: oven-1.2.3.5-1.el8.noarch" in update_no_filter.results'
- '"Removed: toaster-1.2.3.4-1.el8.noarch" in update_no_filter.results'
- '"Removed: oven-1.2.3.4-1.el8.noarch" in update_no_filter.results'
- name: Ask for pending updates with security=true (check_mode)
dnf:
name: '*'
state: latest
update_only: true
disable_gpg_check: true
security: true
check_mode: true
register: update_security
- assert:
that:
- update_security is changed
- '"would have if not in check mode" in update_security.msg'
- '"Installed: toaster-1.2.3.5-1.el8.noarch" in update_security.results'
- '"Removed: toaster-1.2.3.4-1.el8.noarch" in update_security.results'
- '"Installed: oven-1.2.3.5-1.el8.noarch" not in update_security.results'
- '"Removed: oven-1.2.3.4-1.el8.noarch" not in update_security.results'
- name: Ask for pending updates with bugfix=true (check_mode)
dnf:
name: '*'
state: latest
update_only: true
disable_gpg_check: true
bugfix: true
check_mode: true
register: update_bugfix
- assert:
that:
- update_bugfix is changed
- '"would have if not in check mode" in update_bugfix.msg'
- '"Installed: toaster-1.2.3.5-1.el8.noarch" not in update_bugfix.results'
- '"Removed: toaster-1.2.3.4-1.el8.noarch" not in update_bugfix.results'
- '"Installed: oven-1.2.3.5-1.el8.noarch" in update_bugfix.results'
- '"Removed: oven-1.2.3.4-1.el8.noarch" in update_bugfix.results'
- name: Ask for pending updates with bugfix=true and security=true (check_mode)
dnf:
name: '*'
state: latest
update_only: true
disable_gpg_check: true
bugfix: true
security: true
check_mode: true
register: update_bugfix
- assert:
that:
- update_bugfix is changed
- '"would have if not in check mode" in update_bugfix.msg'
- '"Installed: toaster-1.2.3.5-1.el8.noarch" in update_bugfix.results'
- '"Removed: toaster-1.2.3.4-1.el8.noarch" in update_bugfix.results'
- '"Installed: oven-1.2.3.5-1.el8.noarch" in update_bugfix.results'
- '"Removed: oven-1.2.3.4-1.el8.noarch" in update_bugfix.results'
always:
- name: Remove installed packages
dnf:
name:
- toaster
- oven
state: absent
- name: Remove the repo
yum_repository:
name: test-repo-with-updateinfo
state: absent
tags:
- filters
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,505 |
ansible-playbook --list-tasks omits "<role> : " when role name appears in task name
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
When using ansible-playbook with --list-tasks, it should list the tasks in the format:
```
role : task_name
```
but when the task name contains the role name, the "role : " prefix does not appear.
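The substring check behind this can be reduced to a few lines. Below is a minimal sketch (illustrative variable names, not the real Task attributes) of the old behavior versus the expected one:
```python
role_name = "test"
task_name = "this is a test"

# Old behavior: the role prefix is only added when the role name is NOT
# a substring of the task name, so "test : " is silently dropped here.
old = task_name if role_name in task_name else "%s : %s" % (role_name, task_name)

# Expected behavior: always prefix the role name.
new = "%s : %s" % (role_name, task_name)

print(old)  # this is a test
print(new)  # test : this is a test
```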
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-playbook --list-tasks option.
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
$ ansible --version
ansible 2.10.2
config file = /Users/john.doe/.ansible.cfg
configured module search path = ['/Users/john.doe/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.9.0 (default, Oct 27 2020, 14:13:35) [Clang 11.0.0 (clang-1100.0.33.17)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
$ ansible-config dump --only-changed
ANSIBLE_SSH_ARGS(env: ANSIBLE_SSH_ARGS) = -o PreferredAuthentications=publickey,keyboard-interactive -o ControlMaster=auto -o ControlPersist=30m
DEFAULT_DEBUG(env: ANSIBLE_DEBUG) = False
DEFAULT_FORKS(/Users/john.doe/.ansible.cfg) = 20
DEFAULT_HOST_LIST(/Users/john.doe/.ansible.cfg) = ['/Users/m.wang/m/Ansible/hw-ansible-modified/inventory.txt']
DEFAULT_LOG_PATH(/Users/john.doe/.ansible.cfg) = /tmp/ansible.log
DEFAULT_MANAGED_STR(/Users/john.doe/.ansible.cfg) = This file is managed by Ansible
DEFAULT_TIMEOUT(/Users/john.doe/.ansible.cfg) = 30
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Darwin Kernel Version 18.7.0 (MacOS Mojave)
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
See below
<!--- Paste example playbooks or commands between quotes below -->
```yaml
$ cat test.yml
- hosts:
- localhost
roles:
- role: test
$ cat roles/test/tasks/main.yml
- name: this is a tes-t
become: true
apt: name=make
$ ansible-playbook test.yml --list-tasks
playbook: test.yml
play #1 (localhost): localhost TAGS: []
tasks:
test : this is a tes-t TAGS: []
$ sed -ie 's/tes-t/test/' roles/test/tasks/main.yml
$ cat roles/test/tasks/main.yml
- name: this is a test
become: true
apt: name=make
$ ansible-playbook test.yml --list-tasks
playbook: test.yml
play #1 (localhost): localhost TAGS: []
tasks:
this is a test TAGS: []
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expect the same output regardless of whether the task name uses "tes-t" or "test", which is the name of the role.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
`<role name> : ` disappeared from the output.
<!--- Paste verbatim command output between quotes -->
```paste below
test : this is a tes-t TAGS: []
vs
this is a test TAGS: []
```
|
https://github.com/ansible/ansible/issues/72505
|
https://github.com/ansible/ansible/pull/72511
|
c8590c7482dcfc40f7054f629b7b6179f9e38daf
|
0ed7bfc694e5e2efe49fa0e1c8fea0a392c78c04
| 2020-11-05T19:44:57Z |
python
| 2020-11-06T16:46:58Z |
changelogs/fragments/72511-always-prepend-role-to-task-name.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,505 |
ansible-playbook --list-tasks omits "<role> : " when role name appears in task name
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
When using ansible-playbook with --list-tasks, it should list the tasks in the format:
```
role : task_name
```
but when the task name contains the role name, the "role : " prefix does not appear.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-playbook --list-tasks option.
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
$ ansible --version
ansible 2.10.2
config file = /Users/john.doe/.ansible.cfg
configured module search path = ['/Users/john.doe/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.9.0 (default, Oct 27 2020, 14:13:35) [Clang 11.0.0 (clang-1100.0.33.17)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
$ ansible-config dump --only-changed
ANSIBLE_SSH_ARGS(env: ANSIBLE_SSH_ARGS) = -o PreferredAuthentications=publickey,keyboard-interactive -o ControlMaster=auto -o ControlPersist=30m
DEFAULT_DEBUG(env: ANSIBLE_DEBUG) = False
DEFAULT_FORKS(/Users/john.doe/.ansible.cfg) = 20
DEFAULT_HOST_LIST(/Users/john.doe/.ansible.cfg) = ['/Users/m.wang/m/Ansible/hw-ansible-modified/inventory.txt']
DEFAULT_LOG_PATH(/Users/john.doe/.ansible.cfg) = /tmp/ansible.log
DEFAULT_MANAGED_STR(/Users/john.doe/.ansible.cfg) = This file is managed by Ansible
DEFAULT_TIMEOUT(/Users/john.doe/.ansible.cfg) = 30
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Darwin Kernel Version 18.7.0 (MacOS Mojave)
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
See below
<!--- Paste example playbooks or commands between quotes below -->
```yaml
$ cat test.yml
- hosts:
- localhost
roles:
- role: test
$ cat roles/test/tasks/main.yml
- name: this is a tes-t
become: true
apt: name=make
$ ansible-playbook test.yml --list-tasks
playbook: test.yml
play #1 (localhost): localhost TAGS: []
tasks:
test : this is a tes-t TAGS: []
$ sed -ie 's/tes-t/test/' roles/test/tasks/main.yml
$ cat roles/test/tasks/main.yml
- name: this is a test
become: true
apt: name=make
$ ansible-playbook test.yml --list-tasks
playbook: test.yml
play #1 (localhost): localhost TAGS: []
tasks:
this is a test TAGS: []
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expect the same output regardless of whether the task name uses "tes-t" or "test", which is the name of the role.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
`<role name> : ` disappeared from the output.
<!--- Paste verbatim command output between quotes -->
```paste below
test : this is a tes-t TAGS: []
vs
this is a test TAGS: []
```
|
https://github.com/ansible/ansible/issues/72505
|
https://github.com/ansible/ansible/pull/72511
|
c8590c7482dcfc40f7054f629b7b6179f9e38daf
|
0ed7bfc694e5e2efe49fa0e1c8fea0a392c78c04
| 2020-11-05T19:44:57Z |
python
| 2020-11-06T16:46:58Z |
lib/ansible/playbook/task.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleParserError, AnsibleUndefinedVariable, AnsibleAssertionError
from ansible.module_utils._text import to_native
from ansible.module_utils.six import iteritems, string_types
from ansible.parsing.mod_args import ModuleArgsParser
from ansible.parsing.yaml.objects import AnsibleBaseYAMLObject, AnsibleMapping
from ansible.plugins.loader import lookup_loader
from ansible.playbook.attribute import FieldAttribute
from ansible.playbook.base import Base
from ansible.playbook.block import Block
from ansible.playbook.collectionsearch import CollectionSearch
from ansible.playbook.conditional import Conditional
from ansible.playbook.loop_control import LoopControl
from ansible.playbook.role import Role
from ansible.playbook.taggable import Taggable
from ansible.utils.collection_loader import AnsibleCollectionConfig
from ansible.utils.display import Display
from ansible.utils.sentinel import Sentinel
__all__ = ['Task']
display = Display()
class Task(Base, Conditional, Taggable, CollectionSearch):
"""
A task is a language feature that represents a call to a module, with given arguments and other parameters.
A handler is a subclass of a task.
Usage:
Task.load(datastructure) -> Task
Task.something(...)
"""
# =================================================================================
# ATTRIBUTES
# load_<attribute_name> and
# validate_<attribute_name>
# will be used if defined
# might be possible to define others
# NOTE: ONLY set defaults on task attributes that are not inheritable,
# inheritance is only triggered if the 'current value' is None,
#   default can be set at play/top level object and inheritance will take its course.
_args = FieldAttribute(isa='dict', default=dict)
_action = FieldAttribute(isa='string')
_async_val = FieldAttribute(isa='int', default=0, alias='async')
_changed_when = FieldAttribute(isa='list', default=list)
_delay = FieldAttribute(isa='int', default=5)
_delegate_to = FieldAttribute(isa='string')
_delegate_facts = FieldAttribute(isa='bool')
_failed_when = FieldAttribute(isa='list', default=list)
_loop = FieldAttribute()
_loop_control = FieldAttribute(isa='class', class_type=LoopControl, inherit=False)
_notify = FieldAttribute(isa='list')
_poll = FieldAttribute(isa='int', default=C.DEFAULT_POLL_INTERVAL)
_register = FieldAttribute(isa='string', static=True)
_retries = FieldAttribute(isa='int', default=3)
_until = FieldAttribute(isa='list', default=list)
# deprecated, used to be loop and loop_args but loop has been repurposed
_loop_with = FieldAttribute(isa='string', private=True, inherit=False)
def __init__(self, block=None, role=None, task_include=None):
''' constructs a task; without the Task.load classmethod, it will be pretty blank '''
# This is a reference of all the candidate action names for transparent execution of module_defaults with redirected content
# This isn't a FieldAttribute to prevent it from being set via the playbook
self._ansible_internal_redirect_list = []
self._role = role
self._parent = None
self.implicit = False
if task_include:
self._parent = task_include
else:
self._parent = block
super(Task, self).__init__()
def get_path(self):
''' return the absolute path of the task with its line number '''
path = ""
if hasattr(self, '_ds') and hasattr(self._ds, '_data_source') and hasattr(self._ds, '_line_number'):
path = "%s:%s" % (self._ds._data_source, self._ds._line_number)
elif hasattr(self._parent._play, '_ds') and hasattr(self._parent._play._ds, '_data_source') and hasattr(self._parent._play._ds, '_line_number'):
path = "%s:%s" % (self._parent._play._ds._data_source, self._parent._play._ds._line_number)
return path
def get_name(self, include_role_fqcn=True):
''' return the name of the task '''
if self._role:
role_name = self._role.get_name(include_role_fqcn=include_role_fqcn)
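            # NOTE: this substring check drops the "role : " prefix whenever
            # the role name happens to appear inside the task name (issue
            # #72505); PR #72511 always prepends the role name instead.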
if self._role and self.name and role_name not in self.name:
return "%s : %s" % (role_name, self.name)
elif self.name:
return self.name
else:
if self._role:
return "%s : %s" % (role_name, self.action)
else:
return "%s" % (self.action,)
def _merge_kv(self, ds):
if ds is None:
return ""
elif isinstance(ds, string_types):
return ds
elif isinstance(ds, dict):
buf = ""
for (k, v) in iteritems(ds):
if k.startswith('_'):
continue
buf = buf + "%s=%s " % (k, v)
buf = buf.strip()
return buf
@staticmethod
def load(data, block=None, role=None, task_include=None, variable_manager=None, loader=None):
t = Task(block=block, role=role, task_include=task_include)
return t.load_data(data, variable_manager=variable_manager, loader=loader)
def __repr__(self):
''' returns a human readable representation of the task '''
if self.get_name() in C._ACTION_META:
return "TASK: meta (%s)" % self.args['_raw_params']
else:
return "TASK: %s" % self.get_name()
def _preprocess_with_loop(self, ds, new_ds, k, v):
''' take a lookup plugin name and store it correctly '''
loop_name = k.replace("with_", "")
if new_ds.get('loop') is not None or new_ds.get('loop_with') is not None:
raise AnsibleError("duplicate loop in task: %s" % loop_name, obj=ds)
if v is None:
raise AnsibleError("you must specify a value when using %s" % k, obj=ds)
new_ds['loop_with'] = loop_name
new_ds['loop'] = v
# display.deprecated("with_ type loops are being phased out, use the 'loop' keyword instead",
# version="2.10", collection_name='ansible.builtin')
def preprocess_data(self, ds):
'''
tasks are especially complex arguments so need pre-processing.
keep it short.
'''
if not isinstance(ds, dict):
raise AnsibleAssertionError('ds (%s) should be a dict but was a %s' % (ds, type(ds)))
# the new, cleaned datastructure, which will have legacy
# items reduced to a standard structure suitable for the
# attributes of the task class
new_ds = AnsibleMapping()
if isinstance(ds, AnsibleBaseYAMLObject):
new_ds.ansible_pos = ds.ansible_pos
# since this affects the task action parsing, we have to resolve in preprocess instead of in typical validator
default_collection = AnsibleCollectionConfig.default_collection
collections_list = ds.get('collections')
if collections_list is None:
# use the parent value if our ds doesn't define it
collections_list = self.collections
else:
# Validate this untemplated field early on to guarantee we are dealing with a list.
# This is also done in CollectionSearch._load_collections() but this runs before that call.
collections_list = self.get_validated_value('collections', self._collections, collections_list, None)
if default_collection and not self._role: # FIXME: and not a collections role
if collections_list:
if default_collection not in collections_list:
collections_list.insert(0, default_collection)
else:
collections_list = [default_collection]
if collections_list and 'ansible.builtin' not in collections_list and 'ansible.legacy' not in collections_list:
collections_list.append('ansible.legacy')
if collections_list:
ds['collections'] = collections_list
# use the args parsing class to determine the action, args,
# and the delegate_to value from the various possible forms
# supported as legacy
args_parser = ModuleArgsParser(task_ds=ds, collection_list=collections_list)
try:
(action, args, delegate_to) = args_parser.parse()
except AnsibleParserError as e:
            # if the raised exception was created with obj=ds args, then it includes the detail
            # so we don't need to add it and can just re-raise.
if e._obj:
raise
# But if it wasn't, we can add the yaml object now to get more detail
raise AnsibleParserError(to_native(e), obj=ds, orig_exc=e)
else:
self._ansible_internal_redirect_list = args_parser.internal_redirect_list[:]
# the command/shell/script modules used to support the `cmd` arg,
# which corresponds to what we now call _raw_params, so move that
# value over to _raw_params (assuming it is empty)
if action in C._ACTION_HAS_CMD:
if 'cmd' in args:
if args.get('_raw_params', '') != '':
raise AnsibleError("The 'cmd' argument cannot be used when other raw parameters are specified."
" Please put everything in one or the other place.", obj=ds)
args['_raw_params'] = args.pop('cmd')
new_ds['action'] = action
new_ds['args'] = args
new_ds['delegate_to'] = delegate_to
# we handle any 'vars' specified in the ds here, as we may
# be adding things to them below (special handling for includes).
# When that deprecated feature is removed, this can be too.
if 'vars' in ds:
# _load_vars is defined in Base, and is used to load a dictionary
# or list of dictionaries in a standard way
new_ds['vars'] = self._load_vars(None, ds.get('vars'))
else:
new_ds['vars'] = dict()
for (k, v) in iteritems(ds):
if k in ('action', 'local_action', 'args', 'delegate_to') or k == action or k == 'shell':
# we don't want to re-assign these values, which were determined by the ModuleArgsParser() above
continue
elif k.startswith('with_') and k.replace("with_", "") in lookup_loader:
# transform into loop property
self._preprocess_with_loop(ds, new_ds, k, v)
else:
# pre-2.0 syntax allowed variables for include statements at the top level of the task,
# so we move those into the 'vars' dictionary here, and show a deprecation message
# as we will remove this at some point in the future.
if action in C._ACTION_INCLUDE and k not in self._valid_attrs and k not in self.DEPRECATED_ATTRIBUTES:
display.deprecated("Specifying include variables at the top-level of the task is deprecated."
" Please see:\nhttps://docs.ansible.com/ansible/playbooks_roles.html#task-include-files-and-encouraging-reuse\n\n"
" for currently supported syntax regarding included files and variables",
version="2.12", collection_name='ansible.builtin')
new_ds['vars'][k] = v
elif C.INVALID_TASK_ATTRIBUTE_FAILED or k in self._valid_attrs:
new_ds[k] = v
else:
display.warning("Ignoring invalid attribute: %s" % k)
return super(Task, self).preprocess_data(new_ds)
def _load_loop_control(self, attr, ds):
if not isinstance(ds, dict):
raise AnsibleParserError(
"the `loop_control` value must be specified as a dictionary and cannot "
"be a variable itself (though it can contain variables)",
obj=ds,
)
return LoopControl.load(data=ds, variable_manager=self._variable_manager, loader=self._loader)
def _validate_attributes(self, ds):
try:
super(Task, self)._validate_attributes(ds)
except AnsibleParserError as e:
e.message += '\nThis error can be suppressed as a warning using the "invalid_task_attribute_failed" configuration'
raise e
def post_validate(self, templar):
'''
Override of base class post_validate, to also do final validation on
the block and task include (if any) to which this task belongs.
'''
if self._parent:
self._parent.post_validate(templar)
if AnsibleCollectionConfig.default_collection:
pass
super(Task, self).post_validate(templar)
def _post_validate_loop(self, attr, value, templar):
'''
Override post validation for the loop field, which is templated
specially in the TaskExecutor class when evaluating loops.
'''
return value
def _post_validate_environment(self, attr, value, templar):
'''
Override post validation of vars on the play, as we don't want to
template these too early.
'''
env = {}
if value is not None:
def _parse_env_kv(k, v):
try:
env[k] = templar.template(v, convert_bare=False)
except AnsibleUndefinedVariable as e:
error = to_native(e)
if self.action in C._ACTION_FACT_GATHERING and 'ansible_facts.env' in error or 'ansible_env' in error:
# ignore as fact gathering is required for 'env' facts
return
raise
if isinstance(value, list):
for env_item in value:
if isinstance(env_item, dict):
for k in env_item:
_parse_env_kv(k, env_item[k])
else:
isdict = templar.template(env_item, convert_bare=False)
if isinstance(isdict, dict):
env.update(isdict)
else:
display.warning("could not parse environment value, skipping: %s" % value)
elif isinstance(value, dict):
# should not really happen
env = dict()
for env_item in value:
_parse_env_kv(env_item, value[env_item])
else:
# at this point it should be a simple string, also should not happen
env = templar.template(value, convert_bare=False)
return env
def _post_validate_changed_when(self, attr, value, templar):
'''
changed_when is evaluated after the execution of the task is complete,
and should not be templated during the regular post_validate step.
'''
return value
def _post_validate_failed_when(self, attr, value, templar):
'''
failed_when is evaluated after the execution of the task is complete,
and should not be templated during the regular post_validate step.
'''
return value
def _post_validate_until(self, attr, value, templar):
'''
until is evaluated after the execution of the task is complete,
and should not be templated during the regular post_validate step.
'''
return value
def get_vars(self):
all_vars = dict()
if self._parent:
all_vars.update(self._parent.get_vars())
all_vars.update(self.vars)
if 'tags' in all_vars:
del all_vars['tags']
if 'when' in all_vars:
del all_vars['when']
return all_vars
def get_include_params(self):
all_vars = dict()
if self._parent:
all_vars.update(self._parent.get_include_params())
if self.action in C._ACTION_ALL_INCLUDES:
all_vars.update(self.vars)
return all_vars
def copy(self, exclude_parent=False, exclude_tasks=False):
new_me = super(Task, self).copy()
# if the task has an associated list of candidate names, copy it to the new object too
new_me._ansible_internal_redirect_list = self._ansible_internal_redirect_list[:]
new_me._parent = None
if self._parent and not exclude_parent:
new_me._parent = self._parent.copy(exclude_tasks=exclude_tasks)
new_me._role = None
if self._role:
new_me._role = self._role
new_me.implicit = self.implicit
return new_me
def serialize(self):
data = super(Task, self).serialize()
if not self._squashed and not self._finalized:
if self._parent:
data['parent'] = self._parent.serialize()
data['parent_type'] = self._parent.__class__.__name__
if self._role:
data['role'] = self._role.serialize()
if self._ansible_internal_redirect_list:
data['_ansible_internal_redirect_list'] = self._ansible_internal_redirect_list[:]
data['implicit'] = self.implicit
return data
def deserialize(self, data):
# import is here to avoid import loops
from ansible.playbook.task_include import TaskInclude
from ansible.playbook.handler_task_include import HandlerTaskInclude
parent_data = data.get('parent', None)
if parent_data:
parent_type = data.get('parent_type')
if parent_type == 'Block':
p = Block()
elif parent_type == 'TaskInclude':
p = TaskInclude()
elif parent_type == 'HandlerTaskInclude':
p = HandlerTaskInclude()
p.deserialize(parent_data)
self._parent = p
del data['parent']
role_data = data.get('role')
if role_data:
r = Role()
r.deserialize(role_data)
self._role = r
del data['role']
self._ansible_internal_redirect_list = data.get('_ansible_internal_redirect_list', [])
self.implicit = data.get('implicit', False)
super(Task, self).deserialize(data)
def set_loader(self, loader):
'''
Sets the loader on this object and recursively on parent, child objects.
This is used primarily after the Task has been serialized/deserialized, which
does not preserve the loader.
'''
self._loader = loader
if self._parent:
self._parent.set_loader(loader)
def _get_parent_attribute(self, attr, extend=False, prepend=False):
'''
Generic logic to get the attribute or parent attribute for a task value.
'''
extend = self._valid_attrs[attr].extend
prepend = self._valid_attrs[attr].prepend
try:
value = self._attributes[attr]
# If parent is static, we can grab attrs from the parent
# otherwise, defer to the grandparent
if getattr(self._parent, 'statically_loaded', True):
_parent = self._parent
else:
_parent = self._parent._parent
if _parent and (value is Sentinel or extend):
if getattr(_parent, 'statically_loaded', True):
# vars are always inheritable, other attributes might not be for the parent but still should be for other ancestors
if attr != 'vars' and hasattr(_parent, '_get_parent_attribute'):
parent_value = _parent._get_parent_attribute(attr)
else:
parent_value = _parent._attributes.get(attr, Sentinel)
if extend:
value = self._extend_value(value, parent_value, prepend)
else:
value = parent_value
except KeyError:
pass
return value
def get_dep_chain(self):
if self._parent:
return self._parent.get_dep_chain()
else:
return None
def get_search_path(self):
'''
Return the list of paths you should search for files, in order.
This follows role/playbook dependency chain.
'''
path_stack = []
dep_chain = self.get_dep_chain()
# inside role: add the dependency chain from current to dependent
if dep_chain:
path_stack.extend(reversed([x._role_path for x in dep_chain]))
# add path of task itself, unless it is already in the list
task_dir = os.path.dirname(self.get_path())
if task_dir not in path_stack:
path_stack.append(task_dir)
return path_stack
def all_parents_static(self):
if self._parent:
return self._parent.all_parents_static()
return True
def get_first_parent_include(self):
from ansible.playbook.task_include import TaskInclude
if self._parent:
if isinstance(self._parent, TaskInclude):
return self._parent
return self._parent.get_first_parent_include()
return None
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,505 |
ansible-playbook --list-tasks omits "<role> : " when role name appears in task name
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
When using ansible-playbook with --list-tasks, it should list the tasks in the format:
```
role : task_name
```
but when the task name contains the role name, the "role : " prefix does not appear.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-playbook --list-tasks option.
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
$ ansible --version
ansible 2.10.2
config file = /Users/john.doe/.ansible.cfg
configured module search path = ['/Users/john.doe/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.9.0 (default, Oct 27 2020, 14:13:35) [Clang 11.0.0 (clang-1100.0.33.17)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
$ ansible-config dump --only-changed
ANSIBLE_SSH_ARGS(env: ANSIBLE_SSH_ARGS) = -o PreferredAuthentications=publickey,keyboard-interactive -o ControlMaster=auto -o ControlPersist=30m
DEFAULT_DEBUG(env: ANSIBLE_DEBUG) = False
DEFAULT_FORKS(/Users/john.doe/.ansible.cfg) = 20
DEFAULT_HOST_LIST(/Users/john.doe/.ansible.cfg) = ['/Users/m.wang/m/Ansible/hw-ansible-modified/inventory.txt']
DEFAULT_LOG_PATH(/Users/john.doe/.ansible.cfg) = /tmp/ansible.log
DEFAULT_MANAGED_STR(/Users/john.doe/.ansible.cfg) = This file is managed by Ansible
DEFAULT_TIMEOUT(/Users/john.doe/.ansible.cfg) = 30
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Darwin Kernel Version 18.7.0 (MacOS Mojave)
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
See below
<!--- Paste example playbooks or commands between quotes below -->
```yaml
$ cat test.yml
- hosts:
- localhost
roles:
- role: test
$ cat roles/test/tasks/main.yml
- name: this is a tes-t
become: true
apt: name=make
$ ansible-playbook test.yml --list-tasks
playbook: test.yml
play #1 (localhost): localhost TAGS: []
tasks:
test : this is a tes-t TAGS: []
$ sed -ie 's/tes-t/test/' roles/test/tasks/main.yml
$ cat roles/test/tasks/main.yml
- name: this is a test
become: true
apt: name=make
$ ansible-playbook test.yml --list-tasks
playbook: test.yml
play #1 (localhost): localhost TAGS: []
tasks:
this is a test TAGS: []
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expect the same output regardless of whether the task name uses "tes-t" or "test", which is the name of the role.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
`<role name> : ` disappeared from the output.
<!--- Paste verbatim command output between quotes -->
```paste below
test : this is a tes-t TAGS: []
vs
this is a test TAGS: []
```
|
https://github.com/ansible/ansible/issues/72505
|
https://github.com/ansible/ansible/pull/72511
|
c8590c7482dcfc40f7054f629b7b6179f9e38daf
|
0ed7bfc694e5e2efe49fa0e1c8fea0a392c78c04
| 2020-11-05T19:44:57Z |
python
| 2020-11-06T16:46:58Z |
lib/ansible/plugins/strategy/__init__.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import cmd
import functools
import os
import pprint
import sys
import threading
import time
from collections import deque
from multiprocessing import Lock
from jinja2.exceptions import UndefinedError
from ansible import constants as C
from ansible import context
from ansible.errors import AnsibleError, AnsibleFileNotFound, AnsibleParserError, AnsibleUndefinedVariable
from ansible.executor import action_write_locks
from ansible.executor.process.worker import WorkerProcess
from ansible.executor.task_result import TaskResult
from ansible.executor.task_queue_manager import CallbackSend
from ansible.module_utils.six.moves import queue as Queue
from ansible.module_utils.six import iteritems, itervalues, string_types
from ansible.module_utils._text import to_text
from ansible.module_utils.connection import Connection, ConnectionError
from ansible.playbook.conditional import Conditional
from ansible.playbook.handler import Handler
from ansible.playbook.helpers import load_list_of_blocks
from ansible.playbook.included_file import IncludedFile
from ansible.playbook.task_include import TaskInclude
from ansible.plugins import loader as plugin_loader
from ansible.template import Templar
from ansible.utils.display import Display
from ansible.utils.vars import combine_vars
from ansible.vars.clean import strip_internal_keys, module_response_deepcopy
display = Display()
__all__ = ['StrategyBase']
# This list can be an exact match, or start of string bound
# does not accept regex
ALWAYS_DELEGATE_FACT_PREFIXES = frozenset((
'discovered_interpreter_',
))
class StrategySentinel:
pass
_sentinel = StrategySentinel()
def post_process_whens(result, task, templar):
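    # Evaluate any 'changed_when'/'failed_when' conditionals on the finished
    # task result and store the outcomes back onto the result dict.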
cond = None
if task.changed_when:
cond = Conditional(loader=templar._loader)
cond.when = task.changed_when
result['changed'] = cond.evaluate_conditional(templar, templar.available_variables)
if task.failed_when:
if cond is None:
cond = Conditional(loader=templar._loader)
cond.when = task.failed_when
failed_when_result = cond.evaluate_conditional(templar, templar.available_variables)
result['failed_when_result'] = result['failed'] = failed_when_result
def results_thread_main(strategy):
while True:
try:
result = strategy._final_q.get()
if isinstance(result, StrategySentinel):
break
elif isinstance(result, CallbackSend):
strategy._tqm.send_callback(result.method_name, *result.args, **result.kwargs)
elif isinstance(result, TaskResult):
with strategy._results_lock:
# only handlers have the listen attr, so this must be a handler
# we split up the results into two queues here to make sure
# handler and regular result processing don't cross wires
if 'listen' in result._task_fields:
strategy._handler_results.append(result)
else:
strategy._results.append(result)
else:
display.warning('Received an invalid object (%s) in the result queue: %r' % (type(result), result))
except (IOError, EOFError):
break
except Queue.Empty:
pass
def debug_closure(func):
"""Closure to wrap ``StrategyBase._process_pending_results`` and invoke the task debugger"""
@functools.wraps(func)
def inner(self, iterator, one_pass=False, max_passes=None, do_handlers=False):
status_to_stats_map = (
('is_failed', 'failures'),
('is_unreachable', 'dark'),
('is_changed', 'changed'),
('is_skipped', 'skipped'),
)
# We don't know the host yet, copy the previous states, for lookup after we process new results
prev_host_states = iterator._host_states.copy()
results = func(self, iterator, one_pass=one_pass, max_passes=max_passes, do_handlers=do_handlers)
_processed_results = []
for result in results:
task = result._task
host = result._host
_queued_task_args = self._queued_task_cache.pop((host.name, task._uuid), None)
task_vars = _queued_task_args['task_vars']
play_context = _queued_task_args['play_context']
# Try to grab the previous host state, if it doesn't exist use get_host_state to generate an empty state
try:
prev_host_state = prev_host_states[host.name]
except KeyError:
prev_host_state = iterator.get_host_state(host)
while result.needs_debugger(globally_enabled=self.debugger_active):
next_action = NextAction()
dbg = Debugger(task, host, task_vars, play_context, result, next_action)
dbg.cmdloop()
if next_action.result == NextAction.REDO:
# rollback host state
self._tqm.clear_failed_hosts()
iterator._host_states[host.name] = prev_host_state
for method, what in status_to_stats_map:
if getattr(result, method)():
self._tqm._stats.decrement(what, host.name)
self._tqm._stats.decrement('ok', host.name)
# redo
self._queue_task(host, task, task_vars, play_context)
_processed_results.extend(debug_closure(func)(self, iterator, one_pass))
break
elif next_action.result == NextAction.CONTINUE:
_processed_results.append(result)
break
elif next_action.result == NextAction.EXIT:
# Matches KeyboardInterrupt from bin/ansible
sys.exit(99)
else:
_processed_results.append(result)
return _processed_results
return inner
class StrategyBase:
'''
This is the base class for strategy plugins, which contains some common
code useful to all strategies like running handlers, cleanup actions, etc.
'''
# by default, strategies should support throttling but we allow individual
# strategies to disable this and either forego supporting it or managing
# the throttling internally (as `free` does)
ALLOW_BASE_THROTTLING = True
def __init__(self, tqm):
self._tqm = tqm
self._inventory = tqm.get_inventory()
self._workers = tqm._workers
self._variable_manager = tqm.get_variable_manager()
self._loader = tqm.get_loader()
self._final_q = tqm._final_q
self._step = context.CLIARGS.get('step', False)
self._diff = context.CLIARGS.get('diff', False)
# the task cache is a dictionary of tuples of (host.name, task._uuid)
# used to find the original task object of in-flight tasks and to store
# the task args/vars and play context info used to queue the task.
self._queued_task_cache = {}
# Backwards compat: self._display isn't really needed, just import the global display and use that.
self._display = display
# internal counters
self._pending_results = 0
self._pending_handler_results = 0
self._cur_worker = 0
# this dictionary is used to keep track of hosts that have
# outstanding tasks still in queue
self._blocked_hosts = dict()
# this dictionary is used to keep track of hosts that have
# flushed handlers
self._flushed_hosts = dict()
self._results = deque()
self._handler_results = deque()
self._results_lock = threading.Condition(threading.Lock())
# create the result processing thread for reading results in the background
self._results_thread = threading.Thread(target=results_thread_main, args=(self,))
self._results_thread.daemon = True
self._results_thread.start()
# holds the list of active (persistent) connections to be shutdown at
# play completion
self._active_connections = dict()
# Caches for get_host calls, to avoid calling excessively
# These values should be set at the top of the ``run`` method of each
# strategy plugin. Use ``_set_hosts_cache`` to set these values
self._hosts_cache = []
self._hosts_cache_all = []
self.debugger_active = C.ENABLE_TASK_DEBUGGER
def _set_hosts_cache(self, play, refresh=True):
"""Responsible for setting _hosts_cache and _hosts_cache_all
See comment in ``__init__`` for the purpose of these caches
"""
if not refresh and all((self._hosts_cache, self._hosts_cache_all)):
return
if Templar(None).is_template(play.hosts):
_pattern = 'all'
else:
_pattern = play.hosts or 'all'
self._hosts_cache_all = [h.name for h in self._inventory.get_hosts(pattern=_pattern, ignore_restrictions=True)]
self._hosts_cache = [h.name for h in self._inventory.get_hosts(play.hosts, order=play.order)]
def cleanup(self):
# close active persistent connections
for sock in itervalues(self._active_connections):
try:
conn = Connection(sock)
conn.reset()
except ConnectionError as e:
# most likely socket is already closed
display.debug("got an error while closing persistent connection: %s" % e)
self._final_q.put(_sentinel)
self._results_thread.join()
def run(self, iterator, play_context, result=0):
# execute one more pass through the iterator without peeking, to
# make sure that all of the hosts are advanced to their final task.
# This should be safe, as everything should be ITERATING_COMPLETE by
# this point, though the strategy may not advance the hosts itself.
for host in self._hosts_cache:
if host not in self._tqm._unreachable_hosts:
try:
iterator.get_next_task_for_host(self._inventory.hosts[host])
except KeyError:
iterator.get_next_task_for_host(self._inventory.get_host(host))
# save the failed/unreachable hosts, as the run_handlers()
# method will clear that information during its execution
failed_hosts = iterator.get_failed_hosts()
unreachable_hosts = self._tqm._unreachable_hosts.keys()
display.debug("running handlers")
handler_result = self.run_handlers(iterator, play_context)
if isinstance(handler_result, bool) and not handler_result:
result |= self._tqm.RUN_ERROR
elif not handler_result:
result |= handler_result
# now update with the hosts (if any) that failed or were
# unreachable during the handler execution phase
failed_hosts = set(failed_hosts).union(iterator.get_failed_hosts())
unreachable_hosts = set(unreachable_hosts).union(self._tqm._unreachable_hosts.keys())
        # return the appropriate code, depending on the status of the hosts after the run
if not isinstance(result, bool) and result != self._tqm.RUN_OK:
return result
elif len(unreachable_hosts) > 0:
return self._tqm.RUN_UNREACHABLE_HOSTS
elif len(failed_hosts) > 0:
return self._tqm.RUN_FAILED_HOSTS
else:
return self._tqm.RUN_OK
def get_hosts_remaining(self, play):
self._set_hosts_cache(play, refresh=False)
ignore = set(self._tqm._failed_hosts).union(self._tqm._unreachable_hosts)
return [host for host in self._hosts_cache if host not in ignore]
def get_failed_hosts(self, play):
self._set_hosts_cache(play, refresh=False)
return [host for host in self._hosts_cache if host in self._tqm._failed_hosts]
def add_tqm_variables(self, vars, play):
'''
Base class method to add extra variables/information to the list of task
vars sent through the executor engine regarding the task queue manager state.
'''
vars['ansible_current_hosts'] = self.get_hosts_remaining(play)
vars['ansible_failed_hosts'] = self.get_failed_hosts(play)
def _queue_task(self, host, task, task_vars, play_context):
''' handles queueing the task up to be sent to a worker '''
display.debug("entering _queue_task() for %s/%s" % (host.name, task.action))
# Add a write lock for tasks.
# Maybe this should be added somewhere further up the call stack but
# this is the earliest in the code where we have task (1) extracted
# into its own variable and (2) there's only a single code path
# leading to the module being run. This is called by three
# functions: __init__.py::_do_handler_run(), linear.py::run(), and
# free.py::run() so we'd have to add to all three to do it there.
# The next common higher level is __init__.py::run() and that has
# tasks inside of play_iterator so we'd have to extract them to do it
# there.
if task.action not in action_write_locks.action_write_locks:
display.debug('Creating lock for %s' % task.action)
action_write_locks.action_write_locks[task.action] = Lock()
# create a templar and template things we need later for the queuing process
templar = Templar(loader=self._loader, variables=task_vars)
try:
throttle = int(templar.template(task.throttle))
except Exception as e:
raise AnsibleError("Failed to convert the throttle value to an integer.", obj=task._ds, orig_exc=e)
# and then queue the new task
try:
# Determine the "rewind point" of the worker list. This means we start
# iterating over the list of workers until the end of the list is found.
# Normally, that is simply the length of the workers list (as determined
# by the forks or serial setting), however a task/block/play may "throttle"
# that limit down.
rewind_point = len(self._workers)
if throttle > 0 and self.ALLOW_BASE_THROTTLING:
if task.run_once:
display.debug("Ignoring 'throttle' as 'run_once' is also set for '%s'" % task.get_name())
else:
if throttle <= rewind_point:
display.debug("task: %s, throttle: %d" % (task.get_name(), throttle))
rewind_point = throttle
queued = False
starting_worker = self._cur_worker
while True:
if self._cur_worker >= rewind_point:
self._cur_worker = 0
worker_prc = self._workers[self._cur_worker]
if worker_prc is None or not worker_prc.is_alive():
self._queued_task_cache[(host.name, task._uuid)] = {
'host': host,
'task': task,
'task_vars': task_vars,
'play_context': play_context
}
worker_prc = WorkerProcess(self._final_q, task_vars, host, task, play_context, self._loader, self._variable_manager, plugin_loader)
self._workers[self._cur_worker] = worker_prc
self._tqm.send_callback('v2_runner_on_start', host, task)
worker_prc.start()
display.debug("worker is %d (out of %d available)" % (self._cur_worker + 1, len(self._workers)))
queued = True
self._cur_worker += 1
if self._cur_worker >= rewind_point:
self._cur_worker = 0
if queued:
break
elif self._cur_worker == starting_worker:
time.sleep(0.0001)
if isinstance(task, Handler):
self._pending_handler_results += 1
else:
self._pending_results += 1
except (EOFError, IOError, AssertionError) as e:
# most likely an abort
display.debug("got an error while queuing: %s" % e)
return
display.debug("exiting _queue_task() for %s/%s" % (host.name, task.action))
def get_task_hosts(self, iterator, task_host, task):
if task.run_once:
host_list = [host for host in self._hosts_cache if host not in self._tqm._unreachable_hosts]
else:
host_list = [task_host.name]
return host_list
def get_delegated_hosts(self, result, task):
host_name = result.get('_ansible_delegated_vars', {}).get('ansible_delegated_host', None)
return [host_name or task.delegate_to]
def _set_always_delegated_facts(self, result, task):
"""Sets host facts for ``delegate_to`` hosts for facts that should
always be delegated
This operation mutates ``result`` to remove the always delegated facts
See ``ALWAYS_DELEGATE_FACT_PREFIXES``
"""
if task.delegate_to is None:
return
facts = result['ansible_facts']
always_keys = set()
_add = always_keys.add
for fact_key in facts:
for always_key in ALWAYS_DELEGATE_FACT_PREFIXES:
if fact_key.startswith(always_key):
_add(fact_key)
if always_keys:
_pop = facts.pop
always_facts = {
'ansible_facts': dict((k, _pop(k)) for k in list(facts) if k in always_keys)
}
host_list = self.get_delegated_hosts(result, task)
_set_host_facts = self._variable_manager.set_host_facts
for target_host in host_list:
_set_host_facts(target_host, always_facts)
@debug_closure
def _process_pending_results(self, iterator, one_pass=False, max_passes=None, do_handlers=False):
'''
Reads results off the final queue and takes appropriate action
based on the result (executing callbacks, updating state, etc.).
'''
ret_results = []
handler_templar = Templar(self._loader)
def get_original_host(host_name):
# FIXME: this should not need x2 _inventory
host_name = to_text(host_name)
if host_name in self._inventory.hosts:
return self._inventory.hosts[host_name]
else:
return self._inventory.get_host(host_name)
def search_handler_blocks_by_name(handler_name, handler_blocks):
# iterate in reversed order since last handler loaded with the same name wins
for handler_block in reversed(handler_blocks):
for handler_task in handler_block.block:
if handler_task.name:
if not handler_task.cached_name:
if handler_templar.is_template(handler_task.name):
handler_templar.available_variables = self._variable_manager.get_vars(play=iterator._play,
task=handler_task,
_hosts=self._hosts_cache,
_hosts_all=self._hosts_cache_all)
handler_task.name = handler_templar.template(handler_task.name)
handler_task.cached_name = True
try:
# first we check with the full result of get_name(), which may
# include the role name (if the handler is from a role). If that
# is not found, we resort to the simple name field, which doesn't
# have anything extra added to it.
candidates = (
handler_task.name,
handler_task.get_name(include_role_fqcn=False),
handler_task.get_name(include_role_fqcn=True),
)
if handler_name in candidates:
return handler_task
except (UndefinedError, AnsibleUndefinedVariable):
# We skip this handler due to the fact that it may be using
# a variable in the name that was conditionally included via
# set_fact or some other method, and we don't want to error
# out unnecessarily
continue
return None
cur_pass = 0
while True:
try:
self._results_lock.acquire()
if do_handlers:
task_result = self._handler_results.popleft()
else:
task_result = self._results.popleft()
except IndexError:
break
finally:
self._results_lock.release()
# get the original host and task. We then assign them to the TaskResult for use in callbacks/etc.
original_host = get_original_host(task_result._host)
queue_cache_entry = (original_host.name, task_result._task)
found_task = self._queued_task_cache.get(queue_cache_entry)['task']
original_task = found_task.copy(exclude_parent=True, exclude_tasks=True)
original_task._parent = found_task._parent
original_task.from_attrs(task_result._task_fields)
task_result._host = original_host
task_result._task = original_task
# send callbacks for 'non final' results
if '_ansible_retry' in task_result._result:
self._tqm.send_callback('v2_runner_retry', task_result)
continue
elif '_ansible_item_result' in task_result._result:
if task_result.is_failed() or task_result.is_unreachable():
self._tqm.send_callback('v2_runner_item_on_failed', task_result)
elif task_result.is_skipped():
self._tqm.send_callback('v2_runner_item_on_skipped', task_result)
else:
if 'diff' in task_result._result:
if self._diff or getattr(original_task, 'diff', False):
self._tqm.send_callback('v2_on_file_diff', task_result)
self._tqm.send_callback('v2_runner_item_on_ok', task_result)
continue
# all host status messages contain 2 entries: (msg, task_result)
role_ran = False
if task_result.is_failed():
role_ran = True
ignore_errors = original_task.ignore_errors
if not ignore_errors:
display.debug("marking %s as failed" % original_host.name)
if original_task.run_once:
# if we're using run_once, we have to fail every host here
for h in self._inventory.get_hosts(iterator._play.hosts):
if h.name not in self._tqm._unreachable_hosts:
state, _ = iterator.get_next_task_for_host(h, peek=True)
iterator.mark_host_failed(h)
state, new_task = iterator.get_next_task_for_host(h, peek=True)
else:
iterator.mark_host_failed(original_host)
# grab the current state and if we're iterating on the rescue portion
# of a block then we save the failed task in a special var for use
# within the rescue/always
state, _ = iterator.get_next_task_for_host(original_host, peek=True)
if iterator.is_failed(original_host) and state and state.run_state == iterator.ITERATING_COMPLETE:
self._tqm._failed_hosts[original_host.name] = True
# Use of get_active_state() here helps detect proper state if, say, we are in a rescue
# block from an included file (include_tasks). In a non-included rescue case, a rescue
# that starts with a new 'block' will have an active state of ITERATING_TASKS, so we also
# check the current state block tree to see if any blocks are rescuing.
if state and (iterator.get_active_state(state).run_state == iterator.ITERATING_RESCUE or
iterator.is_any_block_rescuing(state)):
self._tqm._stats.increment('rescued', original_host.name)
self._variable_manager.set_nonpersistent_facts(
original_host.name,
dict(
ansible_failed_task=original_task.serialize(),
ansible_failed_result=task_result._result,
),
)
else:
self._tqm._stats.increment('failures', original_host.name)
else:
self._tqm._stats.increment('ok', original_host.name)
self._tqm._stats.increment('ignored', original_host.name)
if 'changed' in task_result._result and task_result._result['changed']:
self._tqm._stats.increment('changed', original_host.name)
self._tqm.send_callback('v2_runner_on_failed', task_result, ignore_errors=ignore_errors)
elif task_result.is_unreachable():
ignore_unreachable = original_task.ignore_unreachable
if not ignore_unreachable:
self._tqm._unreachable_hosts[original_host.name] = True
iterator._play._removed_hosts.append(original_host.name)
else:
self._tqm._stats.increment('skipped', original_host.name)
task_result._result['skip_reason'] = 'Host %s is unreachable' % original_host.name
self._tqm._stats.increment('dark', original_host.name)
self._tqm.send_callback('v2_runner_on_unreachable', task_result)
elif task_result.is_skipped():
self._tqm._stats.increment('skipped', original_host.name)
self._tqm.send_callback('v2_runner_on_skipped', task_result)
else:
role_ran = True
if original_task.loop:
# this task had a loop, and has more than one result, so
# loop over all of them instead of a single result
result_items = task_result._result.get('results', [])
else:
result_items = [task_result._result]
for result_item in result_items:
if '_ansible_notify' in result_item:
if task_result.is_changed():
# The shared dictionary for notified handlers is a proxy, which
# does not detect when sub-objects within the proxy are modified.
# So, per the docs, we reassign the list so the proxy picks up and
# notifies all other threads
for handler_name in result_item['_ansible_notify']:
found = False
# Find the handler using the above helper. First we look up the
# dependency chain of the current task (if it's from a role), otherwise
# we just look through the list of handlers in the current play/all
# roles and use the first one that matches the notify name
target_handler = search_handler_blocks_by_name(handler_name, iterator._play.handlers)
if target_handler is not None:
found = True
if target_handler.notify_host(original_host):
self._tqm.send_callback('v2_playbook_on_notify', target_handler, original_host)
for listening_handler_block in iterator._play.handlers:
for listening_handler in listening_handler_block.block:
listeners = getattr(listening_handler, 'listen', []) or []
if not listeners:
continue
listeners = listening_handler.get_validated_value(
'listen', listening_handler._valid_attrs['listen'], listeners, handler_templar
)
if handler_name not in listeners:
continue
else:
found = True
if listening_handler.notify_host(original_host):
self._tqm.send_callback('v2_playbook_on_notify', listening_handler, original_host)
# and if none were found, then we raise an error
if not found:
msg = ("The requested handler '%s' was not found in either the main handlers list nor in the listening "
"handlers list" % handler_name)
if C.ERROR_ON_MISSING_HANDLER:
raise AnsibleError(msg)
else:
display.warning(msg)
if 'add_host' in result_item:
# this task added a new host (add_host module)
new_host_info = result_item.get('add_host', dict())
self._add_host(new_host_info, result_item)
post_process_whens(result_item, original_task, handler_templar)
elif 'add_group' in result_item:
# this task added a new group (group_by module)
self._add_group(original_host, result_item)
post_process_whens(result_item, original_task, handler_templar)
if 'ansible_facts' in result_item:
# if delegated fact and we are delegating facts, we need to change target host for them
if original_task.delegate_to is not None and original_task.delegate_facts:
host_list = self.get_delegated_hosts(result_item, original_task)
else:
# Set facts that should always be on the delegated hosts
self._set_always_delegated_facts(result_item, original_task)
host_list = self.get_task_hosts(iterator, original_host, original_task)
if original_task.action in C._ACTION_INCLUDE_VARS:
for (var_name, var_value) in iteritems(result_item['ansible_facts']):
# find the host we're actually referring to here, which may
# be a host that is not really in inventory at all
for target_host in host_list:
self._variable_manager.set_host_variable(target_host, var_name, var_value)
else:
cacheable = result_item.pop('_ansible_facts_cacheable', False)
for target_host in host_list:
# so set_fact is a misnomer but 'cacheable = true' was meant to create an 'actual fact'
# to avoid issues with precedence and confusion with set_fact normal operation,
# we set BOTH fact and nonpersistent_facts (aka hostvar)
# when the fact is retrieved from cache in subsequent operations it will have the lower precedence,
# but for the playbook that set it, the 'higher' precedence is kept
is_set_fact = original_task.action in C._ACTION_SET_FACT
if not is_set_fact or cacheable:
self._variable_manager.set_host_facts(target_host, result_item['ansible_facts'].copy())
if is_set_fact:
self._variable_manager.set_nonpersistent_facts(target_host, result_item['ansible_facts'].copy())
if 'ansible_stats' in result_item and 'data' in result_item['ansible_stats'] and result_item['ansible_stats']['data']:
if 'per_host' not in result_item['ansible_stats'] or result_item['ansible_stats']['per_host']:
host_list = self.get_task_hosts(iterator, original_host, original_task)
else:
host_list = [None]
data = result_item['ansible_stats']['data']
aggregate = 'aggregate' in result_item['ansible_stats'] and result_item['ansible_stats']['aggregate']
for myhost in host_list:
for k in data.keys():
if aggregate:
self._tqm._stats.update_custom_stats(k, data[k], myhost)
else:
self._tqm._stats.set_custom_stats(k, data[k], myhost)
if 'diff' in task_result._result:
if self._diff or getattr(original_task, 'diff', False):
self._tqm.send_callback('v2_on_file_diff', task_result)
if not isinstance(original_task, TaskInclude):
self._tqm._stats.increment('ok', original_host.name)
if 'changed' in task_result._result and task_result._result['changed']:
self._tqm._stats.increment('changed', original_host.name)
# finally, send the ok for this task
self._tqm.send_callback('v2_runner_on_ok', task_result)
# register final results
if original_task.register:
host_list = self.get_task_hosts(iterator, original_host, original_task)
clean_copy = strip_internal_keys(module_response_deepcopy(task_result._result))
if 'invocation' in clean_copy:
del clean_copy['invocation']
for target_host in host_list:
self._variable_manager.set_nonpersistent_facts(target_host, {original_task.register: clean_copy})
if do_handlers:
self._pending_handler_results -= 1
else:
self._pending_results -= 1
if original_host.name in self._blocked_hosts:
del self._blocked_hosts[original_host.name]
# If this is a role task, mark the parent role as being run (if
# the task was ok or failed, but not skipped or unreachable)
if original_task._role is not None and role_ran: # TODO: and original_task.action not in C._ACTION_INCLUDE_ROLE:?
# lookup the role in the ROLE_CACHE to make sure we're dealing
# with the correct object and mark it as executed
for (entry, role_obj) in iteritems(iterator._play.ROLE_CACHE[original_task._role.get_name()]):
if role_obj._uuid == original_task._role._uuid:
role_obj._had_task_run[original_host.name] = True
ret_results.append(task_result)
if one_pass or max_passes is not None and (cur_pass + 1) >= max_passes:
break
cur_pass += 1
return ret_results
def _wait_on_handler_results(self, iterator, handler, notified_hosts):
'''
Wait for the handler tasks to complete, using a short sleep
between checks to ensure we don't spin lock
'''
ret_results = []
handler_results = 0
display.debug("waiting for handler results...")
while (self._pending_handler_results > 0 and
handler_results < len(notified_hosts) and
not self._tqm._terminated):
if self._tqm.has_dead_workers():
raise AnsibleError("A worker was found in a dead state")
results = self._process_pending_results(iterator, do_handlers=True)
ret_results.extend(results)
handler_results += len([
r._host for r in results if r._host in notified_hosts and
r.task_name == handler.name])
if self._pending_handler_results > 0:
time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL)
display.debug("no more pending handlers, returning what we have")
return ret_results
def _wait_on_pending_results(self, iterator):
'''
Wait for the shared counter to drop to zero, using a short sleep
between checks to ensure we don't spin lock
'''
ret_results = []
display.debug("waiting for pending results...")
while self._pending_results > 0 and not self._tqm._terminated:
if self._tqm.has_dead_workers():
raise AnsibleError("A worker was found in a dead state")
results = self._process_pending_results(iterator)
ret_results.extend(results)
if self._pending_results > 0:
time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL)
display.debug("no more pending results, returning what we have")
return ret_results
def _add_host(self, host_info, result_item):
'''
Helper function to add a new host to inventory based on a task result.
'''
changed = False
if host_info:
host_name = host_info.get('host_name')
# Check if host in inventory, add if not
if host_name not in self._inventory.hosts:
self._inventory.add_host(host_name, 'all')
self._hosts_cache_all.append(host_name)
changed = True
new_host = self._inventory.hosts.get(host_name)
# Set/update the vars for this host
new_host_vars = new_host.get_vars()
new_host_combined_vars = combine_vars(new_host_vars, host_info.get('host_vars', dict()))
if new_host_vars != new_host_combined_vars:
new_host.vars = new_host_combined_vars
changed = True
new_groups = host_info.get('groups', [])
for group_name in new_groups:
if group_name not in self._inventory.groups:
group_name = self._inventory.add_group(group_name)
changed = True
new_group = self._inventory.groups[group_name]
if new_group.add_host(self._inventory.hosts[host_name]):
changed = True
# reconcile inventory, ensures inventory rules are followed
if changed:
self._inventory.reconcile_inventory()
result_item['changed'] = changed
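# Editorial example of the host_info shape consumed above (hypothetical values):
#
#     {'host_name': 'web01', 'groups': ['webservers'],
#      'host_vars': {'ansible_host': '10.0.0.5'}}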
def _add_group(self, host, result_item):
'''
Helper function to add a group (if it does not exist), and to assign the
specified host to that group.
'''
changed = False
# the host here is from the executor side, which means it was a
# serialized/cloned copy and we'll need to look up the proper
# host object from the master inventory
real_host = self._inventory.hosts.get(host.name)
if real_host is None:
if host.name == self._inventory.localhost.name:
real_host = self._inventory.localhost
else:
raise AnsibleError('%s cannot be matched in inventory' % host.name)
group_name = result_item.get('add_group')
parent_group_names = result_item.get('parent_groups', [])
if group_name not in self._inventory.groups:
group_name = self._inventory.add_group(group_name)
for name in parent_group_names:
if name not in self._inventory.groups:
# create the new group and add it to inventory
self._inventory.add_group(name)
changed = True
group = self._inventory.groups[group_name]
for parent_group_name in parent_group_names:
parent_group = self._inventory.groups[parent_group_name]
new = parent_group.add_child_group(group)
if new and not changed:
changed = True
if real_host not in group.get_hosts():
changed = group.add_host(real_host)
if group not in real_host.get_groups():
changed = real_host.add_group(group)
if changed:
self._inventory.reconcile_inventory()
result_item['changed'] = changed
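# Editorial example of the result_item shape consumed above (hypothetical
# values): {'add_group': 'webservers', 'parent_groups': ['dmz']}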
def _copy_included_file(self, included_file):
'''
A proven safe and performant way to create a copy of an included file
'''
ti_copy = included_file._task.copy(exclude_parent=True)
ti_copy._parent = included_file._task._parent
temp_vars = ti_copy.vars.copy()
temp_vars.update(included_file._vars)
ti_copy.vars = temp_vars
return ti_copy
def _load_included_file(self, included_file, iterator, is_handler=False):
'''
Loads an included YAML file of tasks, applying the optional set of variables.
'''
display.debug("loading included file: %s" % included_file._filename)
try:
data = self._loader.load_from_file(included_file._filename)
if data is None:
return []
elif not isinstance(data, list):
raise AnsibleError("included task files must contain a list of tasks")
ti_copy = self._copy_included_file(included_file)
# pop tags out of the include args, if they were specified there, and assign
# them to the include. If the include already had tags specified, we raise an
# error so that users know not to specify them both ways
tags = included_file._task.vars.pop('tags', [])
if isinstance(tags, string_types):
tags = tags.split(',')
if len(tags) > 0:
if len(included_file._task.tags) > 0:
raise AnsibleParserError("Include tasks should not specify tags in more than one way (both via args and directly on the task). "
"Mixing tag specify styles is prohibited for whole import hierarchy, not only for single import statement",
obj=included_file._task._ds)
display.deprecated("You should not specify tags in the include parameters. All tags should be specified using the task-level option",
version='2.12', collection_name='ansible.builtin')
included_file._task.tags = tags
block_list = load_list_of_blocks(
data,
play=iterator._play,
parent_block=ti_copy.build_parent_block(),
role=included_file._task._role,
use_handlers=is_handler,
loader=self._loader,
variable_manager=self._variable_manager,
)
# since we skip incrementing the stats when the task result is
# first processed, we do so now for each host in the list
for host in included_file._hosts:
self._tqm._stats.increment('ok', host.name)
except AnsibleError as e:
if isinstance(e, AnsibleFileNotFound):
reason = "Could not find or access '%s' on the Ansible Controller." % to_text(e.file_name)
else:
reason = to_text(e)
# mark all of the hosts including this file as failed, send callbacks,
# and increment the stats for this host
for host in included_file._hosts:
tr = TaskResult(host=host, task=included_file._task, return_data=dict(failed=True, reason=reason))
iterator.mark_host_failed(host)
self._tqm._failed_hosts[host.name] = True
self._tqm._stats.increment('failures', host.name)
self._tqm.send_callback('v2_runner_on_failed', tr)
return []
# finally, send the callback and return the list of blocks loaded
self._tqm.send_callback('v2_playbook_on_include', included_file)
display.debug("done processing included file")
return block_list
def run_handlers(self, iterator, play_context):
'''
Runs handlers on those hosts which have been notified.
'''
result = self._tqm.RUN_OK
for handler_block in iterator._play.handlers:
# FIXME: handlers need to support the rescue/always portions of blocks too,
# but this may take some work in the iterator and gets tricky when
# we consider the ability of meta tasks to flush handlers
for handler in handler_block.block:
if handler.notified_hosts:
result = self._do_handler_run(handler, handler.get_name(), iterator=iterator, play_context=play_context)
if not result:
break
return result
def _do_handler_run(self, handler, handler_name, iterator, play_context, notified_hosts=None):
# FIXME: need to use iterator.get_failed_hosts() instead?
# if not len(self.get_hosts_remaining(iterator._play)):
# self._tqm.send_callback('v2_playbook_on_no_hosts_remaining')
# result = False
# break
if notified_hosts is None:
notified_hosts = handler.notified_hosts[:]
# strategy plugins that filter hosts need access to the iterator to identify failed hosts
failed_hosts = self._filter_notified_failed_hosts(iterator, notified_hosts)
notified_hosts = self._filter_notified_hosts(notified_hosts)
notified_hosts += failed_hosts
if len(notified_hosts) > 0:
saved_name = handler.name
handler.name = handler_name
self._tqm.send_callback('v2_playbook_on_handler_task_start', handler)
handler.name = saved_name
bypass_host_loop = False
try:
action = plugin_loader.action_loader.get(handler.action, class_only=True, collection_list=handler.collections)
if getattr(action, 'BYPASS_HOST_LOOP', False):
bypass_host_loop = True
except KeyError:
# we don't care here, because the action may simply not have a
# corresponding action plugin
pass
host_results = []
for host in notified_hosts:
if not iterator.is_failed(host) or iterator._play.force_handlers:
task_vars = self._variable_manager.get_vars(play=iterator._play, host=host, task=handler,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
self.add_tqm_variables(task_vars, play=iterator._play)
templar = Templar(loader=self._loader, variables=task_vars)
if not handler.cached_name:
handler.name = templar.template(handler.name)
handler.cached_name = True
self._queue_task(host, handler, task_vars, play_context)
if templar.template(handler.run_once) or bypass_host_loop:
break
# collect the results from the handler run
host_results = self._wait_on_handler_results(iterator, handler, notified_hosts)
included_files = IncludedFile.process_include_results(
host_results,
iterator=iterator,
loader=self._loader,
variable_manager=self._variable_manager
)
result = True
if len(included_files) > 0:
for included_file in included_files:
try:
new_blocks = self._load_included_file(included_file, iterator=iterator, is_handler=True)
# for every task in each block brought in by the include, add the list
# of hosts which included the file to the notified_handlers dict
for block in new_blocks:
iterator._play.handlers.append(block)
for task in block.block:
task_name = task.get_name()
display.debug("adding task '%s' included in handler '%s'" % (task_name, handler_name))
task.notified_hosts = included_file._hosts[:]
result = self._do_handler_run(
handler=task,
handler_name=task_name,
iterator=iterator,
play_context=play_context,
notified_hosts=included_file._hosts[:],
)
if not result:
break
except AnsibleError as e:
for host in included_file._hosts:
iterator.mark_host_failed(host)
self._tqm._failed_hosts[host.name] = True
display.warning(to_text(e))
continue
# remove hosts from notification list
handler.notified_hosts = [
h for h in handler.notified_hosts
if h not in notified_hosts]
display.debug("done running handlers, result is: %s" % result)
return result
def _filter_notified_failed_hosts(self, iterator, notified_hosts):
return []
def _filter_notified_hosts(self, notified_hosts):
'''
Filter notified hosts accordingly to strategy
'''
# As main strategy is linear, we do not filter hosts
# We return a copy to avoid race conditions
return notified_hosts[:]
def _take_step(self, task, host=None):
ret = False
msg = u'Perform task: %s ' % task
if host:
msg += u'on %s ' % host
msg += u'(N)o/(y)es/(c)ontinue: '
resp = display.prompt(msg)
if resp.lower() in ['y', 'yes']:
display.debug("User ran task")
ret = True
elif resp.lower() in ['c', 'continue']:
display.debug("User ran task and canceled step mode")
self._step = False
ret = True
else:
display.debug("User skipped task")
display.banner(msg)
return ret
def _cond_not_supported_warn(self, task_name):
display.warning("%s task does not support when conditional" % task_name)
def _execute_meta(self, task, play_context, iterator, target_host):
# meta tasks store their args in the _raw_params field of args,
# since they do not use k=v pairs, so get that
meta_action = task.args.get('_raw_params')
def _evaluate_conditional(h):
all_vars = self._variable_manager.get_vars(play=iterator._play, host=h, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
templar = Templar(loader=self._loader, variables=all_vars)
return task.evaluate_conditional(templar, all_vars)
skipped = False
msg = ''
skip_reason = '%s conditional evaluated to False' % meta_action
self._tqm.send_callback('v2_playbook_on_task_start', task, is_conditional=False)
# These don't support "when" conditionals
if meta_action in ('noop', 'flush_handlers', 'refresh_inventory', 'reset_connection') and task.when:
self._cond_not_supported_warn(meta_action)
if meta_action == 'noop':
msg = "noop"
elif meta_action == 'flush_handlers':
self._flushed_hosts[target_host] = True
self.run_handlers(iterator, play_context)
self._flushed_hosts[target_host] = False
msg = "ran handlers"
elif meta_action == 'refresh_inventory':
self._inventory.refresh_inventory()
self._set_hosts_cache(iterator._play)
msg = "inventory successfully refreshed"
elif meta_action == 'clear_facts':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
hostname = host.get_name()
self._variable_manager.clear_facts(hostname)
msg = "facts cleared"
else:
skipped = True
skip_reason += ', not clearing facts and fact cache for %s' % target_host.name
elif meta_action == 'clear_host_errors':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
self._tqm._failed_hosts.pop(host.name, False)
self._tqm._unreachable_hosts.pop(host.name, False)
iterator._host_states[host.name].fail_state = iterator.FAILED_NONE
msg = "cleared host errors"
else:
skipped = True
skip_reason += ', not clearing host error state for %s' % target_host.name
elif meta_action == 'end_play':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
if host.name not in self._tqm._unreachable_hosts:
iterator._host_states[host.name].run_state = iterator.ITERATING_COMPLETE
msg = "ending play"
else:
skipped = True
skip_reason += ', continuing play'
elif meta_action == 'end_host':
if _evaluate_conditional(target_host):
iterator._host_states[target_host.name].run_state = iterator.ITERATING_COMPLETE
iterator._play._removed_hosts.append(target_host.name)
msg = "ending play for %s" % target_host.name
else:
skipped = True
skip_reason += ", continuing execution for %s" % target_host.name
# TODO: Nix msg here? Left for historical reasons, but skip_reason exists now.
msg = "end_host conditional evaluated to false, continuing execution for %s" % target_host.name
elif meta_action == 'reset_connection':
all_vars = self._variable_manager.get_vars(play=iterator._play, host=target_host, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
templar = Templar(loader=self._loader, variables=all_vars)
# apply the given task's information to the connection info,
# which may override some fields already set by the play or
# the options specified on the command line
play_context = play_context.set_task_and_variable_override(task=task, variables=all_vars, templar=templar)
# fields set from the play/task may be based on variables, so we have to
# do the same kind of post validation step on it here before we use it.
play_context.post_validate(templar=templar)
# now that the play context is finalized, if the remote_addr is not set
# default to using the host's address field as the remote address
if not play_context.remote_addr:
play_context.remote_addr = target_host.address
# We also add "magic" variables back into the variables dict to make sure
# a certain subset of variables exist.
play_context.update_vars(all_vars)
if target_host in self._active_connections:
connection = Connection(self._active_connections[target_host])
del self._active_connections[target_host]
else:
connection = plugin_loader.connection_loader.get(play_context.connection, play_context, os.devnull)
play_context.set_attributes_from_plugin(connection)
if connection:
try:
connection.reset()
msg = 'reset connection'
except ConnectionError as e:
# most likely socket is already closed
display.debug("got an error while closing persistent connection: %s" % e)
else:
msg = 'no connection, nothing to reset'
else:
raise AnsibleError("invalid meta action requested: %s" % meta_action, obj=task._ds)
result = {'msg': msg}
if skipped:
result['skipped'] = True
result['skip_reason'] = skip_reason
else:
result['changed'] = False
display.vv("META: %s" % msg)
res = TaskResult(target_host, task, result)
if skipped:
self._tqm.send_callback('v2_runner_on_skipped', res)
return [res]
def get_hosts_left(self, iterator):
''' returns list of available hosts for this iterator by filtering out unreachables '''
hosts_left = []
for host in self._hosts_cache:
if host not in self._tqm._unreachable_hosts:
try:
hosts_left.append(self._inventory.hosts[host])
except KeyError:
hosts_left.append(self._inventory.get_host(host))
return hosts_left
def update_active_connections(self, results):
''' updates the current active persistent connections '''
for r in results:
if 'args' in r._task_fields:
socket_path = r._task_fields['args'].get('_ansible_socket')
if socket_path:
if r._host not in self._active_connections:
self._active_connections[r._host] = socket_path
class NextAction(object):
""" The next action after an interpreter's exit. """
REDO = 1
CONTINUE = 2
EXIT = 3
def __init__(self, result=EXIT):
self.result = result
class Debugger(cmd.Cmd):
prompt_continuous = '> ' # multiple lines
def __init__(self, task, host, task_vars, play_context, result, next_action):
# cmd.Cmd is old-style class
cmd.Cmd.__init__(self)
self.prompt = '[%s] %s (debug)> ' % (host, task)
self.intro = None
self.scope = {}
self.scope['task'] = task
self.scope['task_vars'] = task_vars
self.scope['host'] = host
self.scope['play_context'] = play_context
self.scope['result'] = result
self.next_action = next_action
def cmdloop(self):
try:
cmd.Cmd.cmdloop(self)
except KeyboardInterrupt:
pass
do_h = cmd.Cmd.do_help
def do_EOF(self, args):
"""Quit"""
return self.do_quit(args)
def do_quit(self, args):
"""Quit"""
display.display('User interrupted execution')
self.next_action.result = NextAction.EXIT
return True
do_q = do_quit
def do_continue(self, args):
"""Continue to next result"""
self.next_action.result = NextAction.CONTINUE
return True
do_c = do_continue
def do_redo(self, args):
"""Schedule task for re-execution. The re-execution may not be the next result"""
self.next_action.result = NextAction.REDO
return True
do_r = do_redo
def do_update_task(self, args):
"""Recreate the task from ``task._ds``, and template with updated ``task_vars``"""
templar = Templar(None, shared_loader_obj=None, variables=self.scope['task_vars'])
task = self.scope['task']
task = task.load_data(task._ds)
task.post_validate(templar)
self.scope['task'] = task
do_u = do_update_task
def evaluate(self, args):
try:
return eval(args, globals(), self.scope)
except Exception:
t, v = sys.exc_info()[:2]
if isinstance(t, str):
exc_type_name = t
else:
exc_type_name = t.__name__
display.display('***%s:%s' % (exc_type_name, repr(v)))
raise
def do_pprint(self, args):
"""Pretty Print"""
try:
result = self.evaluate(args)
display.display(pprint.pformat(result))
except Exception:
pass
do_p = do_pprint
def execute(self, args):
try:
code = compile(args + '\n', '<stdin>', 'single')
exec(code, globals(), self.scope)
except Exception:
t, v = sys.exc_info()[:2]
if isinstance(t, str):
exc_type_name = t
else:
exc_type_name = t.__name__
display.display('***%s:%s' % (exc_type_name, repr(v)))
raise
def default(self, line):
try:
self.execute(line)
except Exception:
pass
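# Editorial quick reference for the Debugger commands defined above:
#   p <expr>   pretty-print an expression evaluated against the debug scope
#   u          rebuild the task from task._ds and re-template with task_vars
#   r / c / q  redo the task / continue to the next result / quit
#   (any other input is exec'd as Python against the same scope)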
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,505 |
ansible-playbook --list-tasks omits "<role> : " when role name appears in task name
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
When using ansible-playbook to list-tasks, it should list the tasks in the format:
```
role : task_name
```
but when the task name contains the role name, the "role : " prefix does not appear.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-playbook --list-tasks option.
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
$ ansible --version
ansible 2.10.2
config file = /Users/john.doe/.ansible.cfg
configured module search path = ['/Users/john.doe/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.9.0 (default, Oct 27 2020, 14:13:35) [Clang 11.0.0 (clang-1100.0.33.17)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
$ ansible-config dump --only-changed
ANSIBLE_SSH_ARGS(env: ANSIBLE_SSH_ARGS) = -o PreferredAuthentications=publickey,keyboard-interactive -o ControlMaster=auto -o ControlPersist=30m
DEFAULT_DEBUG(env: ANSIBLE_DEBUG) = False
DEFAULT_FORKS(/Users/john.doe/.ansible.cfg) = 20
DEFAULT_HOST_LIST(/Users/john.doe/.ansible.cfg) = ['/Users/m.wang/m/Ansible/hw-ansible-modified/inventory.txt']
DEFAULT_LOG_PATH(/Users/john.doe/.ansible.cfg) = /tmp/ansible.log
DEFAULT_MANAGED_STR(/Users/john.doe/.ansible.cfg) = This file is managed by Ansible
DEFAULT_TIMEOUT(/Users/john.doe/.ansible.cfg) = 30
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Darwin Kernel Version 18.7.0 (MacOS Mojave)
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
See below
<!--- Paste example playbooks or commands between quotes below -->
```yaml
$ cat test.yml
- hosts:
- localhost
roles:
- role: test
$ cat roles/test/tasks/main.yml
- name: this is a tes-t
become: true
apt: name=make
$ ansible-playbook test.yml --list-tasks
playbook: test.yml
play #1 (localhost): localhost TAGS: []
tasks:
test : this is a tes-t TAGS: []
$ sed -ie 's/tes-t/test/' roles/test/tasks/main.yml
$ cat roles/test/tasks/main.yml
- name: this is a test
become: true
apt: name=make
$ ansible-playbook test.yml --list-tasks
playbook: test.yml
play #1 (localhost): localhost TAGS: []
tasks:
this is a test TAGS: []
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expect the same output regardless of whether the task name uses "tes-t" or "test", which is the name of the role.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
`<role name> : ` disappeared from the output.
<!--- Paste verbatim command output between quotes -->
```paste below
test : this is a tes-t TAGS: []
vs
this is a test TAGS: []
```
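
For reference, the prefixing decision lives in `Task.get_name()`; the relevant check, as it appears in the affected source reproduced later in this document, is:

```python
if self._role and self.name and role_name not in self.name:
    return "%s : %s" % (role_name, self.name)
```

Because `role_name not in self.name` is false when the task name contains the role name, the prefix is silently dropped.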
|
https://github.com/ansible/ansible/issues/72505
|
https://github.com/ansible/ansible/pull/72511
|
c8590c7482dcfc40f7054f629b7b6179f9e38daf
|
0ed7bfc694e5e2efe49fa0e1c8fea0a392c78c04
| 2020-11-05T19:44:57Z |
python
| 2020-11-06T16:46:58Z |
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/roles/common_handlers/handlers/main.yml
|
# This handler should only be called 1 time, if it's called more than once
# this task should fail on subsequent executions
- name: test_fqcn_handler
set_fact:
handler_counter: '{{ handler_counter|int + 1 }}'
failed_when: handler_counter|int > 1
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,505 |
ansible-playbook --list-tasks omits "<role> : " when role name appears in task name
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
When using ansible-playbook to list-tasks, it should list the tasks in the format:
```
role : task_name
```
but when the task name contains the role name, the "role : " prefix does not appear.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-playbook --list-tasks option.
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
$ ansible --version
ansible 2.10.2
config file = /Users/john.doe/.ansible.cfg
configured module search path = ['/Users/john.doe/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.9.0 (default, Oct 27 2020, 14:13:35) [Clang 11.0.0 (clang-1100.0.33.17)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
$ ansible-config dump --only-changed
ANSIBLE_SSH_ARGS(env: ANSIBLE_SSH_ARGS) = -o PreferredAuthentications=publickey,keyboard-interactive -o ControlMaster=auto -o ControlPersist=30m
DEFAULT_DEBUG(env: ANSIBLE_DEBUG) = False
DEFAULT_FORKS(/Users/john.doe/.ansible.cfg) = 20
DEFAULT_HOST_LIST(/Users/john.doe/.ansible.cfg) = ['/Users/m.wang/m/Ansible/hw-ansible-modified/inventory.txt']
DEFAULT_LOG_PATH(/Users/john.doe/.ansible.cfg) = /tmp/ansible.log
DEFAULT_MANAGED_STR(/Users/john.doe/.ansible.cfg) = This file is managed by Ansible
DEFAULT_TIMEOUT(/Users/john.doe/.ansible.cfg) = 30
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Darwin Kernel Version 18.7.0 (MacOS Mojave)
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
See below
<!--- Paste example playbooks or commands between quotes below -->
```yaml
$ cat test.yml
- hosts:
- localhost
roles:
- role: test
$ cat roles/test/tasks/main.yml
- name: this is a tes-t
become: true
apt: name=make
$ ansible-playbook test.yml --list-tasks
playbook: test.yml
play #1 (localhost): localhost TAGS: []
tasks:
test : this is a tes-t TAGS: []
$ sed -ie 's/tes-t/test/' roles/test/tasks/main.yml
$ cat roles/test/tasks/main.yml
- name: this is a test
become: true
apt: name=make
$ ansible-playbook test.yml --list-tasks
playbook: test.yml
play #1 (localhost): localhost TAGS: []
tasks:
this is a test TAGS: []
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expect the same output regardless of whether the task name uses "tes-t" or "test", which is the name of the role.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
`<role name> : ` disappeared from the output.
<!--- Paste verbatim command output between quotes -->
```paste below
test : this is a tes-t TAGS: []
vs
this is a test TAGS: []
```
|
https://github.com/ansible/ansible/issues/72505
|
https://github.com/ansible/ansible/pull/72511
|
c8590c7482dcfc40f7054f629b7b6179f9e38daf
|
0ed7bfc694e5e2efe49fa0e1c8fea0a392c78c04
| 2020-11-05T19:44:57Z |
python
| 2020-11-06T16:46:58Z |
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/roles/test_fqcn_handlers/tasks/main.yml
|
- debug:
msg: Fire fqcn handler
changed_when: true
notify:
- 'testns.testcoll.common_handlers : test_fqcn_handler'
- 'common_handlers : test_fqcn_handler'
- 'test_fqcn_handler'
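# (Editorial note: the three notify entries above exercise the three candidate
# name forms checked by search_handler_blocks_by_name() -- fully-qualified
# collection name, role-qualified name, and bare handler name.)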
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 70,582 |
Cannot notify handler via role_name : handler_name, when the handler name also contains the role name
|
https://github.com/ansible/ansible/blob/adcdee9bb0031577698246fcfc51f8af63a56a17/lib/ansible/playbook/task.py#L124
##### SUMMARY
Handlers cannot be referenced with the `FQCN : handler name` notation if the handler name contains the role name.
Not sure what the reasoning is behind this, but a `httpd` role can't have a `restart httpd` handler name if you want to notify it with `notify: 'httpd : restart httpd'`
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
task handler fqcn
##### ANSIBLE VERSION
```
ansible 2.9.10
config file = /home/twouters/.ansible.cfg
configured module search path = ['/home/twouters/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.3 (default, May 17 2020, 18:15:42) [GCC 10.1.0]
```
##### CONFIGURATION
Default configuration
##### OS / ENVIRONMENT
All (N/A)
##### STEPS TO REPRODUCE
```
➜ issues-70582 $ find ./
./
./roles
./roles/test
./roles/test/handlers
./roles/test/handlers/main.yml
./roles/test/tasks
./roles/test/tasks/main.yml
./pb.yml
```
`pb.yml`
```yaml
---
- hosts: localhost
roles:
- test
```
`roles/test/tasks/main.yml`
```yaml
---
- name: always trigger handler
command: /bin/true
changed_when: True
notify:
- 'test : handler 1'
- name: always trigger handler 2
command: /bin/true
changed_when: True
notify:
- 'test : handler 2 test'
```
`roles/test/handlers/main.yml`
```yaml
---
- name: handler 1
debug:
msg: this is handler 1
- name: handler 2 test
debug:
msg: this handler is not found
```
##### EXPECTED RESULTS
Two handlers are detected.
##### ACTUAL RESULTS
Only the handler that doesn't contain the role name in its name gets detected
```
➜ issues-70582 $ ansible-playbook pb.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not
match 'all'
PLAY [localhost] ***********************************************************************************************
TASK [Gathering Facts] *****************************************************************************************
ok: [localhost]
TASK [test : always trigger handler] ***************************************************************************
changed: [localhost]
TASK [test : always trigger handler 2] *************************************************************************
ERROR! The requested handler 'test : handler 2 test' was not found in either the main handlers list nor in the listening handlers list
```
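
With the `Task.get_name()` shown later in this document, the role-qualified candidate for `handler 2 test` in role `test` collapses to the bare name (since `test` is a substring of the handler name), so `search_handler_blocks_by_name()` builds:

```python
# candidates for 'handler 2 test' in role 'test' (pre-fix get_name()):
('handler 2 test', 'handler 2 test', 'handler 2 test')
# 'test : handler 2 test' is never produced, hence the missing-handler error
```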
|
https://github.com/ansible/ansible/issues/70582
|
https://github.com/ansible/ansible/pull/72511
|
c8590c7482dcfc40f7054f629b7b6179f9e38daf
|
0ed7bfc694e5e2efe49fa0e1c8fea0a392c78c04
| 2020-07-12T08:58:13Z |
python
| 2020-11-06T16:46:58Z |
changelogs/fragments/72511-always-prepend-role-to-task-name.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 70,582 |
Cannot notify handler via role_name : handler_name, when the handler name also contains the role name
|
https://github.com/ansible/ansible/blob/adcdee9bb0031577698246fcfc51f8af63a56a17/lib/ansible/playbook/task.py#L124
##### SUMMARY
Handlers cannot be referenced with the `FQCN : handler name` notation if the handler name contains the role name.
Not sure what the reasoning is behind this, but a `httpd` role can't have a `restart httpd` handler name if you want to notify it with `notify: 'httpd : restart httpd'`
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
task handler fqcn
##### ANSIBLE VERSION
```
ansible 2.9.10
config file = /home/twouters/.ansible.cfg
configured module search path = ['/home/twouters/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.3 (default, May 17 2020, 18:15:42) [GCC 10.1.0]
```
##### CONFIGURATION
Default configuration
##### OS / ENVIRONMENT
All (N/A)
##### STEPS TO REPRODUCE
```
➜ issues-70582 $ find ./
./
./roles
./roles/test
./roles/test/handlers
./roles/test/handlers/main.yml
./roles/test/tasks
./roles/test/tasks/main.yml
./pb.yml
```
`pb.yml`
```yaml
---
- hosts: localhost
roles:
- test
```
`roles/test/tasks/main.yml`
```yaml
---
- name: always trigger handler
command: /bin/true
changed_when: True
notify:
- 'test : handler 1'
- name: always trigger handler 2
command: /bin/true
changed_when: True
notify:
- 'test : handler 2 test'
```
`roles/test/handlers/main.yml`
```yaml
---
- name: handler 1
debug:
msg: this is handler 1
- name: handler 2 test
debug:
msg: this handler is not found
```
##### EXPECTED RESULTS
Two handlers are detected.
##### ACTUAL RESULTS
Only the handler that doesn't contain the role name in its name gets detected
```
➜ issues-70582 $ ansible-playbook pb.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not
match 'all'
PLAY [localhost] ***********************************************************************************************
TASK [Gathering Facts] *****************************************************************************************
ok: [localhost]
TASK [test : always trigger handler] ***************************************************************************
changed: [localhost]
TASK [test : always trigger handler 2] *************************************************************************
ERROR! The requested handler 'test : handler 2 test' was not found in either the main handlers list nor in the listening handlers list
```
|
https://github.com/ansible/ansible/issues/70582
|
https://github.com/ansible/ansible/pull/72511
|
c8590c7482dcfc40f7054f629b7b6179f9e38daf
|
0ed7bfc694e5e2efe49fa0e1c8fea0a392c78c04
| 2020-07-12T08:58:13Z |
python
| 2020-11-06T16:46:58Z |
lib/ansible/playbook/task.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleParserError, AnsibleUndefinedVariable, AnsibleAssertionError
from ansible.module_utils._text import to_native
from ansible.module_utils.six import iteritems, string_types
from ansible.parsing.mod_args import ModuleArgsParser
from ansible.parsing.yaml.objects import AnsibleBaseYAMLObject, AnsibleMapping
from ansible.plugins.loader import lookup_loader
from ansible.playbook.attribute import FieldAttribute
from ansible.playbook.base import Base
from ansible.playbook.block import Block
from ansible.playbook.collectionsearch import CollectionSearch
from ansible.playbook.conditional import Conditional
from ansible.playbook.loop_control import LoopControl
from ansible.playbook.role import Role
from ansible.playbook.taggable import Taggable
from ansible.utils.collection_loader import AnsibleCollectionConfig
from ansible.utils.display import Display
from ansible.utils.sentinel import Sentinel
__all__ = ['Task']
display = Display()
class Task(Base, Conditional, Taggable, CollectionSearch):
"""
A task is a language feature that represents a call to a module, with given arguments and other parameters.
A handler is a subclass of a task.
Usage:
Task.load(datastructure) -> Task
Task.something(...)
"""
# =================================================================================
# ATTRIBUTES
# load_<attribute_name> and
# validate_<attribute_name>
# will be used if defined
# might be possible to define others
# NOTE: ONLY set defaults on task attributes that are not inheritable,
# inheritance is only triggered if the 'current value' is None,
# default can be set at play/top level object and inheritance will take its course.
_args = FieldAttribute(isa='dict', default=dict)
_action = FieldAttribute(isa='string')
_async_val = FieldAttribute(isa='int', default=0, alias='async')
_changed_when = FieldAttribute(isa='list', default=list)
_delay = FieldAttribute(isa='int', default=5)
_delegate_to = FieldAttribute(isa='string')
_delegate_facts = FieldAttribute(isa='bool')
_failed_when = FieldAttribute(isa='list', default=list)
_loop = FieldAttribute()
_loop_control = FieldAttribute(isa='class', class_type=LoopControl, inherit=False)
_notify = FieldAttribute(isa='list')
_poll = FieldAttribute(isa='int', default=C.DEFAULT_POLL_INTERVAL)
_register = FieldAttribute(isa='string', static=True)
_retries = FieldAttribute(isa='int', default=3)
_until = FieldAttribute(isa='list', default=list)
# deprecated, used to be loop and loop_args but loop has been repurposed
_loop_with = FieldAttribute(isa='string', private=True, inherit=False)
def __init__(self, block=None, role=None, task_include=None):
''' constructs a task; without the Task.load classmethod, it will be pretty blank '''
# This is a reference of all the candidate action names for transparent execution of module_defaults with redirected content
# This isn't a FieldAttribute to prevent it from being set via the playbook
self._ansible_internal_redirect_list = []
self._role = role
self._parent = None
self.implicit = False
if task_include:
self._parent = task_include
else:
self._parent = block
super(Task, self).__init__()
def get_path(self):
''' return the absolute path of the task with its line number '''
path = ""
if hasattr(self, '_ds') and hasattr(self._ds, '_data_source') and hasattr(self._ds, '_line_number'):
path = "%s:%s" % (self._ds._data_source, self._ds._line_number)
elif hasattr(self._parent._play, '_ds') and hasattr(self._parent._play._ds, '_data_source') and hasattr(self._parent._play._ds, '_line_number'):
path = "%s:%s" % (self._parent._play._ds._data_source, self._parent._play._ds._line_number)
return path
def get_name(self, include_role_fqcn=True):
''' return the name of the task '''
if self._role:
role_name = self._role.get_name(include_role_fqcn=include_role_fqcn)
if self._role and self.name and role_name not in self.name:
return "%s : %s" % (role_name, self.name)
elif self.name:
return self.name
else:
if self._role:
return "%s : %s" % (role_name, self.action)
else:
return "%s" % (self.action,)
def _merge_kv(self, ds):
if ds is None:
return ""
elif isinstance(ds, string_types):
return ds
elif isinstance(ds, dict):
buf = ""
for (k, v) in iteritems(ds):
if k.startswith('_'):
continue
buf = buf + "%s=%s " % (k, v)
buf = buf.strip()
return buf
@staticmethod
def load(data, block=None, role=None, task_include=None, variable_manager=None, loader=None):
t = Task(block=block, role=role, task_include=task_include)
return t.load_data(data, variable_manager=variable_manager, loader=loader)
def __repr__(self):
''' returns a human readable representation of the task '''
if self.get_name() in C._ACTION_META:
return "TASK: meta (%s)" % self.args['_raw_params']
else:
return "TASK: %s" % self.get_name()
def _preprocess_with_loop(self, ds, new_ds, k, v):
''' take a lookup plugin name and store it correctly '''
loop_name = k.replace("with_", "")
if new_ds.get('loop') is not None or new_ds.get('loop_with') is not None:
raise AnsibleError("duplicate loop in task: %s" % loop_name, obj=ds)
if v is None:
raise AnsibleError("you must specify a value when using %s" % k, obj=ds)
new_ds['loop_with'] = loop_name
new_ds['loop'] = v
# display.deprecated("with_ type loops are being phased out, use the 'loop' keyword instead",
# version="2.10", collection_name='ansible.builtin')
def preprocess_data(self, ds):
'''
tasks are especially complex arguments, so they need pre-processing.
keep it short.
'''
if not isinstance(ds, dict):
raise AnsibleAssertionError('ds (%s) should be a dict but was a %s' % (ds, type(ds)))
# the new, cleaned datastructure, which will have legacy
# items reduced to a standard structure suitable for the
# attributes of the task class
new_ds = AnsibleMapping()
if isinstance(ds, AnsibleBaseYAMLObject):
new_ds.ansible_pos = ds.ansible_pos
# since this affects the task action parsing, we have to resolve in preprocess instead of in typical validator
default_collection = AnsibleCollectionConfig.default_collection
collections_list = ds.get('collections')
if collections_list is None:
# use the parent value if our ds doesn't define it
collections_list = self.collections
else:
# Validate this untemplated field early on to guarantee we are dealing with a list.
# This is also done in CollectionSearch._load_collections() but this runs before that call.
collections_list = self.get_validated_value('collections', self._collections, collections_list, None)
if default_collection and not self._role: # FIXME: and not a collections role
if collections_list:
if default_collection not in collections_list:
collections_list.insert(0, default_collection)
else:
collections_list = [default_collection]
if collections_list and 'ansible.builtin' not in collections_list and 'ansible.legacy' not in collections_list:
collections_list.append('ansible.legacy')
if collections_list:
ds['collections'] = collections_list
# use the args parsing class to determine the action, args,
# and the delegate_to value from the various possible forms
# supported as legacy
args_parser = ModuleArgsParser(task_ds=ds, collection_list=collections_list)
try:
(action, args, delegate_to) = args_parser.parse()
except AnsibleParserError as e:
# if the raised exception was created with obj=ds args, then it already includes the detail,
# so we don't need to add it and can just re-raise.
if e._obj:
raise
# But if it wasn't, we can add the yaml object now to get more detail
raise AnsibleParserError(to_native(e), obj=ds, orig_exc=e)
else:
self._ansible_internal_redirect_list = args_parser.internal_redirect_list[:]
# the command/shell/script modules used to support the `cmd` arg,
# which corresponds to what we now call _raw_params, so move that
# value over to _raw_params (assuming it is empty)
if action in C._ACTION_HAS_CMD:
if 'cmd' in args:
if args.get('_raw_params', '') != '':
raise AnsibleError("The 'cmd' argument cannot be used when other raw parameters are specified."
" Please put everything in one or the other place.", obj=ds)
args['_raw_params'] = args.pop('cmd')
new_ds['action'] = action
new_ds['args'] = args
new_ds['delegate_to'] = delegate_to
# we handle any 'vars' specified in the ds here, as we may
# be adding things to them below (special handling for includes).
# When that deprecated feature is removed, this can be too.
if 'vars' in ds:
# _load_vars is defined in Base, and is used to load a dictionary
# or list of dictionaries in a standard way
new_ds['vars'] = self._load_vars(None, ds.get('vars'))
else:
new_ds['vars'] = dict()
for (k, v) in iteritems(ds):
if k in ('action', 'local_action', 'args', 'delegate_to') or k == action or k == 'shell':
# we don't want to re-assign these values, which were determined by the ModuleArgsParser() above
continue
elif k.startswith('with_') and k.replace("with_", "") in lookup_loader:
# transform into loop property
self._preprocess_with_loop(ds, new_ds, k, v)
else:
# pre-2.0 syntax allowed variables for include statements at the top level of the task,
# so we move those into the 'vars' dictionary here, and show a deprecation message
# as we will remove this at some point in the future.
if action in C._ACTION_INCLUDE and k not in self._valid_attrs and k not in self.DEPRECATED_ATTRIBUTES:
display.deprecated("Specifying include variables at the top-level of the task is deprecated."
" Please see:\nhttps://docs.ansible.com/ansible/playbooks_roles.html#task-include-files-and-encouraging-reuse\n\n"
" for currently supported syntax regarding included files and variables",
version="2.12", collection_name='ansible.builtin')
new_ds['vars'][k] = v
elif C.INVALID_TASK_ATTRIBUTE_FAILED or k in self._valid_attrs:
new_ds[k] = v
else:
display.warning("Ignoring invalid attribute: %s" % k)
return super(Task, self).preprocess_data(new_ds)
def _load_loop_control(self, attr, ds):
if not isinstance(ds, dict):
raise AnsibleParserError(
"the `loop_control` value must be specified as a dictionary and cannot "
"be a variable itself (though it can contain variables)",
obj=ds,
)
return LoopControl.load(data=ds, variable_manager=self._variable_manager, loader=self._loader)
def _validate_attributes(self, ds):
try:
super(Task, self)._validate_attributes(ds)
except AnsibleParserError as e:
e.message += '\nThis error can be suppressed as a warning using the "invalid_task_attribute_failed" configuration'
raise e
def post_validate(self, templar):
'''
Override of base class post_validate, to also do final validation on
the block and task include (if any) to which this task belongs.
'''
if self._parent:
self._parent.post_validate(templar)
if AnsibleCollectionConfig.default_collection:
pass
super(Task, self).post_validate(templar)
def _post_validate_loop(self, attr, value, templar):
'''
Override post validation for the loop field, which is templated
specially in the TaskExecutor class when evaluating loops.
'''
return value
def _post_validate_environment(self, attr, value, templar):
'''
Override post validation of vars on the play, as we don't want to
template these too early.
'''
env = {}
if value is not None:
def _parse_env_kv(k, v):
try:
env[k] = templar.template(v, convert_bare=False)
except AnsibleUndefinedVariable as e:
error = to_native(e)
if self.action in C._ACTION_FACT_GATHERING and 'ansible_facts.env' in error or 'ansible_env' in error:
# ignore as fact gathering is required for 'env' facts
return
raise
if isinstance(value, list):
for env_item in value:
if isinstance(env_item, dict):
for k in env_item:
_parse_env_kv(k, env_item[k])
else:
isdict = templar.template(env_item, convert_bare=False)
if isinstance(isdict, dict):
env.update(isdict)
else:
display.warning("could not parse environment value, skipping: %s" % value)
elif isinstance(value, dict):
# should not really happen
env = dict()
for env_item in value:
_parse_env_kv(env_item, value[env_item])
else:
# at this point it should be a simple string, also should not happen
env = templar.template(value, convert_bare=False)
return env
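# Editorial examples of 'environment' shapes handled above (hypothetical
# values):
#   - list form:  [{'PATH': '/opt/bin'}, '{{ proxy_env }}']
#   - dict form:  {'PATH': '/opt/bin'}   # "should not really happen"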
def _post_validate_changed_when(self, attr, value, templar):
'''
changed_when is evaluated after the execution of the task is complete,
and should not be templated during the regular post_validate step.
'''
return value
def _post_validate_failed_when(self, attr, value, templar):
'''
failed_when is evaluated after the execution of the task is complete,
and should not be templated during the regular post_validate step.
'''
return value
def _post_validate_until(self, attr, value, templar):
'''
until is evaluated after the execution of the task is complete,
and should not be templated during the regular post_validate step.
'''
return value
def get_vars(self):
all_vars = dict()
if self._parent:
all_vars.update(self._parent.get_vars())
all_vars.update(self.vars)
if 'tags' in all_vars:
del all_vars['tags']
if 'when' in all_vars:
del all_vars['when']
return all_vars
def get_include_params(self):
all_vars = dict()
if self._parent:
all_vars.update(self._parent.get_include_params())
if self.action in C._ACTION_ALL_INCLUDES:
all_vars.update(self.vars)
return all_vars
def copy(self, exclude_parent=False, exclude_tasks=False):
new_me = super(Task, self).copy()
# if the task has an associated list of candidate names, copy it to the new object too
new_me._ansible_internal_redirect_list = self._ansible_internal_redirect_list[:]
new_me._parent = None
if self._parent and not exclude_parent:
new_me._parent = self._parent.copy(exclude_tasks=exclude_tasks)
new_me._role = None
if self._role:
new_me._role = self._role
new_me.implicit = self.implicit
return new_me
def serialize(self):
data = super(Task, self).serialize()
if not self._squashed and not self._finalized:
if self._parent:
data['parent'] = self._parent.serialize()
data['parent_type'] = self._parent.__class__.__name__
if self._role:
data['role'] = self._role.serialize()
if self._ansible_internal_redirect_list:
data['_ansible_internal_redirect_list'] = self._ansible_internal_redirect_list[:]
data['implicit'] = self.implicit
return data
def deserialize(self, data):
# import is here to avoid import loops
from ansible.playbook.task_include import TaskInclude
from ansible.playbook.handler_task_include import HandlerTaskInclude
parent_data = data.get('parent', None)
if parent_data:
parent_type = data.get('parent_type')
if parent_type == 'Block':
p = Block()
elif parent_type == 'TaskInclude':
p = TaskInclude()
elif parent_type == 'HandlerTaskInclude':
p = HandlerTaskInclude()
p.deserialize(parent_data)
self._parent = p
del data['parent']
role_data = data.get('role')
if role_data:
r = Role()
r.deserialize(role_data)
self._role = r
del data['role']
self._ansible_internal_redirect_list = data.get('_ansible_internal_redirect_list', [])
self.implicit = data.get('implicit', False)
super(Task, self).deserialize(data)
def set_loader(self, loader):
'''
Sets the loader on this object and recursively on parent, child objects.
This is used primarily after the Task has been serialized/deserialized, which
does not preserve the loader.
'''
self._loader = loader
if self._parent:
self._parent.set_loader(loader)
def _get_parent_attribute(self, attr, extend=False, prepend=False):
'''
Generic logic to get the attribute or parent attribute for a task value.
'''
extend = self._valid_attrs[attr].extend
prepend = self._valid_attrs[attr].prepend
try:
value = self._attributes[attr]
# If parent is static, we can grab attrs from the parent
# otherwise, defer to the grandparent
if getattr(self._parent, 'statically_loaded', True):
_parent = self._parent
else:
_parent = self._parent._parent
if _parent and (value is Sentinel or extend):
if getattr(_parent, 'statically_loaded', True):
# vars are always inheritable, other attributes might not be for the parent but still should be for other ancestors
if attr != 'vars' and hasattr(_parent, '_get_parent_attribute'):
parent_value = _parent._get_parent_attribute(attr)
else:
parent_value = _parent._attributes.get(attr, Sentinel)
if extend:
value = self._extend_value(value, parent_value, prepend)
else:
value = parent_value
except KeyError:
pass
return value
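# e.g. list-type attributes declared with extend=True (such as 'tags')
# accumulate values up the parent chain, while plain attributes fall back
# to the nearest ancestor that set them.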
def get_dep_chain(self):
if self._parent:
return self._parent.get_dep_chain()
else:
return None
def get_search_path(self):
'''
Return the list of paths you should search for files, in order.
This follows role/playbook dependency chain.
'''
path_stack = []
dep_chain = self.get_dep_chain()
# inside role: add the dependency chain from current to dependent
if dep_chain:
path_stack.extend(reversed([x._role_path for x in dep_chain]))
# add path of task itself, unless it is already in the list
task_dir = os.path.dirname(self.get_path())
if task_dir not in path_stack:
path_stack.append(task_dir)
return path_stack
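# Illustrative example (hypothetical paths): for a task in
# roles/app/tasks/main.yml where role 'app' depends on role 'common',
# path_stack would be roughly ['/pb/roles/common', '/pb/roles/app/tasks'].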
def all_parents_static(self):
if self._parent:
return self._parent.all_parents_static()
return True
def get_first_parent_include(self):
from ansible.playbook.task_include import TaskInclude
if self._parent:
if isinstance(self._parent, TaskInclude):
return self._parent
return self._parent.get_first_parent_include()
return None
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 70,582 |
Cannot notify handler via role_name : handler_name, when the handler name also contains the role name
|
https://github.com/ansible/ansible/blob/adcdee9bb0031577698246fcfc51f8af63a56a17/lib/ansible/playbook/task.py#L124
##### SUMMARY
Handlers cannot be referenced with the `FQCN : handler name` notation if the handler name itself contains the role name.
Not sure what the reasoning is behind this, but an `httpd` role can't have a `restart httpd` handler if you want to notify it with `notify: 'httpd : restart httpd'`.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
task handler fqcn
##### ANSIBLE VERSION
```
ansible 2.9.10
config file = /home/twouters/.ansible.cfg
configured module search path = ['/home/twouters/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.3 (default, May 17 2020, 18:15:42) [GCC 10.1.0]
```
##### CONFIGURATION
Default configuration
##### OS / ENVIRONMENT
All (N/A)
##### STEPS TO REPRODUCE
```
➜ issues-70582 $ find ./
./
./roles
./roles/test
./roles/test/handlers
./roles/test/handlers/main.yml
./roles/test/tasks
./roles/test/tasks/main.yml
./pb.yml
```
`pb.yml`
```yaml
---
- hosts: localhost
roles:
- test
```
`roles/test/tasks/main.yml`
```yaml
---
- name: always trigger handler
command: /bin/true
changed_when: True
notify:
- 'test : handler 1'
- name: always trigger handler 2
command: /bin/true
changed_when: True
notify:
- 'test : handler 2 test'
```
`roles/test/handlers/main.yml`
```yaml
---
- name: handler 1
debug:
msg: this is handler 1
- name: handler 2 test
debug:
msg: this handler is not found
```
##### EXPECTED RESULTS
Two handlers are detected.
##### ACTUAL RESULTS
Only the handler that doesn't contain the role name in its name gets detected
```
➜ issues-70582 $ ansible-playbook pb.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not
match 'all'
PLAY [localhost] ***********************************************************************************************
TASK [Gathering Facts] *****************************************************************************************
ok: [localhost]
TASK [test : always trigger handler] ***************************************************************************
changed: [localhost]
TASK [test : always trigger handler 2] *************************************************************************
ERROR! The requested handler 'test : handler 2 test' was not found in either the main handlers list nor in the listening handlers list
```
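A possible workaround until this is fixed (a sketch based on the reproduction files above; `listen` matching does not go through the role-prefixed name logic):
```yaml
# roles/test/handlers/main.yml -- hypothetical variant using 'listen'
- name: handler 2 test
  debug:
    msg: this handler is reached via listen
  listen: 'restart test services'
```
```yaml
# roles/test/tasks/main.yml -- notify the listen topic instead of the name
- name: always trigger handler 2
  command: /bin/true
  changed_when: True
  notify:
    - 'restart test services'
```
Notifying by the plain handler name (`notify: 'handler 2 test'`) should also work, since the simple name is matched as a fallback.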
|
https://github.com/ansible/ansible/issues/70582
|
https://github.com/ansible/ansible/pull/72511
|
c8590c7482dcfc40f7054f629b7b6179f9e38daf
|
0ed7bfc694e5e2efe49fa0e1c8fea0a392c78c04
| 2020-07-12T08:58:13Z |
python
| 2020-11-06T16:46:58Z |
lib/ansible/plugins/strategy/__init__.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import cmd
import functools
import os
import pprint
import sys
import threading
import time
from collections import deque
from multiprocessing import Lock
from jinja2.exceptions import UndefinedError
from ansible import constants as C
from ansible import context
from ansible.errors import AnsibleError, AnsibleFileNotFound, AnsibleParserError, AnsibleUndefinedVariable
from ansible.executor import action_write_locks
from ansible.executor.process.worker import WorkerProcess
from ansible.executor.task_result import TaskResult
from ansible.executor.task_queue_manager import CallbackSend
from ansible.module_utils.six.moves import queue as Queue
from ansible.module_utils.six import iteritems, itervalues, string_types
from ansible.module_utils._text import to_text
from ansible.module_utils.connection import Connection, ConnectionError
from ansible.playbook.conditional import Conditional
from ansible.playbook.handler import Handler
from ansible.playbook.helpers import load_list_of_blocks
from ansible.playbook.included_file import IncludedFile
from ansible.playbook.task_include import TaskInclude
from ansible.plugins import loader as plugin_loader
from ansible.template import Templar
from ansible.utils.display import Display
from ansible.utils.vars import combine_vars
from ansible.vars.clean import strip_internal_keys, module_response_deepcopy
display = Display()
__all__ = ['StrategyBase']
# Entries here are matched as exact fact names or as start-of-string
# prefixes; regular expressions are not accepted.
ALWAYS_DELEGATE_FACT_PREFIXES = frozenset((
'discovered_interpreter_',
))
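# e.g. 'discovered_interpreter_python' starts with the prefix above, so it
# is always stored on the delegate_to host rather than the inventory host.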
class StrategySentinel:
pass
_sentinel = StrategySentinel()
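# post_process_whens re-evaluates a task's changed_when/failed_when against
# the finished result; those conditionals are deliberately left untemplated
# during the task's regular post_validate step and can only be resolved
# once the result exists.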
def post_process_whens(result, task, templar):
cond = None
if task.changed_when:
cond = Conditional(loader=templar._loader)
cond.when = task.changed_when
result['changed'] = cond.evaluate_conditional(templar, templar.available_variables)
if task.failed_when:
if cond is None:
cond = Conditional(loader=templar._loader)
cond.when = task.failed_when
failed_when_result = cond.evaluate_conditional(templar, templar.available_variables)
result['failed_when_result'] = result['failed'] = failed_when_result
def results_thread_main(strategy):
while True:
try:
result = strategy._final_q.get()
if isinstance(result, StrategySentinel):
break
elif isinstance(result, CallbackSend):
strategy._tqm.send_callback(result.method_name, *result.args, **result.kwargs)
elif isinstance(result, TaskResult):
with strategy._results_lock:
# only handlers have the listen attr, so this must be a handler
# we split up the results into two queues here to make sure
# handler and regular result processing don't cross wires
if 'listen' in result._task_fields:
strategy._handler_results.append(result)
else:
strategy._results.append(result)
else:
display.warning('Received an invalid object (%s) in the result queue: %r' % (type(result), result))
except (IOError, EOFError):
break
except Queue.Empty:
pass
def debug_closure(func):
"""Closure to wrap ``StrategyBase._process_pending_results`` and invoke the task debugger"""
@functools.wraps(func)
def inner(self, iterator, one_pass=False, max_passes=None, do_handlers=False):
status_to_stats_map = (
('is_failed', 'failures'),
('is_unreachable', 'dark'),
('is_changed', 'changed'),
('is_skipped', 'skipped'),
)
# We don't know the host yet, so copy the previous states for lookup after we process the new results
prev_host_states = iterator._host_states.copy()
results = func(self, iterator, one_pass=one_pass, max_passes=max_passes, do_handlers=do_handlers)
_processed_results = []
for result in results:
task = result._task
host = result._host
_queued_task_args = self._queued_task_cache.pop((host.name, task._uuid), None)
task_vars = _queued_task_args['task_vars']
play_context = _queued_task_args['play_context']
# Try to grab the previous host state, if it doesn't exist use get_host_state to generate an empty state
try:
prev_host_state = prev_host_states[host.name]
except KeyError:
prev_host_state = iterator.get_host_state(host)
while result.needs_debugger(globally_enabled=self.debugger_active):
next_action = NextAction()
dbg = Debugger(task, host, task_vars, play_context, result, next_action)
dbg.cmdloop()
if next_action.result == NextAction.REDO:
# rollback host state
self._tqm.clear_failed_hosts()
iterator._host_states[host.name] = prev_host_state
for method, what in status_to_stats_map:
if getattr(result, method)():
self._tqm._stats.decrement(what, host.name)
self._tqm._stats.decrement('ok', host.name)
# redo
self._queue_task(host, task, task_vars, play_context)
_processed_results.extend(debug_closure(func)(self, iterator, one_pass))
break
elif next_action.result == NextAction.CONTINUE:
_processed_results.append(result)
break
elif next_action.result == NextAction.EXIT:
# Matches KeyboardInterrupt from bin/ansible
sys.exit(99)
else:
_processed_results.append(result)
return _processed_results
return inner
class StrategyBase:
'''
This is the base class for strategy plugins, which contains some common
code useful to all strategies like running handlers, cleanup actions, etc.
'''
# by default, strategies should support throttling but we allow individual
# strategies to disable this and either forego supporting it or managing
# the throttling internally (as `free` does)
ALLOW_BASE_THROTTLING = True
def __init__(self, tqm):
self._tqm = tqm
self._inventory = tqm.get_inventory()
self._workers = tqm._workers
self._variable_manager = tqm.get_variable_manager()
self._loader = tqm.get_loader()
self._final_q = tqm._final_q
self._step = context.CLIARGS.get('step', False)
self._diff = context.CLIARGS.get('diff', False)
# the task cache is a dictionary of tuples of (host.name, task._uuid)
# used to find the original task object of in-flight tasks and to store
# the task args/vars and play context info used to queue the task.
self._queued_task_cache = {}
# Backwards compat: self._display isn't really needed, just import the global display and use that.
self._display = display
# internal counters
self._pending_results = 0
self._pending_handler_results = 0
self._cur_worker = 0
# this dictionary is used to keep track of hosts that have
# outstanding tasks still in queue
self._blocked_hosts = dict()
# this dictionary is used to keep track of hosts that have
# flushed handlers
self._flushed_hosts = dict()
self._results = deque()
self._handler_results = deque()
self._results_lock = threading.Condition(threading.Lock())
# create the result processing thread for reading results in the background
self._results_thread = threading.Thread(target=results_thread_main, args=(self,))
self._results_thread.daemon = True
self._results_thread.start()
# holds the list of active (persistent) connections to be shutdown at
# play completion
self._active_connections = dict()
# Caches for get_host calls, to avoid calling excessively
# These values should be set at the top of the ``run`` method of each
# strategy plugin. Use ``_set_hosts_cache`` to set these values
self._hosts_cache = []
self._hosts_cache_all = []
self.debugger_active = C.ENABLE_TASK_DEBUGGER
def _set_hosts_cache(self, play, refresh=True):
"""Responsible for setting _hosts_cache and _hosts_cache_all
See comment in ``__init__`` for the purpose of these caches
"""
if not refresh and all((self._hosts_cache, self._hosts_cache_all)):
return
if Templar(None).is_template(play.hosts):
_pattern = 'all'
else:
_pattern = play.hosts or 'all'
self._hosts_cache_all = [h.name for h in self._inventory.get_hosts(pattern=_pattern, ignore_restrictions=True)]
self._hosts_cache = [h.name for h in self._inventory.get_hosts(play.hosts, order=play.order)]
def cleanup(self):
# close active persistent connections
for sock in itervalues(self._active_connections):
try:
conn = Connection(sock)
conn.reset()
except ConnectionError as e:
# most likely socket is already closed
display.debug("got an error while closing persistent connection: %s" % e)
self._final_q.put(_sentinel)
self._results_thread.join()
def run(self, iterator, play_context, result=0):
# execute one more pass through the iterator without peeking, to
# make sure that all of the hosts are advanced to their final task.
# This should be safe, as everything should be ITERATING_COMPLETE by
# this point, though the strategy may not advance the hosts itself.
for host in self._hosts_cache:
if host not in self._tqm._unreachable_hosts:
try:
iterator.get_next_task_for_host(self._inventory.hosts[host])
except KeyError:
iterator.get_next_task_for_host(self._inventory.get_host(host))
# save the failed/unreachable hosts, as the run_handlers()
# method will clear that information during its execution
failed_hosts = iterator.get_failed_hosts()
unreachable_hosts = self._tqm._unreachable_hosts.keys()
display.debug("running handlers")
handler_result = self.run_handlers(iterator, play_context)
if isinstance(handler_result, bool) and not handler_result:
result |= self._tqm.RUN_ERROR
elif not handler_result:
result |= handler_result
# now update with the hosts (if any) that failed or were
# unreachable during the handler execution phase
failed_hosts = set(failed_hosts).union(iterator.get_failed_hosts())
unreachable_hosts = set(unreachable_hosts).union(self._tqm._unreachable_hosts.keys())
# return the appropriate code, depending on the status of the hosts after the run
if not isinstance(result, bool) and result != self._tqm.RUN_OK:
return result
elif len(unreachable_hosts) > 0:
return self._tqm.RUN_UNREACHABLE_HOSTS
elif len(failed_hosts) > 0:
return self._tqm.RUN_FAILED_HOSTS
else:
return self._tqm.RUN_OK
def get_hosts_remaining(self, play):
self._set_hosts_cache(play, refresh=False)
ignore = set(self._tqm._failed_hosts).union(self._tqm._unreachable_hosts)
return [host for host in self._hosts_cache if host not in ignore]
def get_failed_hosts(self, play):
self._set_hosts_cache(play, refresh=False)
return [host for host in self._hosts_cache if host in self._tqm._failed_hosts]
def add_tqm_variables(self, vars, play):
'''
Base class method to add extra variables/information to the list of task
vars sent through the executor engine regarding the task queue manager state.
'''
vars['ansible_current_hosts'] = self.get_hosts_remaining(play)
vars['ansible_failed_hosts'] = self.get_failed_hosts(play)
def _queue_task(self, host, task, task_vars, play_context):
''' handles queueing the task up to be sent to a worker '''
display.debug("entering _queue_task() for %s/%s" % (host.name, task.action))
# Add a write lock for tasks.
# Maybe this should be added somewhere further up the call stack but
# this is the earliest in the code where we have task (1) extracted
# into its own variable and (2) there's only a single code path
# leading to the module being run. This is called by three
# functions: __init__.py::_do_handler_run(), linear.py::run(), and
# free.py::run() so we'd have to add to all three to do it there.
# The next common higher level is __init__.py::run() and that has
# tasks inside of play_iterator so we'd have to extract them to do it
# there.
if task.action not in action_write_locks.action_write_locks:
display.debug('Creating lock for %s' % task.action)
action_write_locks.action_write_locks[task.action] = Lock()
# create a templar and template things we need later for the queuing process
templar = Templar(loader=self._loader, variables=task_vars)
try:
throttle = int(templar.template(task.throttle))
except Exception as e:
raise AnsibleError("Failed to convert the throttle value to an integer.", obj=task._ds, orig_exc=e)
# and then queue the new task
try:
# Determine the "rewind point" of the worker list. This means we start
# iterating over the list of workers until the end of the list is found.
# Normally, that is simply the length of the workers list (as determined
# by the forks or serial setting), however a task/block/play may "throttle"
# that limit down.
rewind_point = len(self._workers)
if throttle > 0 and self.ALLOW_BASE_THROTTLING:
if task.run_once:
display.debug("Ignoring 'throttle' as 'run_once' is also set for '%s'" % task.get_name())
else:
if throttle <= rewind_point:
display.debug("task: %s, throttle: %d" % (task.get_name(), throttle))
rewind_point = throttle
queued = False
starting_worker = self._cur_worker
while True:
if self._cur_worker >= rewind_point:
self._cur_worker = 0
worker_prc = self._workers[self._cur_worker]
if worker_prc is None or not worker_prc.is_alive():
self._queued_task_cache[(host.name, task._uuid)] = {
'host': host,
'task': task,
'task_vars': task_vars,
'play_context': play_context
}
worker_prc = WorkerProcess(self._final_q, task_vars, host, task, play_context, self._loader, self._variable_manager, plugin_loader)
self._workers[self._cur_worker] = worker_prc
self._tqm.send_callback('v2_runner_on_start', host, task)
worker_prc.start()
display.debug("worker is %d (out of %d available)" % (self._cur_worker + 1, len(self._workers)))
queued = True
self._cur_worker += 1
if self._cur_worker >= rewind_point:
self._cur_worker = 0
if queued:
break
elif self._cur_worker == starting_worker:
time.sleep(0.0001)
if isinstance(task, Handler):
self._pending_handler_results += 1
else:
self._pending_results += 1
except (EOFError, IOError, AssertionError) as e:
# most likely an abort
display.debug("got an error while queuing: %s" % e)
return
display.debug("exiting _queue_task() for %s/%s" % (host.name, task.action))
def get_task_hosts(self, iterator, task_host, task):
if task.run_once:
host_list = [host for host in self._hosts_cache if host not in self._tqm._unreachable_hosts]
else:
host_list = [task_host.name]
return host_list
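# Note: for run_once tasks the returned list covers every reachable host
# in the play, so facts and registered results are applied everywhere,
# not just on the single host that actually executed the task.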
def get_delegated_hosts(self, result, task):
host_name = result.get('_ansible_delegated_vars', {}).get('ansible_delegated_host', None)
return [host_name or task.delegate_to]
def _set_always_delegated_facts(self, result, task):
"""Sets host facts for ``delegate_to`` hosts for facts that should
always be delegated
This operation mutates ``result`` to remove the always delegated facts
See ``ALWAYS_DELEGATE_FACT_PREFIXES``
"""
if task.delegate_to is None:
return
facts = result['ansible_facts']
always_keys = set()
_add = always_keys.add
for fact_key in facts:
for always_key in ALWAYS_DELEGATE_FACT_PREFIXES:
if fact_key.startswith(always_key):
_add(fact_key)
if always_keys:
_pop = facts.pop
always_facts = {
'ansible_facts': dict((k, _pop(k)) for k in list(facts) if k in always_keys)
}
host_list = self.get_delegated_hosts(result, task)
_set_host_facts = self._variable_manager.set_host_facts
for target_host in host_list:
_set_host_facts(target_host, always_facts)
@debug_closure
def _process_pending_results(self, iterator, one_pass=False, max_passes=None, do_handlers=False):
'''
Reads results off the final queue and takes appropriate action
based on the result (executing callbacks, updating state, etc.).
'''
ret_results = []
handler_templar = Templar(self._loader)
def get_original_host(host_name):
# FIXME: this should not need x2 _inventory
host_name = to_text(host_name)
if host_name in self._inventory.hosts:
return self._inventory.hosts[host_name]
else:
return self._inventory.get_host(host_name)
def search_handler_blocks_by_name(handler_name, handler_blocks):
# iterate in reversed order since last handler loaded with the same name wins
for handler_block in reversed(handler_blocks):
for handler_task in handler_block.block:
if handler_task.name:
if not handler_task.cached_name:
if handler_templar.is_template(handler_task.name):
handler_templar.available_variables = self._variable_manager.get_vars(play=iterator._play,
task=handler_task,
_hosts=self._hosts_cache,
_hosts_all=self._hosts_cache_all)
handler_task.name = handler_templar.template(handler_task.name)
handler_task.cached_name = True
try:
# match the handler under any name it can be addressed by: its raw
# (possibly templated) name, and the full result of get_name() both
# with and without the role FQCN prefix, since a notification may
# legitimately use any of these forms.
candidates = (
handler_task.name,
handler_task.get_name(include_role_fqcn=False),
handler_task.get_name(include_role_fqcn=True),
)
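# Illustrative: a handler named 'restart httpd' in role 'httpd' can be
# matched here as 'restart httpd' or 'httpd : restart httpd' (and, for a
# collection role, under the FQCN-prefixed form).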
if handler_name in candidates:
return handler_task
except (UndefinedError, AnsibleUndefinedVariable):
# We skip this handler due to the fact that it may be using
# a variable in the name that was conditionally included via
# set_fact or some other method, and we don't want to error
# out unnecessarily
continue
return None
cur_pass = 0
while True:
try:
self._results_lock.acquire()
if do_handlers:
task_result = self._handler_results.popleft()
else:
task_result = self._results.popleft()
except IndexError:
break
finally:
self._results_lock.release()
# get the original host and task. We then assign them to the TaskResult for use in callbacks/etc.
original_host = get_original_host(task_result._host)
queue_cache_entry = (original_host.name, task_result._task)
found_task = self._queued_task_cache.get(queue_cache_entry)['task']
original_task = found_task.copy(exclude_parent=True, exclude_tasks=True)
original_task._parent = found_task._parent
original_task.from_attrs(task_result._task_fields)
task_result._host = original_host
task_result._task = original_task
# send callbacks for 'non final' results
if '_ansible_retry' in task_result._result:
self._tqm.send_callback('v2_runner_retry', task_result)
continue
elif '_ansible_item_result' in task_result._result:
if task_result.is_failed() or task_result.is_unreachable():
self._tqm.send_callback('v2_runner_item_on_failed', task_result)
elif task_result.is_skipped():
self._tqm.send_callback('v2_runner_item_on_skipped', task_result)
else:
if 'diff' in task_result._result:
if self._diff or getattr(original_task, 'diff', False):
self._tqm.send_callback('v2_on_file_diff', task_result)
self._tqm.send_callback('v2_runner_item_on_ok', task_result)
continue
# dispatch below based on the overall status of the task result
role_ran = False
if task_result.is_failed():
role_ran = True
ignore_errors = original_task.ignore_errors
if not ignore_errors:
display.debug("marking %s as failed" % original_host.name)
if original_task.run_once:
# if we're using run_once, we have to fail every host here
for h in self._inventory.get_hosts(iterator._play.hosts):
if h.name not in self._tqm._unreachable_hosts:
state, _ = iterator.get_next_task_for_host(h, peek=True)
iterator.mark_host_failed(h)
state, new_task = iterator.get_next_task_for_host(h, peek=True)
else:
iterator.mark_host_failed(original_host)
# grab the current state and if we're iterating on the rescue portion
# of a block then we save the failed task in a special var for use
# within the rescue/always
state, _ = iterator.get_next_task_for_host(original_host, peek=True)
if iterator.is_failed(original_host) and state and state.run_state == iterator.ITERATING_COMPLETE:
self._tqm._failed_hosts[original_host.name] = True
# Use of get_active_state() here helps detect proper state if, say, we are in a rescue
# block from an included file (include_tasks). In a non-included rescue case, a rescue
# that starts with a new 'block' will have an active state of ITERATING_TASKS, so we also
# check the current state block tree to see if any blocks are rescuing.
if state and (iterator.get_active_state(state).run_state == iterator.ITERATING_RESCUE or
iterator.is_any_block_rescuing(state)):
self._tqm._stats.increment('rescued', original_host.name)
self._variable_manager.set_nonpersistent_facts(
original_host.name,
dict(
ansible_failed_task=original_task.serialize(),
ansible_failed_result=task_result._result,
),
)
else:
self._tqm._stats.increment('failures', original_host.name)
else:
self._tqm._stats.increment('ok', original_host.name)
self._tqm._stats.increment('ignored', original_host.name)
if 'changed' in task_result._result and task_result._result['changed']:
self._tqm._stats.increment('changed', original_host.name)
self._tqm.send_callback('v2_runner_on_failed', task_result, ignore_errors=ignore_errors)
elif task_result.is_unreachable():
ignore_unreachable = original_task.ignore_unreachable
if not ignore_unreachable:
self._tqm._unreachable_hosts[original_host.name] = True
iterator._play._removed_hosts.append(original_host.name)
else:
self._tqm._stats.increment('skipped', original_host.name)
task_result._result['skip_reason'] = 'Host %s is unreachable' % original_host.name
self._tqm._stats.increment('dark', original_host.name)
self._tqm.send_callback('v2_runner_on_unreachable', task_result)
elif task_result.is_skipped():
self._tqm._stats.increment('skipped', original_host.name)
self._tqm.send_callback('v2_runner_on_skipped', task_result)
else:
role_ran = True
if original_task.loop:
# this task had a loop, and has more than one result, so
# loop over all of them instead of a single result
result_items = task_result._result.get('results', [])
else:
result_items = [task_result._result]
for result_item in result_items:
if '_ansible_notify' in result_item:
if task_result.is_changed():
# Walk the notified handler names, mark every matching handler as
# notified for this host, and raise/warn below if a name matches
# nothing in either the main or the listening handler lists.
for handler_name in result_item['_ansible_notify']:
found = False
# Find the handler using the above helper. First we look up the
# dependency chain of the current task (if it's from a role), otherwise
# we just look through the list of handlers in the current play/all
# roles and use the first one that matches the notify name
target_handler = search_handler_blocks_by_name(handler_name, iterator._play.handlers)
if target_handler is not None:
found = True
if target_handler.notify_host(original_host):
self._tqm.send_callback('v2_playbook_on_notify', target_handler, original_host)
for listening_handler_block in iterator._play.handlers:
for listening_handler in listening_handler_block.block:
listeners = getattr(listening_handler, 'listen', []) or []
if not listeners:
continue
listeners = listening_handler.get_validated_value(
'listen', listening_handler._valid_attrs['listen'], listeners, handler_templar
)
if handler_name not in listeners:
continue
else:
found = True
if listening_handler.notify_host(original_host):
self._tqm.send_callback('v2_playbook_on_notify', listening_handler, original_host)
# and if none were found, then we raise an error
if not found:
msg = ("The requested handler '%s' was not found in either the main handlers list nor in the listening "
"handlers list" % handler_name)
if C.ERROR_ON_MISSING_HANDLER:
raise AnsibleError(msg)
else:
display.warning(msg)
if 'add_host' in result_item:
# this task added a new host (add_host module)
new_host_info = result_item.get('add_host', dict())
self._add_host(new_host_info, result_item)
post_process_whens(result_item, original_task, handler_templar)
elif 'add_group' in result_item:
# this task added a new group (group_by module)
self._add_group(original_host, result_item)
post_process_whens(result_item, original_task, handler_templar)
if 'ansible_facts' in result_item:
# if delegated fact and we are delegating facts, we need to change target host for them
if original_task.delegate_to is not None and original_task.delegate_facts:
host_list = self.get_delegated_hosts(result_item, original_task)
else:
# Set facts that should always be on the delegated hosts
self._set_always_delegated_facts(result_item, original_task)
host_list = self.get_task_hosts(iterator, original_host, original_task)
if original_task.action in C._ACTION_INCLUDE_VARS:
for (var_name, var_value) in iteritems(result_item['ansible_facts']):
# find the host we're actually referring too here, which may
# be a host that is not really in inventory at all
for target_host in host_list:
self._variable_manager.set_host_variable(target_host, var_name, var_value)
else:
cacheable = result_item.pop('_ansible_facts_cacheable', False)
for target_host in host_list:
# so set_fact is a misnomer but 'cacheable = true' was meant to create an 'actual fact'
# to avoid issues with precedence and confusion with set_fact normal operation,
# we set BOTH fact and nonpersistent_facts (aka hostvar)
# when fact is retrieved from cache in subsequent operations it will have the lower precedence,
# but for playbook setting it the 'higher' precedence is kept
is_set_fact = original_task.action in C._ACTION_SET_FACT
if not is_set_fact or cacheable:
self._variable_manager.set_host_facts(target_host, result_item['ansible_facts'].copy())
if is_set_fact:
self._variable_manager.set_nonpersistent_facts(target_host, result_item['ansible_facts'].copy())
if 'ansible_stats' in result_item and 'data' in result_item['ansible_stats'] and result_item['ansible_stats']['data']:
if 'per_host' not in result_item['ansible_stats'] or result_item['ansible_stats']['per_host']:
host_list = self.get_task_hosts(iterator, original_host, original_task)
else:
host_list = [None]
data = result_item['ansible_stats']['data']
aggregate = 'aggregate' in result_item['ansible_stats'] and result_item['ansible_stats']['aggregate']
for myhost in host_list:
for k in data.keys():
if aggregate:
self._tqm._stats.update_custom_stats(k, data[k], myhost)
else:
self._tqm._stats.set_custom_stats(k, data[k], myhost)
if 'diff' in task_result._result:
if self._diff or getattr(original_task, 'diff', False):
self._tqm.send_callback('v2_on_file_diff', task_result)
if not isinstance(original_task, TaskInclude):
self._tqm._stats.increment('ok', original_host.name)
if 'changed' in task_result._result and task_result._result['changed']:
self._tqm._stats.increment('changed', original_host.name)
# finally, send the ok for this task
self._tqm.send_callback('v2_runner_on_ok', task_result)
# register final results
if original_task.register:
host_list = self.get_task_hosts(iterator, original_host, original_task)
clean_copy = strip_internal_keys(module_response_deepcopy(task_result._result))
if 'invocation' in clean_copy:
del clean_copy['invocation']
for target_host in host_list:
self._variable_manager.set_nonpersistent_facts(target_host, {original_task.register: clean_copy})
if do_handlers:
self._pending_handler_results -= 1
else:
self._pending_results -= 1
if original_host.name in self._blocked_hosts:
del self._blocked_hosts[original_host.name]
# If this is a role task, mark the parent role as being run (if
# the task was ok or failed, but not skipped or unreachable)
if original_task._role is not None and role_ran: # TODO: and original_task.action not in C._ACTION_INCLUDE_ROLE:?
# lookup the role in the ROLE_CACHE to make sure we're dealing
# with the correct object and mark it as executed
for (entry, role_obj) in iteritems(iterator._play.ROLE_CACHE[original_task._role.get_name()]):
if role_obj._uuid == original_task._role._uuid:
role_obj._had_task_run[original_host.name] = True
ret_results.append(task_result)
if one_pass or max_passes is not None and (cur_pass + 1) >= max_passes:
break
cur_pass += 1
return ret_results
def _wait_on_handler_results(self, iterator, handler, notified_hosts):
'''
Wait for the handler tasks to complete, using a short sleep
between checks to ensure we don't spin lock
'''
ret_results = []
handler_results = 0
display.debug("waiting for handler results...")
while (self._pending_handler_results > 0 and
handler_results < len(notified_hosts) and
not self._tqm._terminated):
if self._tqm.has_dead_workers():
raise AnsibleError("A worker was found in a dead state")
results = self._process_pending_results(iterator, do_handlers=True)
ret_results.extend(results)
handler_results += len([
r._host for r in results if r._host in notified_hosts and
r.task_name == handler.name])
if self._pending_handler_results > 0:
time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL)
display.debug("no more pending handlers, returning what we have")
return ret_results
def _wait_on_pending_results(self, iterator):
'''
Wait for the shared counter to drop to zero, using a short sleep
between checks to ensure we don't spin lock
'''
ret_results = []
display.debug("waiting for pending results...")
while self._pending_results > 0 and not self._tqm._terminated:
if self._tqm.has_dead_workers():
raise AnsibleError("A worker was found in a dead state")
results = self._process_pending_results(iterator)
ret_results.extend(results)
if self._pending_results > 0:
time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL)
display.debug("no more pending results, returning what we have")
return ret_results
def _add_host(self, host_info, result_item):
'''
Helper function to add a new host to inventory based on a task result.
'''
changed = False
if host_info:
host_name = host_info.get('host_name')
# Check if host in inventory, add if not
if host_name not in self._inventory.hosts:
self._inventory.add_host(host_name, 'all')
self._hosts_cache_all.append(host_name)
changed = True
new_host = self._inventory.hosts.get(host_name)
# Set/update the vars for this host
new_host_vars = new_host.get_vars()
new_host_combined_vars = combine_vars(new_host_vars, host_info.get('host_vars', dict()))
if new_host_vars != new_host_combined_vars:
new_host.vars = new_host_combined_vars
changed = True
new_groups = host_info.get('groups', [])
for group_name in new_groups:
if group_name not in self._inventory.groups:
group_name = self._inventory.add_group(group_name)
changed = True
new_group = self._inventory.groups[group_name]
if new_group.add_host(self._inventory.hosts[host_name]):
changed = True
# reconcile inventory, ensures inventory rules are followed
if changed:
self._inventory.reconcile_inventory()
result_item['changed'] = changed
def _add_group(self, host, result_item):
'''
Helper function to add a group (if it does not exist), and to assign the
specified host to that group.
'''
changed = False
# the host here is from the executor side, which means it was a
# serialized/cloned copy and we'll need to look up the proper
# host object from the master inventory
real_host = self._inventory.hosts.get(host.name)
if real_host is None:
if host.name == self._inventory.localhost.name:
real_host = self._inventory.localhost
else:
raise AnsibleError('%s cannot be matched in inventory' % host.name)
group_name = result_item.get('add_group')
parent_group_names = result_item.get('parent_groups', [])
if group_name not in self._inventory.groups:
group_name = self._inventory.add_group(group_name)
for name in parent_group_names:
if name not in self._inventory.groups:
# create the new group and add it to inventory
self._inventory.add_group(name)
changed = True
group = self._inventory.groups[group_name]
for parent_group_name in parent_group_names:
parent_group = self._inventory.groups[parent_group_name]
new = parent_group.add_child_group(group)
if new and not changed:
changed = True
if real_host not in group.get_hosts():
changed = group.add_host(real_host)
if group not in real_host.get_groups():
changed = real_host.add_group(group)
if changed:
self._inventory.reconcile_inventory()
result_item['changed'] = changed
def _copy_included_file(self, included_file):
'''
A proven safe and performant way to create a copy of an included file
'''
ti_copy = included_file._task.copy(exclude_parent=True)
ti_copy._parent = included_file._task._parent
temp_vars = ti_copy.vars.copy()
temp_vars.update(included_file._vars)
ti_copy.vars = temp_vars
return ti_copy
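# Note: the include's runtime vars are merged over the task's own vars,
# so parameters passed at include time take precedence on the copy.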
def _load_included_file(self, included_file, iterator, is_handler=False):
'''
Loads an included YAML file of tasks, applying the optional set of variables.
'''
display.debug("loading included file: %s" % included_file._filename)
try:
data = self._loader.load_from_file(included_file._filename)
if data is None:
return []
elif not isinstance(data, list):
raise AnsibleError("included task files must contain a list of tasks")
ti_copy = self._copy_included_file(included_file)
# pop tags out of the include args, if they were specified there, and assign
# them to the include. If the include already had tags specified, we raise an
# error so that users know not to specify them both ways
tags = included_file._task.vars.pop('tags', [])
if isinstance(tags, string_types):
tags = tags.split(',')
if len(tags) > 0:
if len(included_file._task.tags) > 0:
raise AnsibleParserError("Include tasks should not specify tags in more than one way (both via args and directly on the task). "
"Mixing tag specify styles is prohibited for whole import hierarchy, not only for single import statement",
obj=included_file._task._ds)
display.deprecated("You should not specify tags in the include parameters. All tags should be specified using the task-level option",
version='2.12', collection_name='ansible.builtin')
included_file._task.tags = tags
block_list = load_list_of_blocks(
data,
play=iterator._play,
parent_block=ti_copy.build_parent_block(),
role=included_file._task._role,
use_handlers=is_handler,
loader=self._loader,
variable_manager=self._variable_manager,
)
# since we skip incrementing the stats when the task result is
# first processed, we do so now for each host in the list
for host in included_file._hosts:
self._tqm._stats.increment('ok', host.name)
except AnsibleError as e:
if isinstance(e, AnsibleFileNotFound):
reason = "Could not find or access '%s' on the Ansible Controller." % to_text(e.file_name)
else:
reason = to_text(e)
# mark all of the hosts including this file as failed, send callbacks,
# and increment the stats for this host
for host in included_file._hosts:
tr = TaskResult(host=host, task=included_file._task, return_data=dict(failed=True, reason=reason))
iterator.mark_host_failed(host)
self._tqm._failed_hosts[host.name] = True
self._tqm._stats.increment('failures', host.name)
self._tqm.send_callback('v2_runner_on_failed', tr)
return []
# finally, send the callback and return the list of blocks loaded
self._tqm.send_callback('v2_playbook_on_include', included_file)
display.debug("done processing included file")
return block_list
def run_handlers(self, iterator, play_context):
'''
Runs handlers on those hosts which have been notified.
'''
result = self._tqm.RUN_OK
for handler_block in iterator._play.handlers:
# FIXME: handlers need to support the rescue/always portions of blocks too,
# but this may take some work in the iterator and gets tricky when
# we consider the ability of meta tasks to flush handlers
for handler in handler_block.block:
if handler.notified_hosts:
result = self._do_handler_run(handler, handler.get_name(), iterator=iterator, play_context=play_context)
if not result:
break
return result
def _do_handler_run(self, handler, handler_name, iterator, play_context, notified_hosts=None):
# FIXME: need to use iterator.get_failed_hosts() instead?
# if not len(self.get_hosts_remaining(iterator._play)):
# self._tqm.send_callback('v2_playbook_on_no_hosts_remaining')
# result = False
# break
if notified_hosts is None:
notified_hosts = handler.notified_hosts[:]
# strategy plugins that filter hosts need access to the iterator to identify failed hosts
failed_hosts = self._filter_notified_failed_hosts(iterator, notified_hosts)
notified_hosts = self._filter_notified_hosts(notified_hosts)
notified_hosts += failed_hosts
if len(notified_hosts) > 0:
saved_name = handler.name
handler.name = handler_name
self._tqm.send_callback('v2_playbook_on_handler_task_start', handler)
handler.name = saved_name
bypass_host_loop = False
try:
action = plugin_loader.action_loader.get(handler.action, class_only=True, collection_list=handler.collections)
if getattr(action, 'BYPASS_HOST_LOOP', False):
bypass_host_loop = True
except KeyError:
# we don't care here, because the action may simply not have a
# corresponding action plugin
pass
host_results = []
for host in notified_hosts:
if not iterator.is_failed(host) or iterator._play.force_handlers:
task_vars = self._variable_manager.get_vars(play=iterator._play, host=host, task=handler,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
self.add_tqm_variables(task_vars, play=iterator._play)
templar = Templar(loader=self._loader, variables=task_vars)
if not handler.cached_name:
handler.name = templar.template(handler.name)
handler.cached_name = True
self._queue_task(host, handler, task_vars, play_context)
if templar.template(handler.run_once) or bypass_host_loop:
break
# collect the results from the handler run
host_results = self._wait_on_handler_results(iterator, handler, notified_hosts)
included_files = IncludedFile.process_include_results(
host_results,
iterator=iterator,
loader=self._loader,
variable_manager=self._variable_manager
)
result = True
if len(included_files) > 0:
for included_file in included_files:
try:
new_blocks = self._load_included_file(included_file, iterator=iterator, is_handler=True)
# for every task in each block brought in by the include, add the list
# of hosts which included the file to the notified_handlers dict
for block in new_blocks:
iterator._play.handlers.append(block)
for task in block.block:
task_name = task.get_name()
display.debug("adding task '%s' included in handler '%s'" % (task_name, handler_name))
task.notified_hosts = included_file._hosts[:]
result = self._do_handler_run(
handler=task,
handler_name=task_name,
iterator=iterator,
play_context=play_context,
notified_hosts=included_file._hosts[:],
)
if not result:
break
except AnsibleError as e:
for host in included_file._hosts:
iterator.mark_host_failed(host)
self._tqm._failed_hosts[host.name] = True
display.warning(to_text(e))
continue
# remove hosts from notification list
handler.notified_hosts = [
h for h in handler.notified_hosts
if h not in notified_hosts]
display.debug("done running handlers, result is: %s" % result)
return result
def _filter_notified_failed_hosts(self, iterator, notified_hosts):
return []
def _filter_notified_hosts(self, notified_hosts):
'''
Filter notified hosts accordingly to strategy
'''
# As main strategy is linear, we do not filter hosts
# We return a copy to avoid race conditions
return notified_hosts[:]
def _take_step(self, task, host=None):
ret = False
msg = u'Perform task: %s ' % task
if host:
msg += u'on %s ' % host
msg += u'(N)o/(y)es/(c)ontinue: '
resp = display.prompt(msg)
if resp.lower() in ['y', 'yes']:
display.debug("User ran task")
ret = True
elif resp.lower() in ['c', 'continue']:
display.debug("User ran task and canceled step mode")
self._step = False
ret = True
else:
display.debug("User skipped task")
display.banner(msg)
return ret
def _cond_not_supported_warn(self, task_name):
display.warning("%s task does not support when conditional" % task_name)
def _execute_meta(self, task, play_context, iterator, target_host):
# meta tasks store their args in the _raw_params field of args,
# since they do not use k=v pairs, so get that
meta_action = task.args.get('_raw_params')
def _evaluate_conditional(h):
all_vars = self._variable_manager.get_vars(play=iterator._play, host=h, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
templar = Templar(loader=self._loader, variables=all_vars)
return task.evaluate_conditional(templar, all_vars)
skipped = False
msg = ''
skip_reason = '%s conditional evaluated to False' % meta_action
self._tqm.send_callback('v2_playbook_on_task_start', task, is_conditional=False)
# These don't support "when" conditionals
if meta_action in ('noop', 'flush_handlers', 'refresh_inventory', 'reset_connection') and task.when:
self._cond_not_supported_warn(meta_action)
if meta_action == 'noop':
msg = "noop"
elif meta_action == 'flush_handlers':
self._flushed_hosts[target_host] = True
self.run_handlers(iterator, play_context)
self._flushed_hosts[target_host] = False
msg = "ran handlers"
elif meta_action == 'refresh_inventory':
self._inventory.refresh_inventory()
self._set_hosts_cache(iterator._play)
msg = "inventory successfully refreshed"
elif meta_action == 'clear_facts':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
hostname = host.get_name()
self._variable_manager.clear_facts(hostname)
msg = "facts cleared"
else:
skipped = True
skip_reason += ', not clearing facts and fact cache for %s' % target_host.name
elif meta_action == 'clear_host_errors':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
self._tqm._failed_hosts.pop(host.name, False)
self._tqm._unreachable_hosts.pop(host.name, False)
iterator._host_states[host.name].fail_state = iterator.FAILED_NONE
msg = "cleared host errors"
else:
skipped = True
skip_reason += ', not clearing host error state for %s' % target_host.name
elif meta_action == 'end_play':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
if host.name not in self._tqm._unreachable_hosts:
iterator._host_states[host.name].run_state = iterator.ITERATING_COMPLETE
msg = "ending play"
else:
skipped = True
skip_reason += ', continuing play'
elif meta_action == 'end_host':
if _evaluate_conditional(target_host):
iterator._host_states[target_host.name].run_state = iterator.ITERATING_COMPLETE
iterator._play._removed_hosts.append(target_host.name)
msg = "ending play for %s" % target_host.name
else:
skipped = True
skip_reason += ", continuing execution for %s" % target_host.name
# TODO: Nix msg here? Left for historical reasons, but skip_reason exists now.
msg = "end_host conditional evaluated to false, continuing execution for %s" % target_host.name
elif meta_action == 'reset_connection':
all_vars = self._variable_manager.get_vars(play=iterator._play, host=target_host, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
templar = Templar(loader=self._loader, variables=all_vars)
# apply the given task's information to the connection info,
# which may override some fields already set by the play or
# the options specified on the command line
play_context = play_context.set_task_and_variable_override(task=task, variables=all_vars, templar=templar)
# fields set from the play/task may be based on variables, so we have to
# do the same kind of post validation step on it here before we use it.
play_context.post_validate(templar=templar)
# now that the play context is finalized, if the remote_addr is not set
# default to using the host's address field as the remote address
if not play_context.remote_addr:
play_context.remote_addr = target_host.address
# We also add "magic" variables back into the variables dict to make sure
# a certain subset of variables exist.
play_context.update_vars(all_vars)
if target_host in self._active_connections:
connection = Connection(self._active_connections[target_host])
del self._active_connections[target_host]
else:
connection = plugin_loader.connection_loader.get(play_context.connection, play_context, os.devnull)
play_context.set_attributes_from_plugin(connection)
if connection:
try:
connection.reset()
msg = 'reset connection'
except ConnectionError as e:
# most likely socket is already closed
display.debug("got an error while closing persistent connection: %s" % e)
else:
msg = 'no connection, nothing to reset'
else:
raise AnsibleError("invalid meta action requested: %s" % meta_action, obj=task._ds)
result = {'msg': msg}
if skipped:
result['skipped'] = True
result['skip_reason'] = skip_reason
else:
result['changed'] = False
display.vv("META: %s" % msg)
res = TaskResult(target_host, task, result)
if skipped:
self._tqm.send_callback('v2_runner_on_skipped', res)
return [res]
def get_hosts_left(self, iterator):
''' returns list of available hosts for this iterator by filtering out unreachables '''
hosts_left = []
for host in self._hosts_cache:
if host not in self._tqm._unreachable_hosts:
try:
hosts_left.append(self._inventory.hosts[host])
except KeyError:
hosts_left.append(self._inventory.get_host(host))
return hosts_left
def update_active_connections(self, results):
''' updates the current active persistent connections '''
for r in results:
if 'args' in r._task_fields:
socket_path = r._task_fields['args'].get('_ansible_socket')
if socket_path:
if r._host not in self._active_connections:
self._active_connections[r._host] = socket_path
class NextAction(object):
""" The next action after an interpreter's exit. """
REDO = 1
CONTINUE = 2
EXIT = 3
def __init__(self, result=EXIT):
self.result = result
class Debugger(cmd.Cmd):
prompt_continuous = '> ' # multiple lines
def __init__(self, task, host, task_vars, play_context, result, next_action):
# cmd.Cmd is old-style class
cmd.Cmd.__init__(self)
self.prompt = '[%s] %s (debug)> ' % (host, task)
self.intro = None
self.scope = {}
self.scope['task'] = task
self.scope['task_vars'] = task_vars
self.scope['host'] = host
self.scope['play_context'] = play_context
self.scope['result'] = result
self.next_action = next_action
def cmdloop(self):
try:
cmd.Cmd.cmdloop(self)
except KeyboardInterrupt:
pass
do_h = cmd.Cmd.do_help
def do_EOF(self, args):
"""Quit"""
return self.do_quit(args)
def do_quit(self, args):
"""Quit"""
display.display('User interrupted execution')
self.next_action.result = NextAction.EXIT
return True
do_q = do_quit
def do_continue(self, args):
"""Continue to next result"""
self.next_action.result = NextAction.CONTINUE
return True
do_c = do_continue
def do_redo(self, args):
"""Schedule task for re-execution. The re-execution may not be the next result"""
self.next_action.result = NextAction.REDO
return True
do_r = do_redo
def do_update_task(self, args):
"""Recreate the task from ``task._ds``, and template with updated ``task_vars``"""
templar = Templar(None, shared_loader_obj=None, variables=self.scope['task_vars'])
task = self.scope['task']
task = task.load_data(task._ds)
task.post_validate(templar)
self.scope['task'] = task
do_u = do_update_task
def evaluate(self, args):
try:
return eval(args, globals(), self.scope)
except Exception:
t, v = sys.exc_info()[:2]
if isinstance(t, str):
exc_type_name = t
else:
exc_type_name = t.__name__
display.display('***%s:%s' % (exc_type_name, repr(v)))
raise
def do_pprint(self, args):
"""Pretty Print"""
try:
result = self.evaluate(args)
display.display(pprint.pformat(result))
except Exception:
pass
do_p = do_pprint
def execute(self, args):
try:
code = compile(args + '\n', '<stdin>', 'single')
exec(code, globals(), self.scope)
except Exception:
t, v = sys.exc_info()[:2]
if isinstance(t, str):
exc_type_name = t
else:
exc_type_name = t.__name__
display.display('***%s:%s' % (exc_type_name, repr(v)))
raise
def default(self, line):
try:
self.execute(line)
except Exception:
pass
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 70,582 |
Cannot notify handler via role_name : handler_name, when the handler name also contains the role name
|
https://github.com/ansible/ansible/blob/adcdee9bb0031577698246fcfc51f8af63a56a17/lib/ansible/playbook/task.py#L124
##### SUMMARY
Handlers cannot be referenced with the `FQCN : handler name` notation if the handler name itself contains the role name.
Not sure what the reasoning is behind this, but an `httpd` role can't have a `restart httpd` handler if you want to notify it with `notify: 'httpd : restart httpd'`.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
task handler fqcn
##### ANSIBLE VERSION
```
ansible 2.9.10
config file = /home/twouters/.ansible.cfg
configured module search path = ['/home/twouters/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.3 (default, May 17 2020, 18:15:42) [GCC 10.1.0]
```
##### CONFIGURATION
Default configuration
##### OS / ENVIRONMENT
All (N/A)
##### STEPS TO REPRODUCE
```
➜ issues-70582 $ find ./
./
./roles
./roles/test
./roles/test/handlers
./roles/test/handlers/main.yml
./roles/test/tasks
./roles/test/tasks/main.yml
./pb.yml
```
`pb.yml`
```yaml
---
- hosts: localhost
roles:
- test
```
`roles/test/tasks/main.yml`
```yaml
---
- name: always trigger handler
command: /bin/true
changed_when: True
notify:
- 'test : handler 1'
- name: always trigger handler 2
command: /bin/true
changed_when: True
notify:
- 'test : handler 2 test'
```
`roles/test/handlers/main.yml`
```yaml
---
- name: handler 1
debug:
msg: this is handler 1
- name: handler 2 test
debug:
msg: this handler is not found
```
##### EXPECTED RESULTS
Two handlers are detected.
##### ACTUAL RESULTS
Only the handler that doesn't contain the role name in its name gets detected
```
➜ issues-70582 $ ansible-playbook pb.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not
match 'all'
PLAY [localhost] ***********************************************************************************************
TASK [Gathering Facts] *****************************************************************************************
ok: [localhost]
TASK [test : always trigger handler] ***************************************************************************
changed: [localhost]
TASK [test : always trigger handler 2] *************************************************************************
ERROR! The requested handler 'test : handler 2 test' was not found in either the main handlers list nor in the listening handlers list
```
|
https://github.com/ansible/ansible/issues/70582
|
https://github.com/ansible/ansible/pull/72511
|
c8590c7482dcfc40f7054f629b7b6179f9e38daf
|
0ed7bfc694e5e2efe49fa0e1c8fea0a392c78c04
| 2020-07-12T08:58:13Z |
python
| 2020-11-06T16:46:58Z |
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/roles/common_handlers/handlers/main.yml
|
# This handler should only be called one time; if it is called more than
# once, this task should fail on subsequent executions.
- name: test_fqcn_handler
set_fact:
handler_counter: '{{ handler_counter|int + 1 }}'
failed_when: handler_counter|int > 1
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 70,582 |
Cannot notify handler via role_name : handler_name, when the handler name also contains the role name
|
https://github.com/ansible/ansible/blob/adcdee9bb0031577698246fcfc51f8af63a56a17/lib/ansible/playbook/task.py#L124
##### SUMMARY
Handlers cannot be referenced with the `FQCN : handler name` notation if the handler name itself contains the role name.
Not sure what the reasoning is behind this, but an `httpd` role can't have a `restart httpd` handler if you want to notify it with `notify: 'httpd : restart httpd'`.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
task handler fqcn
##### ANSIBLE VERSION
```
ansible 2.9.10
config file = /home/twouters/.ansible.cfg
configured module search path = ['/home/twouters/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.3 (default, May 17 2020, 18:15:42) [GCC 10.1.0]
```
##### CONFIGURATION
Default configuration
##### OS / ENVIRONMENT
All (N/A)
##### STEPS TO REPRODUCE
```
➜ issues-70582 $ find ./
./
./roles
./roles/test
./roles/test/handlers
./roles/test/handlers/main.yml
./roles/test/tasks
./roles/test/tasks/main.yml
./pb.yml
```
`pb.yml`
```yaml
---
- hosts: localhost
roles:
- test
```
`roles/test/tasks/main.yml`
```yaml
---
- name: always trigger handler
command: /bin/true
changed_when: True
notify:
- 'test : handler 1'
- name: always trigger handler 2
command: /bin/true
changed_when: True
notify:
- 'test : handler 2 test'
```
`roles/test/handlers/main.yml`
```yaml
---
- name: handler 1
debug:
msg: this is handler 1
- name: handler 2 test
debug:
msg: this handler is not found
```
##### EXPECTED RESULTS
Both handlers are detected.
##### ACTUAL RESULTS
Only the handler whose name does not contain the role name is detected:
```
➜ issues-70582 $ ansible-playbook pb.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not
match 'all'
PLAY [localhost] ***********************************************************************************************
TASK [Gathering Facts] *****************************************************************************************
ok: [localhost]
TASK [test : always trigger handler] ***************************************************************************
changed: [localhost]
TASK [test : always trigger handler 2] *************************************************************************
ERROR! The requested handler 'test : handler 2 test' was not found in either the main handlers list nor in the listening handlers list
```
|
https://github.com/ansible/ansible/issues/70582
|
https://github.com/ansible/ansible/pull/72511
|
c8590c7482dcfc40f7054f629b7b6179f9e38daf
|
0ed7bfc694e5e2efe49fa0e1c8fea0a392c78c04
| 2020-07-12T08:58:13Z |
python
| 2020-11-06T16:46:58Z |
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/roles/test_fqcn_handlers/tasks/main.yml
|
- debug:
msg: Fire fqcn handler
changed_when: true
notify:
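# The three notations below are the FQCN-qualified, role-qualified, and
# bare forms of the same handler; all should resolve to the same handler.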
- 'testns.testcoll.common_handlers : test_fqcn_handler'
- 'common_handlers : test_fqcn_handler'
- 'test_fqcn_handler'
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,366 |
Add CI platform: freebsd/12.2
|
##### SUMMARY
Replace the `freebsd/12.1` platform in the test matrix with `freebsd/12.2`.
FreeBSD 12.2 was released in October, 2020.
##### ISSUE TYPE
Feature Idea
##### COMPONENT NAME
shippable.yml
|
https://github.com/ansible/ansible/issues/72366
|
https://github.com/ansible/ansible/pull/72524
|
7a08efc54684fab81a91b0e2f826fd8dd52ec9da
|
11b7091c84ed6cf9576f319118b88f2a81894764
| 2020-10-27T23:41:35Z |
python
| 2020-11-09T18:09:53Z |
changelogs/fragments/ansible-test-freebsd12-2.yml
|
minor_changes:
- ansible-test - Now supports freebsd/12.2 remote (https://github.com/ansible/ansible/issues/72366).
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,366 |
Add CI platform: freebsd/12.2
|
##### SUMMARY
Replace the `freebsd/12.1` platform in the test matrix with `freebsd/12.2`.
FreeBSD 12.2 was released in October, 2020.
##### ISSUE TYPE
Feature Idea
##### COMPONENT NAME
shippable.yml
|
https://github.com/ansible/ansible/issues/72366
|
https://github.com/ansible/ansible/pull/72524
|
7a08efc54684fab81a91b0e2f826fd8dd52ec9da
|
11b7091c84ed6cf9576f319118b88f2a81894764
| 2020-10-27T23:41:35Z |
python
| 2020-11-09T18:09:53Z |
shippable.yml
|
language: python
env:
matrix:
- T=none
matrix:
exclude:
- env: T=none
include:
- env: T=sanity/1
- env: T=sanity/2
- env: T=sanity/3
- env: T=sanity/4
- env: T=sanity/5
- env: T=units/2.6
- env: T=units/2.7
- env: T=units/3.5
- env: T=units/3.6
- env: T=units/3.7
- env: T=units/3.8
- env: T=units/3.9
- env: T=windows/2012/1
- env: T=windows/2012-R2/1
- env: T=windows/2016/1
- env: T=windows/2019/1
- env: T=macos/10.15/1
- env: T=rhel/7.8/1
- env: T=rhel/8.2/1
- env: T=freebsd/11.1/1
- env: T=freebsd/12.2/1
- env: T=linux/alpine3/1
- env: T=linux/centos6/1
- env: T=linux/centos7/1
- env: T=linux/centos8/1
- env: T=linux/fedora31/1
- env: T=linux/fedora32/1
- env: T=linux/opensuse15py2/1
- env: T=linux/opensuse15/1
- env: T=linux/ubuntu1604/1
- env: T=linux/ubuntu1804/1
- env: T=macos/10.15/2
- env: T=rhel/7.8/2
- env: T=rhel/8.2/2
- env: T=freebsd/11.1/2
- env: T=freebsd/12.2/2
- env: T=linux/alpine3/2
- env: T=linux/centos6/2
- env: T=linux/centos7/2
- env: T=linux/centos8/2
- env: T=linux/fedora31/2
- env: T=linux/fedora32/2
- env: T=linux/opensuse15py2/2
- env: T=linux/opensuse15/2
- env: T=linux/ubuntu1604/2
- env: T=linux/ubuntu1804/2
- env: T=macos/10.15/3
- env: T=rhel/7.8/3
- env: T=rhel/8.2/3
- env: T=freebsd/11.1/3
- env: T=freebsd/12.2/3
- env: T=linux/alpine3/3
- env: T=linux/centos6/3
- env: T=linux/centos7/3
- env: T=linux/centos8/3
- env: T=linux/fedora31/3
- env: T=linux/fedora32/3
- env: T=linux/opensuse15py2/3
- env: T=linux/opensuse15/3
- env: T=linux/ubuntu1604/3
- env: T=linux/ubuntu1804/3
- env: T=macos/10.15/4
- env: T=rhel/7.8/4
- env: T=rhel/8.2/4
- env: T=freebsd/11.1/4
- env: T=freebsd/12.2/4
- env: T=linux/alpine3/4
- env: T=linux/centos6/4
- env: T=linux/centos7/4
- env: T=linux/centos8/4
- env: T=linux/fedora31/4
- env: T=linux/fedora32/4
- env: T=linux/opensuse15py2/4
- env: T=linux/opensuse15/4
- env: T=linux/ubuntu1604/4
- env: T=linux/ubuntu1804/4
- env: T=macos/10.15/5
- env: T=rhel/7.8/5
- env: T=rhel/8.2/5
- env: T=freebsd/11.1/5
- env: T=freebsd/12.2/5
- env: T=linux/alpine3/5
- env: T=linux/centos6/5
- env: T=linux/centos7/5
- env: T=linux/centos8/5
- env: T=linux/fedora31/5
- env: T=linux/fedora32/5
- env: T=linux/opensuse15py2/5
- env: T=linux/opensuse15/5
- env: T=linux/ubuntu1604/5
- env: T=linux/ubuntu1804/5
- env: T=galaxy/2.7/1
- env: T=galaxy/3.6/1
- env: T=generic/2.7/1
- env: T=generic/3.6/1
- env: T=i/osx/10.11
- env: T=i/rhel/7.8
- env: T=i/rhel/8.2
- env: T=i/freebsd/11.1
- env: T=i/freebsd/12.2
- env: T=i/linux/centos6
- env: T=i/linux/centos7
- env: T=i/linux/centos8
- env: T=i/linux/fedora31
- env: T=i/linux/fedora32
- env: T=i/linux/opensuse15py2
- env: T=i/linux/opensuse15
- env: T=i/linux/ubuntu1604
- env: T=i/linux/ubuntu1804
- env: T=i/windows/2012
- env: T=i/windows/2012-R2
- env: T=i/windows/2016
- env: T=i/windows/2019
- env: T=i/ios/csr1000v//1
- env: T=i/vyos/1.1.8/2.7/1
- env: T=i/vyos/1.1.8/3.6/1
- env: T=i/aws/2.7/1
- env: T=i/aws/3.6/1
- env: T=i/cloud//1
branches:
except:
- "*-patch-*"
- "revert-*-*"
build:
pre_ci_boot:
image_name: quay.io/ansible/shippable-build-container
image_tag: 6.10.4.0
pull: true
options: --privileged=true --net=bridge
ci:
- test/utils/shippable/timing.sh test/utils/shippable/shippable.sh $T
integrations:
notifications:
- integrationName: email
type: email
on_success: never
on_failure: never
on_start: never
on_pull_request: never
- integrationName: irc
type: irc
recipients:
- "chat.freenode.net#ansible-notices"
on_success: change
on_failure: always
on_start: never
on_pull_request: always
- integrationName: slack
type: slack
recipients:
- "#shippable"
on_success: change
on_failure: always
on_start: never
on_pull_request: never
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,366 |
Add CI platform: freebsd/12.2
|
##### SUMMARY
Replace the `freebsd/12.1` platform in the test matrix with `freebsd/12.2`.
FreeBSD 12.2 was released in October, 2020.
##### ISSUE TYPE
Feature Idea
##### COMPONENT NAME
shippable.yml
|
https://github.com/ansible/ansible/issues/72366
|
https://github.com/ansible/ansible/pull/72524
|
7a08efc54684fab81a91b0e2f826fd8dd52ec9da
|
11b7091c84ed6cf9576f319118b88f2a81894764
| 2020-10-27T23:41:35Z |
python
| 2020-11-09T18:09:53Z |
test/integration/targets/setup_paramiko/install-FreeBSD-12.2-python-3.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,366 |
Add CI platform: freebsd/12.2
|
##### SUMMARY
Replace the `freebsd/12.1` platform in the test matrix with `freebsd/12.2`.
FreeBSD 12.2 was released in October, 2020.
##### ISSUE TYPE
Feature Idea
##### COMPONENT NAME
shippable.yml
|
https://github.com/ansible/ansible/issues/72366
|
https://github.com/ansible/ansible/pull/72524
|
7a08efc54684fab81a91b0e2f826fd8dd52ec9da
|
11b7091c84ed6cf9576f319118b88f2a81894764
| 2020-10-27T23:41:35Z |
python
| 2020-11-09T18:09:53Z |
test/integration/targets/setup_paramiko/install.yml
|
- hosts: localhost
tasks:
- name: Detect Paramiko
detect_paramiko:
register: detect_paramiko
- name: Persist Result
copy:
content: "{{ detect_paramiko }}"
dest: "{{ lookup('env', 'OUTPUT_DIR') }}/detect-paramiko.json"
- name: Install Paramiko
when: not detect_paramiko.found
include_tasks: "{{ item }}"
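# Fallback order, most specific first: distribution + major version +
# python, then OS family, then a generic per-python install, and finally
# a task that fails with a helpful message.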
with_first_found:
- "install-{{ ansible_distribution }}-{{ ansible_distribution_major_version }}-python-{{ ansible_python.version.major }}.yml"
- "install-{{ ansible_os_family }}-{{ ansible_distribution_major_version }}-python-{{ ansible_python.version.major }}.yml"
- "install-python-{{ ansible_python.version.major }}.yml"
- "install-fail.yml"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,366 |
Add CI platform: freebsd/12.2
|
##### SUMMARY
Replace the `freebsd/12.1` platform in the test matrix with `freebsd/12.2`.
FreeBSD 12.2 was released in October, 2020.
##### ISSUE TYPE
Feature Idea
##### COMPONENT NAME
shippable.yml
|
https://github.com/ansible/ansible/issues/72366
|
https://github.com/ansible/ansible/pull/72524
|
7a08efc54684fab81a91b0e2f826fd8dd52ec9da
|
11b7091c84ed6cf9576f319118b88f2a81894764
| 2020-10-27T23:41:35Z |
python
| 2020-11-09T18:09:53Z |
test/integration/targets/setup_paramiko/uninstall-FreeBSD-12.2-python-3.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,366 |
Add CI platform: freebsd/12.2
|
##### SUMMARY
Replace the `freebsd/12.1` platform in the test matrix with `freebsd/12.2`.
FreeBSD 12.2 was released in October, 2020.
##### ISSUE TYPE
Feature Idea
##### COMPONENT NAME
shippable.yml
|
https://github.com/ansible/ansible/issues/72366
|
https://github.com/ansible/ansible/pull/72524
|
7a08efc54684fab81a91b0e2f826fd8dd52ec9da
|
11b7091c84ed6cf9576f319118b88f2a81894764
| 2020-10-27T23:41:35Z |
python
| 2020-11-09T18:09:53Z |
test/integration/targets/setup_paramiko/uninstall.yml
|
- hosts: localhost
vars:
detect_paramiko: '{{ lookup("file", lookup("env", "OUTPUT_DIR") + "/detect-paramiko.json") | from_json }}'
tasks:
- name: Uninstall Paramiko and Verify Results
when: not detect_paramiko.found
block:
- name: Uninstall Paramiko
include_tasks: "{{ item }}"
with_first_found:
- "uninstall-{{ ansible_distribution }}-{{ ansible_distribution_major_version }}-python-{{ ansible_python.version.major }}.yml"
- "uninstall-{{ ansible_os_family }}-{{ ansible_distribution_major_version }}-python-{{ ansible_python.version.major }}.yml"
- "uninstall-{{ ansible_pkg_mgr }}-python-{{ ansible_python.version.major }}.yml"
- "uninstall-{{ ansible_pkg_mgr }}.yml"
- "uninstall-fail.yml"
- name: Verify Paramiko was uninstalled
detect_paramiko:
register: detect_paramiko
failed_when: detect_paramiko.found
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,366 |
Add CI platform: freebsd/12.2
|
##### SUMMARY
Replace the `freebsd/12.1` platform in the test matrix with `freebsd/12.2`.
FreeBSD 12.2 was released in October, 2020.
##### ISSUE TYPE
Feature Idea
##### COMPONENT NAME
shippable.yml
|
https://github.com/ansible/ansible/issues/72366
|
https://github.com/ansible/ansible/pull/72524
|
7a08efc54684fab81a91b0e2f826fd8dd52ec9da
|
11b7091c84ed6cf9576f319118b88f2a81894764
| 2020-10-27T23:41:35Z |
python
| 2020-11-09T18:09:53Z |
test/lib/ansible_test/_data/completion/remote.txt
|
freebsd/11.1 python=2.7,3.6 python_dir=/usr/local/bin
freebsd/12.1 python=3.6,2.7 python_dir=/usr/local/bin
freebsd/12.2 python=3.7,2.7 python_dir=/usr/local/bin
osx/10.11 python=2.7 python_dir=/usr/local/bin
macos/10.15 python=3.8 python_dir=/usr/local/bin
rhel/7.6 python=2.7
rhel/7.8 python=2.7
rhel/8.1 python=3.6
rhel/8.2 python=3.6
aix/7.2 python=2.7 httptester=disabled temp-unicode=disabled pip-check=disabled
power/centos/7 python=2.7
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 71,444 |
iptables comment parameter incorrectly positioned in iptables
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
When using the **comment** parameter with the iptables module, the comment is not positioned as the right-most element of the resulting iptables rule.
The output currently looks quite messy.
##### ISSUE TYPE
- Cosmetic Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
**iptables** module, **comment** parameter
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.9
config file = /home/me/project/ansible.cfg
configured module search path = ['/home/me/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/me/vConstruction/lib/python3.6/site-packages/ansible
executable location = /home/me/vConstruction/bin/ansible
python version = 3.6.9 (default, Jul 17 2020, 12:50:27) [GCC 8.4.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_CALLBACK_WHITELIST(/home/me/project/ansible.cfg) = ['profile_tasks']
DEFAULT_HOST_LIST(/home/me/project/ansible.cfg) = ['/home/me/project/inventory']
DEPRECATION_WARNINGS(/home/me/project/ansible.cfg) = False
HOST_KEY_CHECKING(/home/me/project/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Host where I use Ansible: Ubuntu 18.04 via WSL 2.
Target VM where the iptables rules are being created: Ubuntu 18.04.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
It can be reproduced by using the following tasks and running `iptables -L` on the target machine.
I added the DNS task as well, because that one looks correct, though I suspect that is only because **ctstate** is not specified.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Create HTTPS rule
iptables:
chain: INPUT
in_interface: eth0
protocol: tcp
destination_port: https
ctstate: NEW
jump: ACCEPT
comment: Accept new inbound HTTPS connections to eth0 interface
- name: Create DNS rule
iptables:
chain: INPUT
protocol: udp
destination_port: domain
jump: ACCEPT
comment: Accept inbound DNS connections
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
```bash
ACCEPT tcp -- anywhere anywhere tcp dpt:https ctstate NEW /* Accept new inbound HTTPS connections to eth0 interface */
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```bash
ACCEPT tcp -- anywhere anywhere tcp dpt:https /* Accept new inbound HTTPS connections to eth0 interface */ ctstate NEW
ACCEPT udp -- anywhere anywhere udp dpt:domain /* Accept inbound DNS connections */
```
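For reference, the rendering order in `iptables -L` simply follows the order in which the module concatenates the rule arguments: whatever is appended first ends up left-most. A minimal sketch (hypothetical, not the module's actual code) of building the argument list so the comment match is appended last and therefore renders right-most:
```python
# Hypothetical sketch of rule-argument ordering; the flags are real iptables flags.
rule = ['-p', 'tcp', '-i', 'eth0', '--destination-port', 'https', '-j', 'ACCEPT']
rule += ['-m', 'conntrack', '--ctstate', 'NEW']  # match options first
rule += ['-m', 'comment', '--comment',
         'Accept new inbound HTTPS connections to eth0 interface']  # comment last
print(' '.join(rule))
```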
|
https://github.com/ansible/ansible/issues/71444
|
https://github.com/ansible/ansible/pull/71496
|
11b7091c84ed6cf9576f319118b88f2a81894764
|
c1da427a5ec678f052fd2cd4885840c4d761946a
| 2020-08-25T16:48:26Z |
python
| 2020-11-09T18:40:55Z |
changelogs/fragments/71496-iptables-reorder-comment-position.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 71,444 |
iptables comment parameter incorrectly positioned in iptables
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
When using the **comment** parameter with the iptables module, the comment is not positioned as the right-most element of the resulting iptables rule.
The output currently looks quite messy.
##### ISSUE TYPE
- Cosmetic Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
**iptables** module, **comment** parameter
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.9
config file = /home/me/project/ansible.cfg
configured module search path = ['/home/me/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/me/vConstruction/lib/python3.6/site-packages/ansible
executable location = /home/me/vConstruction/bin/ansible
python version = 3.6.9 (default, Jul 17 2020, 12:50:27) [GCC 8.4.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_CALLBACK_WHITELIST(/home/me/project/ansible.cfg) = ['profile_tasks']
DEFAULT_HOST_LIST(/home/me/project/ansible.cfg) = ['/home/me/project/inventory']
DEPRECATION_WARNINGS(/home/me/project/ansible.cfg) = False
HOST_KEY_CHECKING(/home/me/project/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Host where I use Ansible: Ubuntu 18.04 via WSL 2.
Target VM where the iptables rules are being created: Ubuntu 18.04.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
It can be reproduced by using the following tasks and running `iptables -L` on the target machine.
I added the DNS task as well, because that one looks correct, though I suspect that is only because **ctstate** is not specified.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Create HTTPS rule
iptables:
chain: INPUT
in_interface: eth0
protocol: tcp
destination_port: https
ctstate: NEW
jump: ACCEPT
comment: Accept new inbound HTTPS connections to eth0 interface
- name: Create DNS rule
iptables:
chain: INPUT
protocol: udp
destination_port: domain
jump: ACCEPT
comment: Accept inbound DNS connections
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
```bash
ACCEPT tcp -- anywhere anywhere tcp dpt:https ctstate NEW /* Accept new inbound HTTPS connections to eth0 interface */
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```bash
ACCEPT tcp -- anywhere anywhere tcp dpt:https /* Accept new inbound HTTPS connections to eth0 interface */ ctstate NEW
ACCEPT udp -- anywhere anywhere udp dpt:domain /* Accept inbound DNS connections */
```
|
https://github.com/ansible/ansible/issues/71444
|
https://github.com/ansible/ansible/pull/71496
|
11b7091c84ed6cf9576f319118b88f2a81894764
|
c1da427a5ec678f052fd2cd4885840c4d761946a
| 2020-08-25T16:48:26Z |
python
| 2020-11-09T18:40:55Z |
lib/ansible/modules/iptables.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2015, Linus Unnebäck <[email protected]>
# Copyright: (c) 2017, Sébastien DA ROCHA <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: iptables
short_description: Modify iptables rules
version_added: "2.0"
author:
- Linus Unnebäck (@LinusU) <[email protected]>
- Sébastien DA ROCHA (@sebastiendarocha)
description:
- C(iptables) is used to set up, maintain, and inspect the tables of IP packet
filter rules in the Linux kernel.
- This module does not handle the saving and/or loading of rules, but rather
only manipulates the current rules that are present in memory. This is the
same as the behaviour of the C(iptables) and C(ip6tables) command which
this module uses internally.
notes:
- This module just deals with individual rules. If you need advanced
chaining of rules, the recommended way is to template the iptables restore
file.
options:
table:
description:
- This option specifies the packet matching table which the command should operate on.
- If the kernel is configured with automatic module loading, an attempt will be made
to load the appropriate module for that table if it is not already there.
type: str
choices: [ filter, nat, mangle, raw, security ]
default: filter
state:
description:
- Whether the rule should be absent or present.
type: str
choices: [ absent, present ]
default: present
action:
description:
- Whether the rule should be appended at the bottom or inserted at the top.
- If the rule already exists the chain will not be modified.
type: str
choices: [ append, insert ]
default: append
version_added: "2.2"
rule_num:
description:
- Insert the rule as the given rule number.
- This works only with C(action=insert).
type: str
version_added: "2.5"
ip_version:
description:
- Which version of the IP protocol this rule should apply to.
type: str
choices: [ ipv4, ipv6 ]
default: ipv4
chain:
description:
- Specify the iptables chain to modify.
- This could be a user-defined chain or one of the standard iptables chains, like
C(INPUT), C(FORWARD), C(OUTPUT), C(PREROUTING), C(POSTROUTING), C(SECMARK) or C(CONNSECMARK).
type: str
protocol:
description:
- The protocol of the rule or of the packet to check.
- The specified protocol can be one of C(tcp), C(udp), C(udplite), C(icmp), C(ipv6-icmp) or C(icmpv6),
C(esp), C(ah), C(sctp) or the special keyword C(all), or it can be a numeric value,
representing one of these protocols or a different one.
- A protocol name from I(/etc/protocols) is also allowed.
- A C(!) argument before the protocol inverts the test.
- The number zero is equivalent to all.
- C(all) will match with all protocols and is taken as default when this option is omitted.
type: str
source:
description:
- Source specification.
- Address can be either a network name, a hostname, a network IP address
(with /mask), or a plain IP address.
- Hostnames will be resolved once only, before the rule is submitted to
the kernel. Please note that specifying any name to be resolved with
a remote query such as DNS is a really bad idea.
- The mask can be either a network mask or a plain number, specifying
the number of 1's at the left side of the network mask. Thus, a mask
of 24 is equivalent to 255.255.255.0. A C(!) argument before the
address specification inverts the sense of the address.
type: str
destination:
description:
- Destination specification.
- Address can be either a network name, a hostname, a network IP address
(with /mask), or a plain IP address.
- Hostnames will be resolved once only, before the rule is submitted to
the kernel. Please note that specifying any name to be resolved with
a remote query such as DNS is a really bad idea.
- The mask can be either a network mask or a plain number, specifying
the number of 1's at the left side of the network mask. Thus, a mask
of 24 is equivalent to 255.255.255.0. A C(!) argument before the
address specification inverts the sense of the address.
type: str
tcp_flags:
description:
- TCP flags specification.
- C(tcp_flags) expects a dict with the two keys C(flags) and C(flags_set).
type: dict
default: {}
version_added: "2.4"
suboptions:
flags:
description:
- List of flags you want to examine.
type: list
elements: str
flags_set:
description:
- Flags to be set.
type: list
elements: str
match:
description:
- Specifies a match to use, that is, an extension module that tests for
a specific property.
- The set of matches make up the condition under which a target is invoked.
- Matches are evaluated first to last if specified as an array and work in short-circuit
fashion, i.e. if one extension yields false, evaluation will stop.
type: list
elements: str
default: []
jump:
description:
- This specifies the target of the rule; i.e., what to do if the packet matches it.
- The target can be a user-defined chain (other than the one
this rule is in), one of the special builtin targets which decide the
fate of the packet immediately, or an extension (see EXTENSIONS
below).
- If this option is omitted in a rule (and the goto parameter
is not used), then matching the rule will have no effect on the
packet's fate, but the counters on the rule will be incremented.
type: str
gateway:
description:
- This specifies the IP address of host to send the cloned packets.
- This option is only valid when C(jump) is set to C(TEE).
type: str
version_added: "2.8"
log_prefix:
description:
- Specifies a log text for the rule. Only makes sense with a LOG jump.
type: str
version_added: "2.5"
log_level:
description:
- Logging level according to the syslogd-defined priorities.
- The value can be strings or numbers from 1-8.
- This parameter is only applicable if C(jump) is set to C(LOG).
type: str
version_added: "2.8"
choices: [ '0', '1', '2', '3', '4', '5', '6', '7', 'emerg', 'alert', 'crit', 'error', 'warning', 'notice', 'info', 'debug' ]
goto:
description:
- This specifies that the processing should continue in a user specified chain.
- Unlike the jump argument return will not continue processing in
this chain but instead in the chain that called us via jump.
type: str
in_interface:
description:
- Name of an interface via which a packet was received (only for packets
entering the C(INPUT), C(FORWARD) and C(PREROUTING) chains).
- When the C(!) argument is used before the interface name, the sense is inverted.
- If the interface name ends in a C(+), then any interface which begins with
this name will match.
- If this option is omitted, any interface name will match.
type: str
out_interface:
description:
- Name of an interface via which a packet is going to be sent (for
packets entering the C(FORWARD), C(OUTPUT) and C(POSTROUTING) chains).
- When the C(!) argument is used before the interface name, the sense is inverted.
- If the interface name ends in a C(+), then any interface which begins
with this name will match.
- If this option is omitted, any interface name will match.
type: str
fragment:
description:
- This means that the rule only refers to second and further fragments
of fragmented packets.
- Since there is no way to tell the source or destination ports of such
a packet (or ICMP type), such a packet will not match any rules which specify them.
- When the "!" argument precedes the fragment argument, the rule will only match head fragments,
or unfragmented packets.
type: str
set_counters:
description:
- This enables the administrator to initialize the packet and byte
counters of a rule (during C(INSERT), C(APPEND), C(REPLACE) operations).
type: str
source_port:
description:
- Source port or port range specification.
- This can either be a service name or a port number.
- An inclusive range can also be specified, using the format C(first:last).
- If the first port is omitted, C(0) is assumed; if the last is omitted, C(65535) is assumed.
- If the first port is greater than the second one they will be swapped.
type: str
destination_port:
description:
- "Destination port or port range specification. This can either be
a service name or a port number. An inclusive range can also be
specified, using the format first:last. If the first port is omitted,
'0' is assumed; if the last is omitted, '65535' is assumed. If the
first port is greater than the second one they will be swapped.
This is only valid if the rule also specifies one of the following
protocols: tcp, udp, dccp or sctp."
type: str
to_ports:
description:
- This specifies a destination port or range of ports to use; without
this, the destination port is never altered.
- This is only valid if the rule also specifies one of the protocols
C(tcp), C(udp), C(dccp) or C(sctp).
type: str
to_destination:
description:
- This specifies a destination address to use with C(DNAT).
- Without this, the destination address is never altered.
type: str
version_added: "2.1"
to_source:
description:
- This specifies a source address to use with C(SNAT).
- Without this, the source address is never altered.
type: str
version_added: "2.2"
syn:
description:
- This allows matching packets that have the SYN bit set and the ACK
and RST bits unset.
- When negated, this matches all packets with the RST or the ACK bits set.
type: str
choices: [ ignore, match, negate ]
default: ignore
version_added: "2.5"
set_dscp_mark:
description:
- This allows specifying a DSCP mark to be added to packets.
It takes either an integer or hex value.
- Mutually exclusive with C(set_dscp_mark_class).
type: str
version_added: "2.1"
set_dscp_mark_class:
description:
- This allows specifying a predefined DiffServ class which will be
translated to the corresponding DSCP mark.
- Mutually exclusive with C(set_dscp_mark).
type: str
version_added: "2.1"
comment:
description:
- This specifies a comment that will be added to the rule.
type: str
ctstate:
description:
- C(ctstate) is a list of the connection states to match in the conntrack module.
- Possible states are C(INVALID), C(NEW), C(ESTABLISHED), C(RELATED), C(UNTRACKED), C(SNAT), C(DNAT)
type: list
elements: str
default: []
src_range:
description:
- Specifies the source IP range to match in the iprange module.
type: str
version_added: "2.8"
dst_range:
description:
- Specifies the destination IP range to match in the iprange module.
type: str
version_added: "2.8"
limit:
description:
- Specifies the maximum average number of matches to allow per second.
- The number can specify units explicitly, using `/second', `/minute',
`/hour' or `/day', or parts of them (so `5/second' is the same as
`5/s').
type: str
limit_burst:
description:
- Specifies the maximum burst before the above limit kicks in.
type: str
version_added: "2.1"
uid_owner:
description:
- Specifies the UID or username to use in match by owner rule.
- From Ansible 2.6, when the C(!) argument is prepended, it inverts
the rule to apply to all users except the one specified.
type: str
version_added: "2.1"
gid_owner:
description:
- Specifies the GID or group to use in match by owner rule.
type: str
version_added: "2.9"
reject_with:
description:
- 'Specifies the error packet type to return while rejecting. It implies
"jump: REJECT"'
type: str
version_added: "2.1"
icmp_type:
description:
- This allows specification of the ICMP type, which can be a numeric
ICMP type, type/code pair, or one of the ICMP type names shown by the
command 'iptables -p icmp -h'
type: str
version_added: "2.2"
flush:
description:
- Flushes the specified table and chain of all rules.
- If no chain is specified then the entire table is purged.
- Ignores all other parameters.
type: bool
version_added: "2.2"
policy:
description:
- Set the policy for the chain to the given target.
- Only built-in chains can have policies.
- This parameter requires the C(chain) parameter.
- Ignores all other parameters.
type: str
choices: [ ACCEPT, DROP, QUEUE, RETURN ]
version_added: "2.2"
wait:
description:
- Wait N seconds for the xtables lock to prevent multiple instances of
the program from running concurrently.
type: str
version_added: "2.10"
'''
EXAMPLES = r'''
- name: Block specific IP
iptables:
chain: INPUT
source: 8.8.8.8
jump: DROP
become: yes
- name: Forward port 80 to 8600
iptables:
table: nat
chain: PREROUTING
in_interface: eth0
protocol: tcp
match: tcp
destination_port: 80
jump: REDIRECT
to_ports: 8600
comment: Redirect web traffic to port 8600
become: yes
- name: Allow related and established connections
iptables:
chain: INPUT
ctstate: ESTABLISHED,RELATED
jump: ACCEPT
become: yes
- name: Allow new incoming SYN packets on TCP port 22 (SSH)
iptables:
chain: INPUT
protocol: tcp
destination_port: 22
ctstate: NEW
syn: match
jump: ACCEPT
comment: Accept new SSH connections.
- name: Match on IP ranges
iptables:
chain: FORWARD
src_range: 192.168.1.100-192.168.1.199
dst_range: 10.0.0.1-10.0.0.50
jump: ACCEPT
- name: Tag all outbound tcp packets with DSCP mark 8
iptables:
chain: OUTPUT
jump: DSCP
table: mangle
set_dscp_mark: 8
protocol: tcp
- name: Tag all outbound tcp packets with DSCP DiffServ class CS1
iptables:
chain: OUTPUT
jump: DSCP
table: mangle
set_dscp_mark_class: CS1
protocol: tcp
- name: Insert a rule on line 5
iptables:
chain: INPUT
protocol: tcp
destination_port: 8080
jump: ACCEPT
action: insert
rule_num: 5
- name: Set the policy for the INPUT chain to DROP
iptables:
chain: INPUT
policy: DROP
- name: Reject tcp with tcp-reset
iptables:
chain: INPUT
protocol: tcp
reject_with: tcp-reset
ip_version: ipv4
- name: Set tcp flags
iptables:
chain: OUTPUT
jump: DROP
protocol: tcp
tcp_flags:
flags: ALL
flags_set:
- ACK
- RST
- SYN
- FIN
- name: Iptables flush filter
iptables:
chain: "{{ item }}"
flush: yes
with_items: [ 'INPUT', 'FORWARD', 'OUTPUT' ]
- name: Iptables flush nat
iptables:
table: nat
chain: '{{ item }}'
flush: yes
with_items: [ 'INPUT', 'OUTPUT', 'PREROUTING', 'POSTROUTING' ]
- name: Log packets arriving into an user-defined chain
iptables:
chain: LOGGING
action: append
state: present
limit: 2/second
limit_burst: 20
log_prefix: "IPTABLES:INFO: "
log_level: info
'''
import re
from distutils.version import LooseVersion
from ansible.module_utils.basic import AnsibleModule
IPTABLES_WAIT_SUPPORT_ADDED = '1.4.20'
IPTABLES_WAIT_WITH_SECONDS_SUPPORT_ADDED = '1.6.0'
BINS = dict(
ipv4='iptables',
ipv6='ip6tables',
)
ICMP_TYPE_OPTIONS = dict(
ipv4='--icmp-type',
ipv6='--icmpv6-type',
)
def append_param(rule, param, flag, is_list):
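# Lists are flattened by recursing once per item. A value with a leading
# '!' is emitted as a standalone '!' argument placed before the flag,
# matching iptables' negation syntax (e.g. '! -p tcp').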
if is_list:
for item in param:
append_param(rule, item, flag, False)
else:
if param is not None:
if param[0] == '!':
rule.extend(['!', flag, param[1:]])
else:
rule.extend([flag, param])
def append_tcp_flags(rule, param, flag):
if param:
if 'flags' in param and 'flags_set' in param:
rule.extend([flag, ','.join(param['flags']), ','.join(param['flags_set'])])
def append_match_flag(rule, param, flag, negatable):
if param == 'match':
rule.extend([flag])
elif negatable and param == 'negate':
rule.extend(['!', flag])
def append_csv(rule, param, flag):
if param:
rule.extend([flag, ','.join(param)])
def append_match(rule, param, match):
if param:
rule.extend(['-m', match])
def append_jump(rule, param, jump):
if param:
rule.extend(['-j', jump])
def append_wait(rule, param, flag):
if param:
rule.extend([flag, param])
def construct_rule(params):
rule = []
append_wait(rule, params['wait'], '-w')
append_param(rule, params['protocol'], '-p', False)
append_param(rule, params['source'], '-s', False)
append_param(rule, params['destination'], '-d', False)
append_param(rule, params['match'], '-m', True)
append_tcp_flags(rule, params['tcp_flags'], '--tcp-flags')
append_param(rule, params['jump'], '-j', False)
if params.get('jump') and params['jump'].lower() == 'tee':
append_param(rule, params['gateway'], '--gateway', False)
append_param(rule, params['log_prefix'], '--log-prefix', False)
append_param(rule, params['log_level'], '--log-level', False)
append_param(rule, params['to_destination'], '--to-destination', False)
append_param(rule, params['to_source'], '--to-source', False)
append_param(rule, params['goto'], '-g', False)
append_param(rule, params['in_interface'], '-i', False)
append_param(rule, params['out_interface'], '-o', False)
append_param(rule, params['fragment'], '-f', False)
append_param(rule, params['set_counters'], '-c', False)
append_param(rule, params['source_port'], '--source-port', False)
append_param(rule, params['destination_port'], '--destination-port', False)
append_param(rule, params['to_ports'], '--to-ports', False)
append_param(rule, params['set_dscp_mark'], '--set-dscp', False)
append_param(
rule,
params['set_dscp_mark_class'],
'--set-dscp-class',
False)
append_match_flag(rule, params['syn'], '--syn', True)
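# Note: the comment match is appended here, before the conntrack/state
# options below, so '/* comment */' renders mid-rule in 'iptables -L'
# output, which is the behaviour this issue describes.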
append_match(rule, params['comment'], 'comment')
append_param(rule, params['comment'], '--comment', False)
if 'conntrack' in params['match']:
append_csv(rule, params['ctstate'], '--ctstate')
elif 'state' in params['match']:
append_csv(rule, params['ctstate'], '--state')
elif params['ctstate']:
append_match(rule, params['ctstate'], 'conntrack')
append_csv(rule, params['ctstate'], '--ctstate')
if 'iprange' in params['match']:
append_param(rule, params['src_range'], '--src-range', False)
append_param(rule, params['dst_range'], '--dst-range', False)
elif params['src_range'] or params['dst_range']:
append_match(rule, params['src_range'] or params['dst_range'], 'iprange')
append_param(rule, params['src_range'], '--src-range', False)
append_param(rule, params['dst_range'], '--dst-range', False)
append_match(rule, params['limit'] or params['limit_burst'], 'limit')
append_param(rule, params['limit'], '--limit', False)
append_param(rule, params['limit_burst'], '--limit-burst', False)
append_match(rule, params['uid_owner'], 'owner')
append_match_flag(rule, params['uid_owner'], '--uid-owner', True)
append_param(rule, params['uid_owner'], '--uid-owner', False)
append_match(rule, params['gid_owner'], 'owner')
append_match_flag(rule, params['gid_owner'], '--gid-owner', True)
append_param(rule, params['gid_owner'], '--gid-owner', False)
if params['jump'] is None:
append_jump(rule, params['reject_with'], 'REJECT')
append_param(rule, params['reject_with'], '--reject-with', False)
append_param(
rule,
params['icmp_type'],
ICMP_TYPE_OPTIONS[params['ip_version']],
False)
return rule
def push_arguments(iptables_path, action, params, make_rule=True):
cmd = [iptables_path]
cmd.extend(['-t', params['table']])
cmd.extend([action, params['chain']])
if action == '-I' and params['rule_num']:
cmd.extend([params['rule_num']])
if make_rule:
cmd.extend(construct_rule(params))
return cmd
def check_present(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-C', params)
rc, _, __ = module.run_command(cmd, check_rc=False)
return (rc == 0)
def append_rule(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-A', params)
module.run_command(cmd, check_rc=True)
def insert_rule(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-I', params)
module.run_command(cmd, check_rc=True)
def remove_rule(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-D', params)
module.run_command(cmd, check_rc=True)
def flush_table(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-F', params, make_rule=False)
module.run_command(cmd, check_rc=True)
def set_chain_policy(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-P', params, make_rule=False)
cmd.append(params['policy'])
module.run_command(cmd, check_rc=True)
def get_chain_policy(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-L', params)
rc, out, _ = module.run_command(cmd, check_rc=True)
chain_header = out.split("\n")[0]
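# The first line of 'iptables -L <chain>' looks like
# 'Chain INPUT (policy ACCEPT)'; extract the policy word from the
# parentheses.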
result = re.search(r'\(policy ([A-Z]+)\)', chain_header)
if result:
return result.group(1)
return None
def get_iptables_version(iptables_path, module):
cmd = [iptables_path, '--version']
rc, out, _ = module.run_command(cmd, check_rc=True)
return out.split('v')[1].rstrip('\n')
def main():
module = AnsibleModule(
supports_check_mode=True,
argument_spec=dict(
table=dict(type='str', default='filter', choices=['filter', 'nat', 'mangle', 'raw', 'security']),
state=dict(type='str', default='present', choices=['absent', 'present']),
action=dict(type='str', default='append', choices=['append', 'insert']),
ip_version=dict(type='str', default='ipv4', choices=['ipv4', 'ipv6']),
chain=dict(type='str'),
rule_num=dict(type='str'),
protocol=dict(type='str'),
wait=dict(type='str'),
source=dict(type='str'),
to_source=dict(type='str'),
destination=dict(type='str'),
to_destination=dict(type='str'),
match=dict(type='list', elements='str', default=[]),
tcp_flags=dict(type='dict',
options=dict(
flags=dict(type='list', elements='str'),
flags_set=dict(type='list', elements='str'))
),
jump=dict(type='str'),
gateway=dict(type='str'),
log_prefix=dict(type='str'),
log_level=dict(type='str',
choices=['0', '1', '2', '3', '4', '5', '6', '7',
'emerg', 'alert', 'crit', 'error',
'warning', 'notice', 'info', 'debug'],
default=None,
),
goto=dict(type='str'),
in_interface=dict(type='str'),
out_interface=dict(type='str'),
fragment=dict(type='str'),
set_counters=dict(type='str'),
source_port=dict(type='str'),
destination_port=dict(type='str'),
to_ports=dict(type='str'),
set_dscp_mark=dict(type='str'),
set_dscp_mark_class=dict(type='str'),
comment=dict(type='str'),
ctstate=dict(type='list', elements='str', default=[]),
src_range=dict(type='str'),
dst_range=dict(type='str'),
limit=dict(type='str'),
limit_burst=dict(type='str'),
uid_owner=dict(type='str'),
gid_owner=dict(type='str'),
reject_with=dict(type='str'),
icmp_type=dict(type='str'),
syn=dict(type='str', default='ignore', choices=['ignore', 'match', 'negate']),
flush=dict(type='bool', default=False),
policy=dict(type='str', choices=['ACCEPT', 'DROP', 'QUEUE', 'RETURN']),
),
mutually_exclusive=(
['set_dscp_mark', 'set_dscp_mark_class'],
['flush', 'policy'],
),
required_if=[
['jump', 'TEE', ['gateway']],
['jump', 'tee', ['gateway']],
]
)
args = dict(
changed=False,
failed=False,
ip_version=module.params['ip_version'],
table=module.params['table'],
chain=module.params['chain'],
flush=module.params['flush'],
rule=' '.join(construct_rule(module.params)),
state=module.params['state'],
)
ip_version = module.params['ip_version']
iptables_path = module.get_bin_path(BINS[ip_version], True)
# Check if chain option is required
if args['flush'] is False and args['chain'] is None:
module.fail_json(msg="Either chain or flush parameter must be specified.")
if module.params.get('log_prefix', None) or module.params.get('log_level', None):
if module.params['jump'] is None:
module.params['jump'] = 'LOG'
elif module.params['jump'] != 'LOG':
module.fail_json(msg="Logging options can only be used with the LOG jump target.")
# Check if wait option is supported
iptables_version = LooseVersion(get_iptables_version(iptables_path, module))
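# iptables 1.4.20 introduced -w, but only 1.6.0 and later accept a
# seconds value with it; clear the user-supplied wait value when the
# installed version cannot honour it.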
if iptables_version >= LooseVersion(IPTABLES_WAIT_SUPPORT_ADDED):
if iptables_version < LooseVersion(IPTABLES_WAIT_WITH_SECONDS_SUPPORT_ADDED):
module.params['wait'] = ''
else:
module.params['wait'] = None
# Flush the table
if args['flush'] is True:
args['changed'] = True
if not module.check_mode:
flush_table(iptables_path, module, module.params)
# Set the policy
elif module.params['policy']:
current_policy = get_chain_policy(iptables_path, module, module.params)
if not current_policy:
module.fail_json(msg='Can\'t detect current policy')
changed = current_policy != module.params['policy']
args['changed'] = changed
if changed and not module.check_mode:
set_chain_policy(iptables_path, module, module.params)
else:
insert = (module.params['action'] == 'insert')
rule_is_present = check_present(iptables_path, module, module.params)
should_be_present = (args['state'] == 'present')
# Check if target is up to date
args['changed'] = (rule_is_present != should_be_present)
if args['changed'] is False:
# Target is already up to date
module.exit_json(**args)
# Check only; don't modify
if not module.check_mode:
if should_be_present:
if insert:
insert_rule(iptables_path, module, module.params)
else:
append_rule(iptables_path, module, module.params)
else:
remove_rule(iptables_path, module, module.params)
module.exit_json(**args)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 71,444 |
iptables comment parameter incorrectly positioned in iptables
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
When using the **comment** parameter with the iptables module, the comment is not positioned as the right-most element of the resulting iptables rule.
The output currently looks quite messy.
##### ISSUE TYPE
- Cosmetic Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
**iptables** module, **comment** parameter
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.9
config file = /home/me/project/ansible.cfg
configured module search path = ['/home/me/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/me/vConstruction/lib/python3.6/site-packages/ansible
executable location = /home/me/vConstruction/bin/ansible
python version = 3.6.9 (default, Jul 17 2020, 12:50:27) [GCC 8.4.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_CALLBACK_WHITELIST(/home/me/project/ansible.cfg) = ['profile_tasks']
DEFAULT_HOST_LIST(/home/me/project/ansible.cfg) = ['/home/me/project/inventory']
DEPRECATION_WARNINGS(/home/me/project/ansible.cfg) = False
HOST_KEY_CHECKING(/home/me/project/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Host where I use Ansible: Ubuntu 18.04 via WSL 2.
Target VM where the iptables rules are being created: Ubuntu 18.04.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
It can be reproduced by using the following tasks and running `iptables -L` on the target machine.
I added the DNS task as well, because that one looks correct, though I suspect that is only because **ctstate** is not specified.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Create HTTPS rule
iptables:
chain: INPUT
in_interface: eth0
protocol: tcp
destination_port: https
ctstate: NEW
jump: ACCEPT
comment: Accept new inbound HTTPS connections to eth0 interface
- name: Create DNS rule
iptables:
chain: INPUT
protocol: udp
destination_port: domain
jump: ACCEPT
comment: Accept inbound DNS connections
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
```bash
ACCEPT tcp -- anywhere anywhere tcp dpt:https ctstate NEW /* Accept new inbound HTTPS connections to eth0 interface */
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```bash
ACCEPT tcp -- anywhere anywhere tcp dpt:https /* Accept new inbound HTTPS connections to eth0 interface */ ctstate NEW
ACCEPT udp -- anywhere anywhere udp dpt:domain /* Accept inbound DNS connections */
```
|
https://github.com/ansible/ansible/issues/71444
|
https://github.com/ansible/ansible/pull/71496
|
11b7091c84ed6cf9576f319118b88f2a81894764
|
c1da427a5ec678f052fd2cd4885840c4d761946a
| 2020-08-25T16:48:26Z |
python
| 2020-11-09T18:40:55Z |
test/units/modules/test_iptables.py
|
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from units.compat.mock import patch
from ansible.module_utils import basic
from ansible.modules import iptables
from units.modules.utils import AnsibleExitJson, AnsibleFailJson, ModuleTestCase, set_module_args
def get_bin_path(*args, **kwargs):
return "/sbin/iptables"
def get_iptables_version(iptables_path, module):
return "1.8.2"
class TestIptables(ModuleTestCase):
def setUp(self):
super(TestIptables, self).setUp()
self.mock_get_bin_path = patch.object(basic.AnsibleModule, 'get_bin_path', get_bin_path)
self.mock_get_bin_path.start()
self.addCleanup(self.mock_get_bin_path.stop) # ensure that the patching is 'undone'
self.mock_get_iptables_version = patch.object(iptables, 'get_iptables_version', get_iptables_version)
self.mock_get_iptables_version.start()
self.addCleanup(self.mock_get_iptables_version.stop) # ensure that the patching is 'undone'
def test_without_required_parameters(self):
"""Failure must occurs when all parameters are missing"""
with self.assertRaises(AnsibleFailJson):
set_module_args({})
iptables.main()
def test_flush_table_without_chain(self):
"""Test flush without chain, flush the table"""
set_module_args({
'flush': True,
})
with patch.object(basic.AnsibleModule, 'run_command') as run_command:
run_command.return_value = 0, '', '' # successful execution, no output
with self.assertRaises(AnsibleExitJson) as result:
iptables.main()
self.assertTrue(result.exception.args[0]['changed'])
self.assertEqual(run_command.call_count, 1)
self.assertEqual(run_command.call_args[0][0][0], '/sbin/iptables')
self.assertEqual(run_command.call_args[0][0][1], '-t')
self.assertEqual(run_command.call_args[0][0][2], 'filter')
self.assertEqual(run_command.call_args[0][0][3], '-F')
def test_flush_table_check_true(self):
"""Test flush without parameters and check == true"""
set_module_args({
'flush': True,
'_ansible_check_mode': True,
})
with patch.object(basic.AnsibleModule, 'run_command') as run_command:
run_command.return_value = 0, '', '' # successful execution, no output
with self.assertRaises(AnsibleExitJson) as result:
iptables.main()
self.assertTrue(result.exception.args[0]['changed'])
self.assertEqual(run_command.call_count, 0)
# TODO ADD test flush table nat
# TODO ADD test flush with chain
# TODO ADD test flush with chain and table nat
def test_policy_table(self):
"""Test change policy of a chain"""
set_module_args({
'policy': 'ACCEPT',
'chain': 'INPUT',
})
commands_results = [
(0, 'Chain INPUT (policy DROP)\n', ''),
(0, '', '')
]
with patch.object(basic.AnsibleModule, 'run_command') as run_command:
run_command.side_effect = commands_results
with self.assertRaises(AnsibleExitJson) as result:
iptables.main()
self.assertTrue(result.exception.args[0]['changed'])
self.assertEqual(run_command.call_count, 2)
self.assertEqual(run_command.call_args_list[0][0][0], [
'/sbin/iptables',
'-t',
'filter',
'-L',
'INPUT',
])
self.assertEqual(run_command.call_args_list[1][0][0], [
'/sbin/iptables',
'-t',
'filter',
'-P',
'INPUT',
'ACCEPT',
])
def test_policy_table_no_change(self):
"""Test don't change policy of a chain if the policy is right"""
set_module_args({
'policy': 'ACCEPT',
'chain': 'INPUT',
})
commands_results = [
(0, 'Chain INPUT (policy ACCEPT)\n', ''),
(0, '', '')
]
with patch.object(basic.AnsibleModule, 'run_command') as run_command:
run_command.side_effect = commands_results
with self.assertRaises(AnsibleExitJson) as result:
iptables.main()
self.assertFalse(result.exception.args[0]['changed'])
self.assertEqual(run_command.call_count, 1)
self.assertEqual(run_command.call_args_list[0][0][0], [
'/sbin/iptables',
'-t',
'filter',
'-L',
'INPUT',
])
def test_policy_table_changed_false(self):
"""Test flush without parameters and change == false"""
set_module_args({
'policy': 'ACCEPT',
'chain': 'INPUT',
'_ansible_check_mode': True,
})
commands_results = [
(0, 'Chain INPUT (policy DROP)\n', ''),
]
with patch.object(basic.AnsibleModule, 'run_command') as run_command:
run_command.side_effect = commands_results
with self.assertRaises(AnsibleExitJson) as result:
iptables.main()
self.assertTrue(result.exception.args[0]['changed'])
self.assertEqual(run_command.call_count, 1)
self.assertEqual(run_command.call_args_list[0][0][0], [
'/sbin/iptables',
'-t',
'filter',
'-L',
'INPUT',
])
# TODO ADD test policy without chain fail
# TODO ADD test policy with chain don't exists
# TODO ADD test policy with wrong choice fail
def test_insert_rule_change_false(self):
"""Test flush without parameters"""
set_module_args({
'chain': 'OUTPUT',
'source': '1.2.3.4/32',
'destination': '7.8.9.10/42',
'jump': 'ACCEPT',
'action': 'insert',
'_ansible_check_mode': True,
})
commands_results = [
(1, '', ''),
(0, '', '')
]
with patch.object(basic.AnsibleModule, 'run_command') as run_command:
run_command.side_effect = commands_results
with self.assertRaises(AnsibleExitJson) as result:
iptables.main()
self.assertTrue(result.exception.args[0]['changed'])
self.assertEqual(run_command.call_count, 1)
self.assertEqual(run_command.call_args_list[0][0][0], [
'/sbin/iptables',
'-t',
'filter',
'-C',
'OUTPUT',
'-s',
'1.2.3.4/32',
'-d',
'7.8.9.10/42',
'-j',
'ACCEPT'
])
def test_insert_rule(self):
"""Test flush without parameters"""
set_module_args({
'chain': 'OUTPUT',
'source': '1.2.3.4/32',
'destination': '7.8.9.10/42',
'jump': 'ACCEPT',
'action': 'insert'
})
commands_results = [
(1, '', ''),
(0, '', '')
]
with patch.object(basic.AnsibleModule, 'run_command') as run_command:
run_command.side_effect = commands_results
with self.assertRaises(AnsibleExitJson) as result:
iptables.main()
self.assertTrue(result.exception.args[0]['changed'])
self.assertEqual(run_command.call_count, 2)
self.assertEqual(run_command.call_args_list[0][0][0], [
'/sbin/iptables',
'-t',
'filter',
'-C',
'OUTPUT',
'-s',
'1.2.3.4/32',
'-d',
'7.8.9.10/42',
'-j',
'ACCEPT'
])
self.assertEqual(run_command.call_args_list[1][0][0], [
'/sbin/iptables',
'-t',
'filter',
'-I',
'OUTPUT',
'-s',
'1.2.3.4/32',
'-d',
'7.8.9.10/42',
'-j',
'ACCEPT'
])
def test_append_rule_check_mode(self):
"""Test append a redirection rule in check mode"""
set_module_args({
'chain': 'PREROUTING',
'source': '1.2.3.4/32',
'destination': '7.8.9.10/42',
'jump': 'REDIRECT',
'table': 'nat',
'to_destination': '5.5.5.5/32',
'protocol': 'udp',
'destination_port': '22',
'to_ports': '8600',
'_ansible_check_mode': True,
})
commands_results = [
(1, '', ''),
]
with patch.object(basic.AnsibleModule, 'run_command') as run_command:
run_command.side_effect = commands_results
with self.assertRaises(AnsibleExitJson) as result:
iptables.main()
self.assertTrue(result.exception.args[0]['changed'])
self.assertEqual(run_command.call_count, 1)
self.assertEqual(run_command.call_args_list[0][0][0], [
'/sbin/iptables',
'-t',
'nat',
'-C',
'PREROUTING',
'-p',
'udp',
'-s',
'1.2.3.4/32',
'-d',
'7.8.9.10/42',
'-j',
'REDIRECT',
'--to-destination',
'5.5.5.5/32',
'--destination-port',
'22',
'--to-ports',
'8600'
])
def test_append_rule(self):
"""Test append a redirection rule"""
set_module_args({
'chain': 'PREROUTING',
'source': '1.2.3.4/32',
'destination': '7.8.9.10/42',
'jump': 'REDIRECT',
'table': 'nat',
'to_destination': '5.5.5.5/32',
'protocol': 'udp',
'destination_port': '22',
'to_ports': '8600'
})
commands_results = [
(1, '', ''),
(0, '', '')
]
with patch.object(basic.AnsibleModule, 'run_command') as run_command:
run_command.side_effect = commands_results
with self.assertRaises(AnsibleExitJson) as result:
iptables.main()
self.assertTrue(result.exception.args[0]['changed'])
self.assertEqual(run_command.call_count, 2)
self.assertEqual(run_command.call_args_list[0][0][0], [
'/sbin/iptables',
'-t',
'nat',
'-C',
'PREROUTING',
'-p',
'udp',
'-s',
'1.2.3.4/32',
'-d',
'7.8.9.10/42',
'-j',
'REDIRECT',
'--to-destination',
'5.5.5.5/32',
'--destination-port',
'22',
'--to-ports',
'8600'
])
self.assertEqual(run_command.call_args_list[1][0][0], [
'/sbin/iptables',
'-t',
'nat',
'-A',
'PREROUTING',
'-p',
'udp',
'-s',
'1.2.3.4/32',
'-d',
'7.8.9.10/42',
'-j',
'REDIRECT',
'--to-destination',
'5.5.5.5/32',
'--destination-port',
'22',
'--to-ports',
'8600'
])
def test_remove_rule(self):
"""Test flush without parameters"""
set_module_args({
'chain': 'PREROUTING',
'source': '1.2.3.4/32',
'destination': '7.8.9.10/42',
'jump': 'SNAT',
'table': 'nat',
'to_source': '5.5.5.5/32',
'protocol': 'udp',
'source_port': '22',
'to_ports': '8600',
'state': 'absent',
'in_interface': 'eth0',
'out_interface': 'eth1',
'comment': 'this is a comment'
})
commands_results = [
(0, '', ''),
(0, '', ''),
]
with patch.object(basic.AnsibleModule, 'run_command') as run_command:
run_command.side_effect = commands_results
with self.assertRaises(AnsibleExitJson) as result:
iptables.main()
self.assertTrue(result.exception.args[0]['changed'])
self.assertEqual(run_command.call_count, 2)
self.assertEqual(run_command.call_args_list[0][0][0], [
'/sbin/iptables',
'-t',
'nat',
'-C',
'PREROUTING',
'-p',
'udp',
'-s',
'1.2.3.4/32',
'-d',
'7.8.9.10/42',
'-j',
'SNAT',
'--to-source',
'5.5.5.5/32',
'-i',
'eth0',
'-o',
'eth1',
'--source-port',
'22',
'--to-ports',
'8600',
'-m',
'comment',
'--comment',
'this is a comment'
])
self.assertEqual(run_command.call_args_list[1][0][0], [
'/sbin/iptables',
'-t',
'nat',
'-D',
'PREROUTING',
'-p',
'udp',
'-s',
'1.2.3.4/32',
'-d',
'7.8.9.10/42',
'-j',
'SNAT',
'--to-source',
'5.5.5.5/32',
'-i',
'eth0',
'-o',
'eth1',
'--source-port',
'22',
'--to-ports',
'8600',
'-m',
'comment',
'--comment',
'this is a comment'
])
def test_remove_rule_check_mode(self):
"""Test flush without parameters check mode"""
set_module_args({
'chain': 'PREROUTING',
'source': '1.2.3.4/32',
'destination': '7.8.9.10/42',
'jump': 'SNAT',
'table': 'nat',
'to_source': '5.5.5.5/32',
'protocol': 'udp',
'source_port': '22',
'to_ports': '8600',
'state': 'absent',
'in_interface': 'eth0',
'out_interface': 'eth1',
'comment': 'this is a comment',
'_ansible_check_mode': True,
})
commands_results = [
(0, '', ''),
]
with patch.object(basic.AnsibleModule, 'run_command') as run_command:
run_command.side_effect = commands_results
with self.assertRaises(AnsibleExitJson) as result:
iptables.main()
self.assertTrue(result.exception.args[0]['changed'])
self.assertEqual(run_command.call_count, 1)
self.assertEqual(run_command.call_args_list[0][0][0], [
'/sbin/iptables',
'-t',
'nat',
'-C',
'PREROUTING',
'-p',
'udp',
'-s',
'1.2.3.4/32',
'-d',
'7.8.9.10/42',
'-j',
'SNAT',
'--to-source',
'5.5.5.5/32',
'-i',
'eth0',
'-o',
'eth1',
'--source-port',
'22',
'--to-ports',
'8600',
'-m',
'comment',
'--comment',
'this is a comment'
])
def test_insert_with_reject(self):
""" Using reject_with with a previously defined jump: REJECT results in two Jump statements #18988 """
set_module_args({
'chain': 'INPUT',
'protocol': 'tcp',
'reject_with': 'tcp-reset',
'ip_version': 'ipv4',
})
commands_results = [
(0, '', ''),
]
with patch.object(basic.AnsibleModule, 'run_command') as run_command:
run_command.side_effect = commands_results
with self.assertRaises(AnsibleExitJson) as result:
iptables.main()
self.assertTrue(result.exception.args[0]['changed'])
self.assertEqual(run_command.call_count, 1)
self.assertEqual(run_command.call_args_list[0][0][0], [
'/sbin/iptables',
'-t',
'filter',
'-C',
'INPUT',
'-p',
'tcp',
'-j',
'REJECT',
'--reject-with',
'tcp-reset',
])
def test_insert_jump_reject_with_reject(self):
""" Using reject_with with a previously defined jump: REJECT results in two Jump statements #18988 """
set_module_args({
'chain': 'INPUT',
'protocol': 'tcp',
'jump': 'REJECT',
'reject_with': 'tcp-reset',
'ip_version': 'ipv4',
})
commands_results = [
(0, '', ''),
]
with patch.object(basic.AnsibleModule, 'run_command') as run_command:
run_command.side_effect = commands_results
with self.assertRaises(AnsibleExitJson) as result:
iptables.main()
self.assertTrue(result.exception.args[0]['changed'])
self.assertEqual(run_command.call_count, 1)
self.assertEqual(run_command.call_args_list[0][0][0], [
'/sbin/iptables',
'-t',
'filter',
'-C',
'INPUT',
'-p',
'tcp',
'-j',
'REJECT',
'--reject-with',
'tcp-reset',
])
def test_jump_tee_gateway_negative(self):
""" Missing gateway when JUMP is set to TEE """
set_module_args({
'table': 'mangle',
'chain': 'PREROUTING',
'in_interface': 'eth0',
'protocol': 'udp',
'match': 'state',
'jump': 'TEE',
'ctstate': ['NEW'],
'destination_port': '9521',
'destination': '127.0.0.1'
})
with self.assertRaises(AnsibleFailJson) as e:
iptables.main()
self.assertTrue(e.exception.args[0]['failed'])
self.assertEqual(e.exception.args[0]['msg'], 'jump is TEE but all of the following are missing: gateway')
def test_jump_tee_gateway(self):
""" Using gateway when JUMP is set to TEE """
set_module_args({
'table': 'mangle',
'chain': 'PREROUTING',
'in_interface': 'eth0',
'protocol': 'udp',
'match': 'state',
'jump': 'TEE',
'ctstate': ['NEW'],
'destination_port': '9521',
'gateway': '192.168.10.1',
'destination': '127.0.0.1'
})
commands_results = [
(0, '', ''),
]
with patch.object(basic.AnsibleModule, 'run_command') as run_command:
run_command.side_effect = commands_results
with self.assertRaises(AnsibleExitJson) as result:
iptables.main()
self.assertTrue(result.exception.args[0]['changed'])
self.assertEqual(run_command.call_count, 1)
self.assertEqual(run_command.call_args_list[0][0][0], [
'/sbin/iptables',
'-t', 'mangle',
'-C', 'PREROUTING',
'-p', 'udp',
'-d', '127.0.0.1',
'-m', 'state',
'-j', 'TEE',
'--gateway', '192.168.10.1',
'-i', 'eth0',
'--destination-port', '9521',
'--state', 'NEW'
])
def test_tcp_flags(self):
""" Test various ways of inputting tcp_flags """
args = [
{
'chain': 'OUTPUT',
'protocol': 'tcp',
'jump': 'DROP',
'tcp_flags': 'flags=ALL flags_set="ACK,RST,SYN,FIN"'
},
{
'chain': 'OUTPUT',
'protocol': 'tcp',
'jump': 'DROP',
'tcp_flags': {
'flags': 'ALL',
'flags_set': 'ACK,RST,SYN,FIN'
}
},
{
'chain': 'OUTPUT',
'protocol': 'tcp',
'jump': 'DROP',
'tcp_flags': {
'flags': ['ALL'],
'flags_set': ['ACK', 'RST', 'SYN', 'FIN']
}
},
]
for item in args:
set_module_args(item)
commands_results = [
(0, '', ''),
]
with patch.object(basic.AnsibleModule, 'run_command') as run_command:
run_command.side_effect = commands_results
with self.assertRaises(AnsibleExitJson) as result:
iptables.main()
self.assertTrue(result.exception.args[0]['changed'])
self.assertEqual(run_command.call_count, 1)
self.assertEqual(run_command.call_args_list[0][0][0], [
'/sbin/iptables',
'-t',
'filter',
'-C',
'OUTPUT',
'-p',
'tcp',
'--tcp-flags',
'ALL',
'ACK,RST,SYN,FIN',
'-j',
'DROP'
])
def test_log_level(self):
""" Test various ways of log level flag """
log_levels = ['0', '1', '2', '3', '4', '5', '6', '7',
'emerg', 'alert', 'crit', 'error', 'warning', 'notice', 'info', 'debug']
for log_lvl in log_levels:
set_module_args({
'chain': 'INPUT',
'jump': 'LOG',
'log_level': log_lvl,
'source': '1.2.3.4/32',
'log_prefix': '** DROP-this_ip **'
})
commands_results = [
(0, '', ''),
]
with patch.object(basic.AnsibleModule, 'run_command') as run_command:
run_command.side_effect = commands_results
with self.assertRaises(AnsibleExitJson) as result:
iptables.main()
self.assertTrue(result.exception.args[0]['changed'])
self.assertEqual(run_command.call_count, 1)
self.assertEqual(run_command.call_args_list[0][0][0], [
'/sbin/iptables',
'-t', 'filter',
'-C', 'INPUT',
'-s', '1.2.3.4/32',
'-j', 'LOG',
'--log-prefix', '** DROP-this_ip **',
'--log-level', log_lvl
])
def test_iprange(self):
""" Test iprange module with its flags src_range and dst_range """
set_module_args({
'chain': 'INPUT',
'match': ['iprange'],
'src_range': '192.168.1.100-192.168.1.199',
'jump': 'ACCEPT'
})
commands_results = [
(0, '', ''),
]
with patch.object(basic.AnsibleModule, 'run_command') as run_command:
run_command.side_effect = commands_results
with self.assertRaises(AnsibleExitJson) as result:
iptables.main()
self.assertTrue(result.exception.args[0]['changed'])
self.assertEqual(run_command.call_count, 1)
self.assertEqual(run_command.call_args_list[0][0][0], [
'/sbin/iptables',
'-t',
'filter',
'-C',
'INPUT',
'-m',
'iprange',
'-j',
'ACCEPT',
'--src-range',
'192.168.1.100-192.168.1.199',
])
set_module_args({
'chain': 'INPUT',
'src_range': '192.168.1.100-192.168.1.199',
'dst_range': '10.0.0.50-10.0.0.100',
'jump': 'ACCEPT'
})
commands_results = [
(0, '', ''),
]
with patch.object(basic.AnsibleModule, 'run_command') as run_command:
run_command.side_effect = commands_results
with self.assertRaises(AnsibleExitJson) as result:
iptables.main()
self.assertTrue(result.exception.args[0]['changed'])
self.assertEqual(run_command.call_count, 1)
self.assertEqual(run_command.call_args_list[0][0][0], [
'/sbin/iptables',
'-t',
'filter',
'-C',
'INPUT',
'-j',
'ACCEPT',
'-m',
'iprange',
'--src-range',
'192.168.1.100-192.168.1.199',
'--dst-range',
'10.0.0.50-10.0.0.100'
])
set_module_args({
'chain': 'INPUT',
'dst_range': '10.0.0.50-10.0.0.100',
'jump': 'ACCEPT'
})
commands_results = [
(0, '', ''),
]
with patch.object(basic.AnsibleModule, 'run_command') as run_command:
run_command.side_effect = commands_results
with self.assertRaises(AnsibleExitJson) as result:
iptables.main()
self.assertTrue(result.exception.args[0]['changed'])
self.assertEqual(run_command.call_count, 1)
self.assertEqual(run_command.call_args_list[0][0][0], [
'/sbin/iptables',
'-t',
'filter',
'-C',
'INPUT',
'-j',
'ACCEPT',
'-m',
'iprange',
'--dst-range',
'10.0.0.50-10.0.0.100'
])
def test_insert_rule_with_wait(self):
"""Test flush without parameters"""
set_module_args({
'chain': 'OUTPUT',
'source': '1.2.3.4/32',
'destination': '7.8.9.10/42',
'jump': 'ACCEPT',
'action': 'insert',
'wait': '10'
})
commands_results = [
(0, '', ''),
]
with patch.object(basic.AnsibleModule, 'run_command') as run_command:
run_command.side_effect = commands_results
with self.assertRaises(AnsibleExitJson) as result:
iptables.main()
self.assertTrue(result.exception.args[0]['changed'])
self.assertEqual(run_command.call_count, 1)
self.assertEqual(run_command.call_args_list[0][0][0], [
'/sbin/iptables',
'-t',
'filter',
'-C',
'OUTPUT',
'-w',
'10',
'-s',
'1.2.3.4/32',
'-d',
'7.8.9.10/42',
'-j',
'ACCEPT'
])
|
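Every case in the iptables unit tests above follows the same stubbing pattern: `AnsibleModule.run_command` is patched and fed an ordered queue of `(rc, stdout, stderr)` tuples, so a non-zero rc on the `-C` check call drives the module into the follow-up `-I`/`-A` call. A minimal, self-contained sketch of that pattern using plain `unittest.mock` (a stand-in class, not the test helpers the suite itself uses):
```python
from unittest.mock import patch

class FakeModule:
    def run_command(self, cmd):  # stands in for AnsibleModule.run_command
        raise NotImplementedError

mod = FakeModule()
with patch.object(FakeModule, 'run_command') as run_command:
    # One tuple per expected call, consumed in order:
    # rc=1 on the '-C' check means "rule missing", rc=0 on '-I' means inserted.
    run_command.side_effect = [(1, '', ''), (0, '', '')]
    assert mod.run_command(['iptables', '-C', 'OUTPUT']) == (1, '', '')
    assert mod.run_command(['iptables', '-I', 'OUTPUT']) == (0, '', '')
    assert run_command.call_count == 2
```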
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,541 |
delegate_to: localhost crashes when remote_user is root
|
##### SUMMARY
See #72412.
Fixed by @bcoca's patch https://paste.debian.net/1170589/
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
core
##### ANSIBLE VERSION
```paste below
devel
```
##### STEPS TO REPRODUCE
```yaml
- hosts: remote_server
tasks:
- copy:
src: t.yml
dest: /tmp/t.yml
delegate_to: localhost
```
Connect to remote server with `root`.
##### EXPECTED RESULTS
Works.
##### ACTUAL RESULTS
```paste below
fatal: [remote-server]: UNREACHABLE! => {"changed": false, "msg": "Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \"` echo /root/.ansible/tmp `\"&& mkdir \"` echo /root/.ansible/tmp/ansible-tmp-1604856819.0247443-194052-180695334418942 `\" && echo ansible-tmp-1604856819.0247443-194052-180695334418942=\"` echo /root/.ansible/tmp/ansible-tmp-1604856819.0247443-194052-180695334418942 `\" ), exited with result 1", "unreachable": true}
```
|
https://github.com/ansible/ansible/issues/72541
|
https://github.com/ansible/ansible/pull/72543
|
de5858f48dc9e1ce9117034e0d7e76806f420ca8
|
aa4d53ccdfe9fb8f5a97e058703286adfbc91d08
| 2020-11-09T15:12:51Z |
python
| 2020-11-09T21:21:17Z |
changelogs/fragments/ensure_local_user_correctness.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,541 |
delegate_to: localhost crashes when remote_user is root
|
##### SUMMARY
See #72412.
Fixed by @bcoca's patch https://paste.debian.net/1170589/
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
core
##### ANSIBLE VERSION
```paste below
devel
```
##### STEPS TO REPRODUCE
```yaml
- hosts: remote_server
tasks:
- copy:
src: t.yml
dest: /tmp/t.yml
delegate_to: localhost
```
Connect to remote server with `root`.
##### EXPECTED RESULTS
Works.
##### ACTUAL RESULTS
```paste below
fatal: [remote-server]: UNREACHABLE! => {"changed": false, "msg": "Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \"` echo /root/.ansible/tmp `\"&& mkdir \"` echo /root/.ansible/tmp/ansible-tmp-1604856819.0247443-194052-180695334418942 `\" && echo ansible-tmp-1604856819.0247443-194052-180695334418942=\"` echo /root/.ansible/tmp/ansible-tmp-1604856819.0247443-194052-180695334418942 `\" ), exited with result 1", "unreachable": true}
```
|
https://github.com/ansible/ansible/issues/72541
|
https://github.com/ansible/ansible/pull/72543
|
de5858f48dc9e1ce9117034e0d7e76806f420ca8
|
aa4d53ccdfe9fb8f5a97e058703286adfbc91d08
| 2020-11-09T15:12:51Z |
python
| 2020-11-09T21:21:17Z |
lib/ansible/plugins/connection/local.py
|
# (c) 2012, Michael DeHaan <[email protected]>
# (c) 2015, 2017 Toshio Kuratomi <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
name: local
short_description: execute on controller
description:
- This connection plugin allows ansible to execute tasks on the Ansible 'controller' instead of on a remote host.
author: ansible (@core)
version_added: historical
notes:
- The remote user is ignored, the user with which the ansible CLI was executed is used instead.
'''
import os
import shutil
import subprocess
import fcntl
import getpass
import ansible.constants as C
from ansible.errors import AnsibleError, AnsibleFileNotFound
from ansible.module_utils.compat import selectors
from ansible.module_utils.six import text_type, binary_type
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.plugins.connection import ConnectionBase
from ansible.utils.display import Display
from ansible.utils.path import unfrackpath
display = Display()
class Connection(ConnectionBase):
''' Local based connections '''
transport = 'local'
has_pipelining = True
def __init__(self, *args, **kwargs):
super(Connection, self).__init__(*args, **kwargs)
self.cwd = None
def _connect(self):
''' connect to the local host; nothing to do here '''
# Because we haven't made any remote connection we're running as
# the local user, rather than as whatever is configured in remote_user.
self._play_context.remote_user = getpass.getuser()
if not self._connected:
display.vvv(u"ESTABLISH LOCAL CONNECTION FOR USER: {0}".format(self._play_context.remote_user), host=self._play_context.remote_addr)
self._connected = True
return self
def exec_command(self, cmd, in_data=None, sudoable=True):
''' run a command on the local host '''
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
display.debug("in local.exec_command()")
executable = C.DEFAULT_EXECUTABLE.split()[0] if C.DEFAULT_EXECUTABLE else None
if not os.path.exists(to_bytes(executable, errors='surrogate_or_strict')):
raise AnsibleError("failed to find the executable specified %s."
" Please verify if the executable exists and re-try." % executable)
display.vvv(u"EXEC {0}".format(to_text(cmd)), host=self._play_context.remote_addr)
display.debug("opening command with Popen()")
if isinstance(cmd, (text_type, binary_type)):
cmd = to_bytes(cmd)
else:
cmd = map(to_bytes, cmd)
p = subprocess.Popen(
cmd,
shell=isinstance(cmd, (text_type, binary_type)),
executable=executable,
cwd=self.cwd,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
)
display.debug("done running command with Popen()")
if self.become and self.become.expect_prompt() and sudoable:
fcntl.fcntl(p.stdout, fcntl.F_SETFL, fcntl.fcntl(p.stdout, fcntl.F_GETFL) | os.O_NONBLOCK)
fcntl.fcntl(p.stderr, fcntl.F_SETFL, fcntl.fcntl(p.stderr, fcntl.F_GETFL) | os.O_NONBLOCK)
selector = selectors.DefaultSelector()
selector.register(p.stdout, selectors.EVENT_READ)
selector.register(p.stderr, selectors.EVENT_READ)
become_output = b''
try:
while not self.become.check_success(become_output) and not self.become.check_password_prompt(become_output):
events = selector.select(self._play_context.timeout)
if not events:
stdout, stderr = p.communicate()
raise AnsibleError('timeout waiting for privilege escalation password prompt:\n' + to_native(become_output))
for key, event in events:
if key.fileobj == p.stdout:
chunk = p.stdout.read()
elif key.fileobj == p.stderr:
chunk = p.stderr.read()
if not chunk:
stdout, stderr = p.communicate()
raise AnsibleError('privilege output closed while waiting for password prompt:\n' + to_native(become_output))
become_output += chunk
finally:
selector.close()
if not self.become.check_success(become_output):
become_pass = self.become.get_option('become_pass', playcontext=self._play_context)
p.stdin.write(to_bytes(become_pass, errors='surrogate_or_strict') + b'\n')
fcntl.fcntl(p.stdout, fcntl.F_SETFL, fcntl.fcntl(p.stdout, fcntl.F_GETFL) & ~os.O_NONBLOCK)
fcntl.fcntl(p.stderr, fcntl.F_SETFL, fcntl.fcntl(p.stderr, fcntl.F_GETFL) & ~os.O_NONBLOCK)
display.debug("getting output with communicate()")
stdout, stderr = p.communicate(in_data)
display.debug("done communicating")
display.debug("done with local.exec_command()")
return (p.returncode, stdout, stderr)
def put_file(self, in_path, out_path):
''' transfer a file from local to local '''
super(Connection, self).put_file(in_path, out_path)
in_path = unfrackpath(in_path, basedir=self.cwd)
out_path = unfrackpath(out_path, basedir=self.cwd)
display.vvv(u"PUT {0} TO {1}".format(in_path, out_path), host=self._play_context.remote_addr)
if not os.path.exists(to_bytes(in_path, errors='surrogate_or_strict')):
raise AnsibleFileNotFound("file or module does not exist: {0}".format(to_native(in_path)))
try:
shutil.copyfile(to_bytes(in_path, errors='surrogate_or_strict'), to_bytes(out_path, errors='surrogate_or_strict'))
except shutil.Error:
raise AnsibleError("failed to copy: {0} and {1} are the same".format(to_native(in_path), to_native(out_path)))
except IOError as e:
raise AnsibleError("failed to transfer file to {0}: {1}".format(to_native(out_path), to_native(e)))
def fetch_file(self, in_path, out_path):
''' fetch a file from local to local -- for compatibility '''
super(Connection, self).fetch_file(in_path, out_path)
display.vvv(u"FETCH {0} TO {1}".format(in_path, out_path), host=self._play_context.remote_addr)
self.put_file(in_path, out_path)
def close(self):
''' terminate the connection; nothing to do here '''
self._connected = False
|
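The crash in this record stems from how the local connection handles the user: when the play connects to the remote host as root and a task is delegated to localhost, the controller still derives its temp directory from the inherited `root` remote_user. A small, self-contained illustration of that path derivation (a sketch of the failure mechanism only, not bcoca's actual patch):
```python
import getpass
import os.path

def remote_tmp_for(user):
    # Ansible expands '~/.ansible/tmp' relative to the connection user.
    return os.path.join(os.path.expanduser('~' + user), '.ansible', 'tmp')

# Inherited from the play: remote_user == 'root', so the local connection
# tries to mkdir under /root and fails for a non-root controller user.
print(remote_tmp_for('root'))
# What the local plugin should report instead: the invoking login user.
print(remote_tmp_for(getpass.getuser()))
```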
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,541 |
delegate_to: localhost crashes when remote_user is root
|
##### SUMMARY
See #72412.
Fixed by @bcoca's patch https://paste.debian.net/1170589/
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
core
##### ANSIBLE VERSION
```paste below
devel
```
##### STEPS TO REPRODUCE
```yaml
- hosts: remote_server
tasks:
- copy:
src: t.yml
dest: /tmp/t.yml
delegate_to: localhost
```
Connect to remote server with `root`.
##### EXPECTED RESULTS
Works.
##### ACTUAL RESULTS
```paste below
fatal: [remote-server]: UNREACHABLE! => {"changed": false, "msg": "Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \"` echo /root/.ansible/tmp `\"&& mkdir \"` echo /root/.ansible/tmp/ansible-tmp-1604856819.0247443-194052-180695334418942 `\" && echo ansible-tmp-1604856819.0247443-194052-180695334418942=\"` echo /root/.ansible/tmp/ansible-tmp-1604856819.0247443-194052-180695334418942 `\" ), exited with result 1", "unreachable": true}
```
|
https://github.com/ansible/ansible/issues/72541
|
https://github.com/ansible/ansible/pull/72543
|
de5858f48dc9e1ce9117034e0d7e76806f420ca8
|
aa4d53ccdfe9fb8f5a97e058703286adfbc91d08
| 2020-11-09T15:12:51Z |
python
| 2020-11-09T21:21:17Z |
test/integration/targets/delegate_to/delegate_local_from_root.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,541 |
delegate_to: localhost crashes when remote_user is root
|
##### SUMMARY
See #72412.
Fixed by @bcoca's patch https://paste.debian.net/1170589/
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
core
##### ANSIBLE VERSION
```paste below
devel
```
##### STEPS TO REPRODUCE
```yaml
- hosts: remote_server
tasks:
- copy:
src: t.yml
dest: /tmp/t.yml
delegate_to: localhost
```
Connect to remote server with `root`.
##### EXPECTED RESULTS
Works.
##### ACTUAL RESULTS
```paste below
fatal: [remote-server]: UNREACHABLE! => {"changed": false, "msg": "Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \"` echo /root/.ansible/tmp `\"&& mkdir \"` echo /root/.ansible/tmp/ansible-tmp-1604856819.0247443-194052-180695334418942 `\" && echo ansible-tmp-1604856819.0247443-194052-180695334418942=\"` echo /root/.ansible/tmp/ansible-tmp-1604856819.0247443-194052-180695334418942 `\" ), exited with result 1", "unreachable": true}
```
|
https://github.com/ansible/ansible/issues/72541
|
https://github.com/ansible/ansible/pull/72543
|
de5858f48dc9e1ce9117034e0d7e76806f420ca8
|
aa4d53ccdfe9fb8f5a97e058703286adfbc91d08
| 2020-11-09T15:12:51Z |
python
| 2020-11-09T21:21:17Z |
test/integration/targets/delegate_to/files/testfile
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,541 |
delegate_to: localhost crashes when remote_user is root
|
##### SUMMARY
See #72412.
Fixed by @bcoca's patch https://paste.debian.net/1170589/
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
core
##### ANSIBLE VERSION
```paste below
devel
```
##### STEPS TO REPRODUCE
```yaml
- hosts: remote_server
tasks:
- copy:
src: t.yml
dest: /tmp/t.yml
delegate_to: localhost
```
Connect to remote server with `root`.
##### EXPECTED RESULTS
Works.
##### ACTUAL RESULTS
```paste below
fatal: [remote-server]: UNREACHABLE! => {"changed": false, "msg": "Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \"` echo /root/.ansible/tmp `\"&& mkdir \"` echo /root/.ansible/tmp/ansible-tmp-1604856819.0247443-194052-180695334418942 `\" && echo ansible-tmp-1604856819.0247443-194052-180695334418942=\"` echo /root/.ansible/tmp/ansible-tmp-1604856819.0247443-194052-180695334418942 `\" ), exited with result 1", "unreachable": true}
```
|
https://github.com/ansible/ansible/issues/72541
|
https://github.com/ansible/ansible/pull/72543
|
de5858f48dc9e1ce9117034e0d7e76806f420ca8
|
aa4d53ccdfe9fb8f5a97e058703286adfbc91d08
| 2020-11-09T15:12:51Z |
python
| 2020-11-09T21:21:17Z |
test/integration/targets/delegate_to/runme.sh
|
#!/usr/bin/env bash
set -eux
platform="$(uname)"
function setup() {
if [[ "${platform}" == "FreeBSD" ]] || [[ "${platform}" == "Darwin" ]]; then
ifconfig lo0
existing=$(ifconfig lo0 | grep '^[[:blank:]]inet 127\.0\.0\. ' || true)
echo "${existing}"
for i in 3 4 254; do
ip="127.0.0.${i}"
if [[ "${existing}" != *"${ip}"* ]]; then
ifconfig lo0 alias "${ip}" up
fi
done
ifconfig lo0
fi
}
function teardown() {
if [[ "${platform}" == "FreeBSD" ]] || [[ "${platform}" == "Darwin" ]]; then
for i in 3 4 254; do
ip="127.0.0.${i}"
if [[ "${existing}" != *"${ip}"* ]]; then
ifconfig lo0 -alias "${ip}"
fi
done
ifconfig lo0
fi
}
setup
trap teardown EXIT
ANSIBLE_SSH_ARGS='-C -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null' \
ANSIBLE_HOST_KEY_CHECKING=false ansible-playbook test_delegate_to.yml -i inventory -v "$@"
# this test is not doing what it says it does, also relies on var that should not be available
#ansible-playbook test_loop_control.yml -v "$@"
ansible-playbook test_delegate_to_loop_randomness.yml -v "$@"
ansible-playbook delegate_and_nolog.yml -i inventory -v "$@"
ansible-playbook delegate_facts_block.yml -i inventory -v "$@"
ansible-playbook test_delegate_to_loop_caching.yml -i inventory -v "$@"
# ensure we are using correct settings when delegating
ANSIBLE_TIMEOUT=3 ansible-playbook delegate_vars_hanldling.yml -i inventory -v "$@"
ansible-playbook has_hostvars.yml -i inventory -v "$@"
# test ansible_x_interpreter
# python
source virtualenv.sh
(
cd "${OUTPUT_DIR}"/venv/bin
ln -s python firstpython
ln -s python secondpython
)
ansible-playbook verify_interpreter.yml -i inventory_interpreters -v "$@"
ansible-playbook discovery_applied.yml -i inventory -v "$@"
ansible-playbook resolve_vars.yml -i inventory -v "$@"
ansible-playbook test_delegate_to_lookup_context.yml -i inventory -v "$@"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,368 |
Add CI platform: rhel/7.9
|
##### SUMMARY
Replace the `rhel/7.8` platform in the test matrix with `rhel/7.9`.
RHEL 7.9 was released in September, 2020.
##### ISSUE TYPE
Feature Idea
##### COMPONENT NAME
shippable.yml
|
https://github.com/ansible/ansible/issues/72368
|
https://github.com/ansible/ansible/pull/72558
|
fa2be89cd44f0c867f24351c3ba73d5e849cb507
|
d451433e5d96c9f2f8cbe30c316c128ff591edf1
| 2020-10-28T00:14:19Z |
python
| 2020-11-10T06:53:22Z |
shippable.yml
|
language: python
env:
matrix:
- T=none
matrix:
exclude:
- env: T=none
include:
- env: T=sanity/1
- env: T=sanity/2
- env: T=sanity/3
- env: T=sanity/4
- env: T=sanity/5
- env: T=units/2.6
- env: T=units/2.7
- env: T=units/3.5
- env: T=units/3.6
- env: T=units/3.7
- env: T=units/3.8
- env: T=units/3.9
- env: T=windows/2012/1
- env: T=windows/2012-R2/1
- env: T=windows/2016/1
- env: T=windows/2019/1
- env: T=macos/10.15/1
- env: T=rhel/7.8/1
- env: T=rhel/8.2/1
- env: T=freebsd/11.1/1
- env: T=freebsd/12.2/1
- env: T=linux/alpine3/1
- env: T=linux/centos6/1
- env: T=linux/centos7/1
- env: T=linux/centos8/1
- env: T=linux/fedora31/1
- env: T=linux/fedora32/1
- env: T=linux/opensuse15py2/1
- env: T=linux/opensuse15/1
- env: T=linux/ubuntu1604/1
- env: T=linux/ubuntu1804/1
- env: T=macos/10.15/2
- env: T=rhel/7.8/2
- env: T=rhel/8.2/2
- env: T=freebsd/11.1/2
- env: T=freebsd/12.2/2
- env: T=linux/alpine3/2
- env: T=linux/centos6/2
- env: T=linux/centos7/2
- env: T=linux/centos8/2
- env: T=linux/fedora31/2
- env: T=linux/fedora32/2
- env: T=linux/opensuse15py2/2
- env: T=linux/opensuse15/2
- env: T=linux/ubuntu1604/2
- env: T=linux/ubuntu1804/2
- env: T=macos/10.15/3
- env: T=rhel/7.8/3
- env: T=rhel/8.2/3
- env: T=freebsd/11.1/3
- env: T=freebsd/12.2/3
- env: T=linux/alpine3/3
- env: T=linux/centos6/3
- env: T=linux/centos7/3
- env: T=linux/centos8/3
- env: T=linux/fedora31/3
- env: T=linux/fedora32/3
- env: T=linux/opensuse15py2/3
- env: T=linux/opensuse15/3
- env: T=linux/ubuntu1604/3
- env: T=linux/ubuntu1804/3
- env: T=macos/10.15/4
- env: T=rhel/7.8/4
- env: T=rhel/8.2/4
- env: T=freebsd/11.1/4
- env: T=freebsd/12.2/4
- env: T=linux/alpine3/4
- env: T=linux/centos6/4
- env: T=linux/centos7/4
- env: T=linux/centos8/4
- env: T=linux/fedora31/4
- env: T=linux/fedora32/4
- env: T=linux/opensuse15py2/4
- env: T=linux/opensuse15/4
- env: T=linux/ubuntu1604/4
- env: T=linux/ubuntu1804/4
- env: T=macos/10.15/5
- env: T=rhel/7.8/5
- env: T=rhel/8.2/5
- env: T=freebsd/11.1/5
- env: T=freebsd/12.2/5
- env: T=linux/alpine3/5
- env: T=linux/centos6/5
- env: T=linux/centos7/5
- env: T=linux/centos8/5
- env: T=linux/fedora31/5
- env: T=linux/fedora32/5
- env: T=linux/opensuse15py2/5
- env: T=linux/opensuse15/5
- env: T=linux/ubuntu1604/5
- env: T=linux/ubuntu1804/5
- env: T=galaxy/2.7/1
- env: T=galaxy/3.6/1
- env: T=generic/2.7/1
- env: T=generic/3.6/1
- env: T=i/osx/10.11
- env: T=i/rhel/7.8
- env: T=i/rhel/8.2
- env: T=i/freebsd/11.1
- env: T=i/freebsd/12.2
- env: T=i/linux/centos6
- env: T=i/linux/centos7
- env: T=i/linux/centos8
- env: T=i/linux/fedora31
- env: T=i/linux/fedora32
- env: T=i/linux/opensuse15py2
- env: T=i/linux/opensuse15
- env: T=i/linux/ubuntu1604
- env: T=i/linux/ubuntu1804
- env: T=i/windows/2012
- env: T=i/windows/2012-R2
- env: T=i/windows/2016
- env: T=i/windows/2019
- env: T=i/ios/csr1000v//1
- env: T=i/vyos/1.1.8/2.7/1
- env: T=i/vyos/1.1.8/3.6/1
- env: T=i/aws/2.7/1
- env: T=i/aws/3.6/1
- env: T=i/cloud//1
branches:
except:
- "*-patch-*"
- "revert-*-*"
build:
pre_ci_boot:
image_name: quay.io/ansible/shippable-build-container
image_tag: 6.10.4.0
pull: true
options: --privileged=true --net=bridge
ci:
- test/utils/shippable/timing.sh test/utils/shippable/shippable.sh $T
integrations:
notifications:
- integrationName: email
type: email
on_success: never
on_failure: never
on_start: never
on_pull_request: never
- integrationName: irc
type: irc
recipients:
- "chat.freenode.net#ansible-notices"
on_success: change
on_failure: always
on_start: never
on_pull_request: always
- integrationName: slack
type: slack
recipients:
- "#shippable"
on_success: change
on_failure: always
on_start: never
on_pull_request: never
|
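The matrix above repeats each remote platform across five test groups plus one incidental (`i/`) row, so the rename the issue requests touches six lines in this file. A throwaway sketch, assuming the same five-group layout survives the PR, of the entries `rhel/7.9` would need:
```python
platform = 'rhel/7.9'
entries = ['- env: T={0}/{1}'.format(platform, group) for group in range(1, 6)]
entries.append('- env: T=i/{0}'.format(platform))  # the incidental-change row
print('\n'.join(entries))
```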
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,368 |
Add CI platform: rhel/7.9
|
##### SUMMARY
Replace the `rhel/7.8` platform in the test matrix with `rhel/7.9`.
RHEL 7.9 was released in September, 2020.
##### ISSUE TYPE
Feature Idea
##### COMPONENT NAME
shippable.yml
|
https://github.com/ansible/ansible/issues/72368
|
https://github.com/ansible/ansible/pull/72558
|
fa2be89cd44f0c867f24351c3ba73d5e849cb507
|
d451433e5d96c9f2f8cbe30c316c128ff591edf1
| 2020-10-28T00:14:19Z |
python
| 2020-11-10T06:53:22Z |
test/integration/targets/yum/tasks/main.yml
|
# (c) 2014, James Tanner <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Note: We install the yum package onto Fedora so that this will work on dnf systems
# We want to test that for people who don't want to upgrade their systems.
- block:
- import_tasks: yum.yml
always:
- name: remove installed packages
yum:
name:
- bc
- sos
state: absent
- name: remove installed group
yum:
name: "@Custom Group"
state: absent
- name: On Fedora 28 the above won't remove the group which results in a failure in repo.yml below
yum:
name: dinginessentail
state: absent
when:
- ansible_distribution in ['Fedora']
when:
- ansible_distribution in ['RedHat', 'CentOS', 'ScientificLinux', 'Fedora']
- block:
- import_tasks: repo.yml
- import_tasks: yum_group_remove.yml
when:
- ansible_distribution in ['RedHat', 'CentOS', 'ScientificLinux']
always:
- yum_repository:
name: "{{ item }}"
state: absent
loop: "{{ repos }}"
- command: yum clean metadata
when:
- ansible_distribution in ['RedHat', 'CentOS', 'ScientificLinux', 'Fedora']
- import_tasks: yuminstallroot.yml
when:
- ansible_distribution in ['RedHat', 'CentOS', 'ScientificLinux', 'Fedora']
- import_tasks: proxy.yml
when:
- ansible_distribution in ['RedHat', 'CentOS', 'ScientificLinux', 'Fedora']
- import_tasks: check_mode_consistency.yml
when:
- (ansible_distribution in ['RedHat', 'CentOS', 'ScientificLinux'] and ansible_distribution_major_version|int == 7)
- import_tasks: lock.yml
when:
- ansible_distribution in ['RedHat', 'CentOS', 'ScientificLinux']
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,368 |
Add CI platform: rhel/7.9
|
##### SUMMARY
Replace the `rhel/7.8` platform in the test matrix with `rhel/7.9`.
RHEL 7.9 was released in September, 2020.
##### ISSUE TYPE
Feature Idea
##### COMPONENT NAME
shippable.yml
|
https://github.com/ansible/ansible/issues/72368
|
https://github.com/ansible/ansible/pull/72558
|
fa2be89cd44f0c867f24351c3ba73d5e849cb507
|
d451433e5d96c9f2f8cbe30c316c128ff591edf1
| 2020-10-28T00:14:19Z |
python
| 2020-11-10T06:53:22Z |
test/integration/targets/yum/tasks/yum.yml
|
# UNINSTALL
- name: uninstall sos
yum: name=sos state=removed
register: yum_result
- name: check sos with rpm
shell: rpm -q sos
ignore_errors: True
register: rpm_result
- name: verify uninstallation of sos
assert:
that:
- "yum_result is success"
- "rpm_result is failed"
# UNINSTALL AGAIN
- name: uninstall sos again in check mode
yum: name=sos state=removed
check_mode: true
register: yum_result
- name: verify no change on re-uninstall in check mode
assert:
that:
- "not yum_result is changed"
- name: uninstall sos again
yum: name=sos state=removed
register: yum_result
- name: verify no change on re-uninstall
assert:
that:
- "not yum_result is changed"
# INSTALL
- name: install sos in check mode
yum: name=sos state=present
check_mode: true
register: yum_result
- name: verify installation of sos in check mode
assert:
that:
- "yum_result is changed"
- name: install sos
yum: name=sos state=present
register: yum_result
- name: verify installation of sos
assert:
that:
- "yum_result is success"
- "yum_result is changed"
- name: verify yum module outputs
assert:
that:
- "'changed' in yum_result"
- "'msg' in yum_result"
- "'results' in yum_result"
- name: check sos with rpm
shell: rpm -q sos
# INSTALL AGAIN
- name: install sos again in check mode
yum: name=sos state=present
check_mode: true
register: yum_result
- name: verify no change on second install in check mode
assert:
that:
- "not yum_result is changed"
- name: install sos again
yum: name=sos state=present
register: yum_result
- name: verify no change on second install
assert:
that:
- "not yum_result is changed"
- name: install sos again with empty string enablerepo
yum: name=sos state=present enablerepo=""
register: yum_result
- name: verify no change on third install with empty string enablerepo
assert:
that:
- "yum_result is success"
- "not yum_result is changed"
# This test case is unfortunately distro specific because we have to specify
# repo names which are not the same across Fedora/RHEL/CentOS for base/updates
- name: install sos again with missing repo enablerepo
yum:
name: sos
state: present
enablerepo:
- "thisrepodoesnotexist"
- "base"
- "updates"
disablerepo: "*"
register: yum_result
when: ansible_distribution == 'CentOS'
- name: verify no change on fourth install with missing repo enablerepo (yum)
assert:
that:
- "yum_result is success"
- "yum_result is not changed"
when: ansible_distribution == 'CentOS'
# This test case is unfortunately distro specific because we have to specify
# repo names which are not the same across Fedora/RHEL/CentOS for base/updates
- name: install sos again with disable all and enable select repo(s)
yum:
name: sos
state: present
enablerepo:
- "base"
- "updates"
disablerepo: "*"
register: yum_result
when: ansible_distribution == 'CentOS'
- name: verify no change on install with disablerepo wildcard and selected enablerepo (yum)
assert:
that:
- "yum_result is success"
- "yum_result is not changed"
when: ansible_distribution == 'CentOS'
- name: install sos again with only missing repo enablerepo
yum:
name: sos
state: present
enablerepo: "thisrepodoesnotexist"
ignore_errors: true
register: yum_result
- name: verify failure on fifth install with only missing repo enablerepo (yum)
assert:
that:
- "yum_result is not success"
when: ansible_pkg_mgr == 'yum'
- name: verify success on fifth install with only missing repo enablerepo (dnf)
assert:
that:
- "yum_result is success"
when: ansible_pkg_mgr == 'dnf'
# INSTALL AGAIN WITH LATEST
- name: install sos again with state latest in check mode
yum: name=sos state=latest
check_mode: true
register: yum_result
- name: verify install sos again with state latest in check mode
assert:
that:
- "not yum_result is changed"
- name: install sos again with state latest idempotence
yum: name=sos state=latest
register: yum_result
- name: verify install sos again with state latest idempotence
assert:
that:
- "not yum_result is changed"
# INSTALL WITH LATEST
- name: uninstall sos
yum: name=sos state=removed
register: yum_result
- name: verify uninstall sos
assert:
that:
- "yum_result is successful"
- name: copy yum.conf file in case it is missing
copy:
src: yum.conf
dest: /etc/yum.conf
force: False
register: yum_conf_copy
- block:
- name: install sos with state latest in check mode with config file param
yum: name=sos state=latest conf_file=/etc/yum.conf
check_mode: true
register: yum_result
- name: verify install sos with state latest in check mode with config file param
assert:
that:
- "yum_result is changed"
always:
- name: remove tmp yum.conf file if we created it
file:
path: /etc/yum.conf
state: absent
when: yum_conf_copy is changed
- name: install sos with state latest in check mode
yum: name=sos state=latest
check_mode: true
register: yum_result
- name: verify install sos with state latest in check mode
assert:
that:
- "yum_result is changed"
- name: install sos with state latest
yum: name=sos state=latest
register: yum_result
- name: verify install sos with state latest
assert:
that:
- "yum_result is changed"
- name: install sos with state latest idempotence
yum: name=sos state=latest
register: yum_result
- name: verify install sos with state latest idempotence
assert:
that:
- "not yum_result is changed"
- name: install sos with state latest idempotence with config file param
yum: name=sos state=latest
register: yum_result
- name: verify install sos with state latest idempotence with config file param
assert:
that:
- "not yum_result is changed"
# Multiple packages
- name: uninstall sos and bc
yum: name=sos,bc state=removed
- name: check sos with rpm
shell: rpm -q sos
ignore_errors: True
register: rpm_sos_result
- name: check bc with rpm
shell: rpm -q bc
ignore_errors: True
register: rpm_bc_result
- name: verify packages installed
assert:
that:
- "rpm_sos_result is failed"
- "rpm_bc_result is failed"
- name: install sos and bc as comma separated
yum: name=sos,bc state=present
register: yum_result
- name: verify packages installed
assert:
that:
- "yum_result is success"
- "yum_result is changed"
- name: check sos with rpm
shell: rpm -q sos
- name: check bc with rpm
shell: rpm -q bc
- name: uninstall sos and bc
yum: name=sos,bc state=removed
register: yum_result
- name: install sos and bc as list
yum:
name:
- sos
- bc
state: present
register: yum_result
- name: verify packages installed
assert:
that:
- "yum_result is success"
- "yum_result is changed"
- name: check sos with rpm
shell: rpm -q sos
- name: check bc with rpm
shell: rpm -q bc
- name: uninstall sos and bc
yum: name=sos,bc state=removed
register: yum_result
- name: install sos and bc as comma separated with spaces
yum:
name: "sos, bc"
state: present
register: yum_result
- name: verify packages installed
assert:
that:
- "yum_result is success"
- "yum_result is changed"
- name: check sos with rpm
shell: rpm -q sos
- name: check bc with rpm
shell: rpm -q bc
- name: uninstall sos and bc
yum: name=sos,bc state=removed
- name: install non-existent rpm
yum:
name: does-not-exist
register: non_existent_rpm
ignore_errors: True
- name: check non-existent rpm install failed
assert:
that:
- non_existent_rpm is failed
# Install in installroot='/'
- name: install sos
yum: name=sos state=present installroot='/'
register: yum_result
- name: verify installation of sos
assert:
that:
- "yum_result is success"
- "yum_result is changed"
- name: verify yum module outputs
assert:
that:
- "'changed' in yum_result"
- "'msg' in yum_result"
- "'results' in yum_result"
- name: check sos with rpm
shell: rpm -q sos --root=/
- name: uninstall sos
yum:
name: sos
installroot: '/'
state: removed
register: yum_result
- name: Test download_only
yum:
name: sos
state: latest
download_only: true
register: yum_result
- name: verify download of sos (part 1 -- yum "install" succeeded)
assert:
that:
- "yum_result is success"
- "yum_result is changed"
- name: uninstall sos (noop)
yum:
name: sos
state: removed
register: yum_result
- name: verify download of sos (part 2 -- nothing removed during uninstall)
assert:
that:
- "yum_result is success"
- "not yum_result is changed"
- name: uninstall sos for downloadonly/downloaddir test
yum:
name: sos
state: absent
- name: Test download_only/download_dir
yum:
name: sos
state: latest
download_only: true
download_dir: "/var/tmp/packages"
register: yum_result
- name: verify yum output
assert:
that:
- "yum_result is success"
- "yum_result is changed"
- command: "ls /var/tmp/packages"
register: ls_out
- name: Verify specified download_dir was used
assert:
that:
- "'sos' in ls_out.stdout"
- name: install group
yum:
name: "@Custom Group"
state: present
register: yum_result
- name: verify installation of the group
assert:
that:
- "yum_result is success"
- "yum_result is changed"
- name: verify yum module outputs
assert:
that:
- "'changed' in yum_result"
- "'msg' in yum_result"
- "'results' in yum_result"
- name: install the group again
yum:
name: "@Custom Group"
state: present
register: yum_result
- name: verify nothing changed
assert:
that:
- "yum_result is success"
- "not yum_result is changed"
- name: verify yum module outputs
assert:
that:
- "'changed' in yum_result"
- "'msg' in yum_result"
- "'results' in yum_result"
- name: install the group again but also with a package that is not yet installed
yum:
name:
- "@Custom Group"
- sos
state: present
register: yum_result
- name: verify sos is installed
assert:
that:
- "yum_result is success"
- "yum_result is changed"
- name: verify yum module outputs
assert:
that:
- "'changed' in yum_result"
- "'msg' in yum_result"
- "'results' in yum_result"
- name: try to install the group again, with --check to check 'changed'
yum:
name: "@Custom Group"
state: present
check_mode: yes
register: yum_result
- name: verify nothing changed
assert:
that:
- "not yum_result is changed"
- name: verify yum module outputs
assert:
that:
- "'changed' in yum_result"
- "'msg' in yum_result"
- "'results' in yum_result"
- name: try to install non existing group
yum:
name: "@non-existing-group"
state: present
register: yum_result
ignore_errors: True
- name: verify installation of the non existing group failed
assert:
that:
- "yum_result is failed"
- "not yum_result is changed"
- "yum_result is failed"
- name: verify yum module outputs
assert:
that:
- "'changed' in yum_result"
- "'msg' in yum_result"
- "'results' in yum_result"
- name: try to install non existing file
yum:
name: /tmp/non-existing-1.0.0.fc26.noarch.rpm
state: present
register: yum_result
ignore_errors: yes
- name: verify installation failed
assert:
that:
- "yum_result is failed"
- "not yum_result is changed"
- name: verify yum module outputs
assert:
that:
- "'changed' in yum_result"
- "'msg' in yum_result"
- name: try to install from non existing url
yum:
name: https://s3.amazonaws.com/ansible-ci-files/test/integration/targets/yum/non-existing-1.0.0.fc26.noarch.rpm
state: present
register: yum_result
ignore_errors: yes
- name: verify installation failed
assert:
that:
- "yum_result is failed"
- "not yum_result is changed"
- name: verify yum module outputs
assert:
that:
- "'changed' in yum_result"
- "'msg' in yum_result"
- name: use latest to install httpd
yum:
name: httpd
state: latest
register: yum_result
- name: verify httpd was installed
assert:
that:
- "'changed' in yum_result"
- name: uninstall httpd
yum:
name: httpd
state: removed
- name: update httpd only if it exists
yum:
name: httpd
state: latest
update_only: yes
register: yum_result
- name: verify httpd not installed
assert:
that:
- "not yum_result is changed"
- "'Packages providing httpd not installed due to update_only specified' in yum_result.results"
- name: try to install incompatible arch rpm on non-ppc64le, should fail
yum:
name: https://s3.amazonaws.com/ansible-ci-files/test/integration/targets/yum/banner-1.3.4-3.el7.ppc64le.rpm
state: present
register: yum_result
ignore_errors: True
when:
- ansible_architecture not in ['ppc64le']
- name: verify that yum failed on non-ppc64le
assert:
that:
- "not yum_result is changed"
- "yum_result is failed"
when:
- ansible_architecture not in ['ppc64le']
- name: try to install incompatible arch rpm on ppc64le, should fail
yum:
name: https://s3.amazonaws.com/ansible-ci-files/test/integration/targets/yum/tinyproxy-1.10.0-3.el7.x86_64.rpm
state: present
register: yum_result
ignore_errors: True
when:
- ansible_architecture in ['ppc64le']
- name: verify that yum failed on ppc64le
assert:
that:
- "not yum_result is changed"
- "yum_result is failed"
when:
- ansible_architecture in ['ppc64le']
# setup for testing installing an RPM from url
- set_fact:
pkg_name: fpaste
- name: cleanup
yum:
name: "{{ pkg_name }}"
state: absent
- set_fact:
pkg_url: https://s3.amazonaws.com/ansible-ci-files/test/integration/targets/yum/fpaste-0.3.7.4.1-2.el7.noarch.rpm
when: ansible_python.version.major == 2
- set_fact:
pkg_url: https://s3.amazonaws.com/ansible-ci-files/test/integration/targets/yum/fpaste-0.3.9.2-1.fc28.noarch.rpm
when: ansible_python.version.major == 3
# setup end
- name: download an rpm
get_url:
url: "{{ pkg_url }}"
dest: "/tmp/{{ pkg_name }}.rpm"
- name: install the downloaded rpm
yum:
name: "/tmp/{{ pkg_name }}.rpm"
state: present
disable_gpg_check: true
register: yum_result
- name: verify installation
assert:
that:
- "yum_result is success"
- "yum_result is changed"
- "yum_result is not failed"
- name: verify yum module outputs
assert:
that:
- "'changed' in yum_result"
- "'msg' in yum_result"
- "'results' in yum_result"
- name: install the downloaded rpm again
yum:
name: "/tmp/{{ pkg_name }}.rpm"
state: present
register: yum_result
- name: verify installation
assert:
that:
- "yum_result is success"
- "not yum_result is changed"
- "yum_result is not failed"
- name: verify yum module outputs
assert:
that:
- "'changed' in yum_result"
- "'msg' in yum_result"
- "'results' in yum_result"
- name: clean up
yum:
name: "{{ pkg_name }}"
state: absent
- name: install from url
yum:
name: "{{ pkg_url }}"
state: present
disable_gpg_check: true
register: yum_result
- name: verify installation
assert:
that:
- "yum_result is success"
- "yum_result is changed"
- "yum_result is not failed"
- name: verify yum module outputs
assert:
that:
- "'changed' in yum_result"
- "'msg' in yum_result"
- "'results' in yum_result"
- name: Create a temp RPM file which does not contain nevra information
file:
name: "/tmp/non_existent_pkg.rpm"
state: touch
- name: Try installing RPM file which does not contain nevra information
yum:
name: "/tmp/non_existent_pkg.rpm"
state: present
register: no_nevra_info_result
ignore_errors: yes
- name: Verify RPM failed to install
assert:
that:
- "'changed' in no_nevra_info_result"
- "'msg' in no_nevra_info_result"
- name: Delete a temp RPM file
file:
name: "/tmp/non_existent_pkg.rpm"
state: absent
- name: get yum version
yum:
list: yum
register: yum_version
- name: set yum_version of installed version
set_fact:
yum_version: "{%- if item.yumstate == 'installed' -%}{{ item.version }}{%- else -%}{{ yum_version }}{%- endif -%}"
with_items: "{{ yum_version.results }}"
- name: Ensure double uninstall of wildcard globs works
block:
- name: "Install lohit-*-fonts"
yum:
name: "lohit-*-fonts"
state: present
- name: "Remove lohit-*-fonts (1st time)"
yum:
name: "lohit-*-fonts"
state: absent
register: remove_lohit_fonts_1
- name: "Verify lohit-*-fonts (1st time)"
assert:
that:
- "remove_lohit_fonts_1 is changed"
- "'msg' in remove_lohit_fonts_1"
- "'results' in remove_lohit_fonts_1"
- name: "Remove lohit-*-fonts (2nd time)"
yum:
name: "lohit-*-fonts"
state: absent
register: remove_lohit_fonts_2
- name: "Verify lohit-*-fonts (2nd time)"
assert:
that:
- "remove_lohit_fonts_2 is not changed"
- "'msg' in remove_lohit_fonts_2"
- "'results' in remove_lohit_fonts_2"
- "'lohit-*-fonts is not installed' in remove_lohit_fonts_2['results']"
- block:
- name: uninstall bc
yum: name=bc state=removed
- name: check bc with rpm
shell: rpm -q bc
ignore_errors: True
register: rpm_bc_result
- name: verify bc is uninstalled
assert:
that:
- "rpm_bc_result is failed"
- name: exclude bc (yum backend)
lineinfile:
dest: /etc/yum.conf
regexp: (^exclude=)(.)*
line: "exclude=bc*"
state: present
when: ansible_pkg_mgr == 'yum'
- name: exclude bc (dnf backend)
lineinfile:
dest: /etc/dnf/dnf.conf
regexp: (^excludepkgs=)(.)*
line: "excludepkgs=bc*"
state: present
when: ansible_pkg_mgr == 'dnf'
# begin test case where disable_excludes is supported
- name: Try install bc without disable_excludes
yum: name=bc state=latest
register: yum_bc_result
ignore_errors: True
- name: verify bc did not install because it is in exclude list
assert:
that:
- "yum_bc_result is failed"
- name: install bc with disable_excludes
yum: name=bc state=latest disable_excludes=all
register: yum_bc_result_using_excludes
- name: verify bc did install using disable_excludes=all
assert:
that:
- "yum_bc_result_using_excludes is success"
- "yum_bc_result_using_excludes is changed"
- "yum_bc_result_using_excludes is not failed"
- name: remove exclude bc (cleanup yum.conf)
lineinfile:
dest: /etc/yum.conf
regexp: (^exclude=bc*)
line: "exclude="
state: present
when: ansible_pkg_mgr == 'yum'
- name: remove exclude bc (cleanup dnf.conf)
lineinfile:
dest: /etc/dnf/dnf.conf
regexp: (^excludepkgs=bc*)
line: "excludepkgs="
state: present
when: ansible_pkg_mgr == 'dnf'
# Fedora < 26 has a bug in dnf where package excludes in dnf.conf aren't
# actually honored and those releases are EOL'd so we have no expectation they
# will ever be fixed
when: not ((ansible_distribution == "Fedora") and (ansible_distribution_major_version|int < 26))
- name: Check that packages with Provides are handled correctly in state=absent
block:
- name: Install test packages
yum:
name:
- https://ansible-ci-files.s3.amazonaws.com/test/integration/targets/yum/test-package-that-provides-toaster-1.3.3.7-1.el7.noarch.rpm
- https://ansible-ci-files.s3.amazonaws.com/test/integration/targets/yum/toaster-1.2.3.4-1.el7.noarch.rpm
disable_gpg_check: true
register: install
- name: Remove toaster
yum:
name: toaster
state: absent
register: remove
- name: rpm -qa
command: rpm -qa
register: rpmqa
- assert:
that:
- install is successful
- install is changed
- remove is successful
- remove is changed
- "'toaster-1.2.3.4' not in rpmqa.stdout"
- "'test-package-that-provides-toaster' in rpmqa.stdout"
always:
- name: Remove test packages
yum:
name:
- test-package-that-provides-toaster
- toaster
state: absent
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,368 |
Add CI platform: rhel/7.9
|
##### SUMMARY
Replace the `rhel/7.8` platform in the test matrix with `rhel/7.9`.
RHEL 7.9 was released in September, 2020.
##### ISSUE TYPE
Feature Idea
##### COMPONENT NAME
shippable.yml
|
https://github.com/ansible/ansible/issues/72368
|
https://github.com/ansible/ansible/pull/72558
|
fa2be89cd44f0c867f24351c3ba73d5e849cb507
|
d451433e5d96c9f2f8cbe30c316c128ff591edf1
| 2020-10-28T00:14:19Z |
python
| 2020-11-10T06:53:22Z |
test/lib/ansible_test/_data/completion/remote.txt
|
freebsd/11.1 python=2.7,3.6 python_dir=/usr/local/bin
freebsd/12.1 python=3.6,2.7 python_dir=/usr/local/bin
freebsd/12.2 python=3.7,2.7 python_dir=/usr/local/bin
osx/10.11 python=2.7 python_dir=/usr/local/bin
macos/10.15 python=3.8 python_dir=/usr/local/bin
rhel/7.6 python=2.7
rhel/7.8 python=2.7
rhel/8.1 python=3.6
rhel/8.2 python=3.6
aix/7.2 python=2.7 httptester=disabled temp-unicode=disabled pip-check=disabled
power/centos/7 python=2.7
|
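For the completion data just above, the requested change is a one-token rename. A hedged sketch of the resulting entries, assuming RHEL 7.9 keeps the same Python 2.7 default as 7.8:
```python
completion = [
    'rhel/7.6 python=2.7',
    'rhel/7.8 python=2.7',
    'rhel/8.2 python=3.6',
]
# Assumption: RHEL 7.9 keeps the Python 2.7 default of 7.8.
updated = [line.replace('rhel/7.8', 'rhel/7.9') for line in completion]
print('\n'.join(updated))
```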
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,257 |
ansible-doc crashes on some invalid collection names
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
`ansible-doc --list` crashes on https://github.com/ansible/ansible/blob/d18901dd4a11180d0204a43e4ccedc928293299f/lib/ansible/collections/list.py#L70 when the collection name contains more than one `.`, e.g. if someone passes the FQCN of a module.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ansible-doc
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.11.0.dev0 (devel d18901dd4a) last updated 2020/10/19 23:49:18 (GMT +000)
config file = None
configured module search path = [u'/home/zeke/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/zeke/git/ansible/lib/ansible
ansible collection location = /home/zeke/.ansible/collections:/usr/share/ansible/collections
executable location = /home/zeke/git/ansible/bin/ansible
python version = 2.7.5 (default, Jul 13 2018, 13:06:57) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]
libyaml = True
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
mkdir -p ~/.ansible/collections/ansible_collections # This bug won't trigger unless at least one configured collection directory exists
ansible-doc -vvv --list foo.bar.baz
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
```
[WARNING]: No plugins found.
```
(or a more specific warning about the invalid collection name.)
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
ERROR! Unexpected Exception, this is probably a bug: too many values to unpack
the full traceback was:
Traceback (most recent call last):
File "/home/zeke/git/ansible/bin/ansible-doc", line 125, in <module>
exit_code = cli.run()
File "/home/zeke/git/ansible/lib/ansible/cli/doc.py", line 215, in run
add_collection_plugins(self.plugin_list, plugin_type, coll_filter=coll_filter)
File "/home/zeke/git/ansible/lib/ansible/cli/doc.py", line 55, in add_collection_plugins
for b_path in b_colldirs:
File "/home/zeke/git/ansible/lib/ansible/collections/list.py", line 70, in list_collection_dirs
(nsp, coll) = coll_filter.split('.')
ValueError: too many values to unpack
```
|
https://github.com/ansible/ansible/issues/72257
|
https://github.com/ansible/ansible/pull/72296
|
48c08f410cd368b129fed61f9a58a0cc2b1df458
|
4f0e2fff957c67415958e71a03fe4fc7dabce87e
| 2020-10-20T00:03:16Z |
python
| 2020-11-10T16:46:15Z |
changelogs/fragments/skip_invalid_coll_name_when_listing.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,257 |
ansible-doc crashes on some invalid collection names
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
`ansible-doc --list` crashes on https://github.com/ansible/ansible/blob/d18901dd4a11180d0204a43e4ccedc928293299f/lib/ansible/collections/list.py#L70 when the collection name contains more than one `.`, e.g. if someone passes the FQCN of a module.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ansible-doc
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.11.0.dev0 (devel d18901dd4a) last updated 2020/10/19 23:49:18 (GMT +000)
config file = None
configured module search path = [u'/home/zeke/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/zeke/git/ansible/lib/ansible
ansible collection location = /home/zeke/.ansible/collections:/usr/share/ansible/collections
executable location = /home/zeke/git/ansible/bin/ansible
python version = 2.7.5 (default, Jul 13 2018, 13:06:57) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]
libyaml = True
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
mkdir -p ~/.ansible/collections/ansible_collections # This bug won't trigger unless at least one configured collection directory exists
ansible-doc -vvv --list foo.bar.baz
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
```
[WARNING]: No plugins found.
```
(or a more specific warning about the invalid collection name.)
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
ERROR! Unexpected Exception, this is probably a bug: too many values to unpack
the full traceback was:
Traceback (most recent call last):
File "/home/zeke/git/ansible/bin/ansible-doc", line 125, in <module>
exit_code = cli.run()
File "/home/zeke/git/ansible/lib/ansible/cli/doc.py", line 215, in run
add_collection_plugins(self.plugin_list, plugin_type, coll_filter=coll_filter)
File "/home/zeke/git/ansible/lib/ansible/cli/doc.py", line 55, in add_collection_plugins
for b_path in b_colldirs:
File "/home/zeke/git/ansible/lib/ansible/collections/list.py", line 70, in list_collection_dirs
(nsp, coll) = coll_filter.split('.')
ValueError: too many values to unpack
```
|
https://github.com/ansible/ansible/issues/72257
|
https://github.com/ansible/ansible/pull/72296
|
48c08f410cd368b129fed61f9a58a0cc2b1df458
|
4f0e2fff957c67415958e71a03fe4fc7dabce87e
| 2020-10-20T00:03:16Z |
python
| 2020-11-10T16:46:15Z |
lib/ansible/collections/list.py
|
# (c) 2019 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
from collections import defaultdict
from ansible.collections import is_collection_path
from ansible.module_utils._text import to_bytes
from ansible.utils.collection_loader import AnsibleCollectionConfig
from ansible.utils.display import Display
display = Display()
def list_valid_collection_paths(search_paths=None, warn=False):
"""
Filter out non-existing or invalid search_paths for collections
:param search_paths: list of text-string paths, if none load default config
:param warn: display warning if search_path does not exist
:return: subset of original list
"""
if search_paths is None:
search_paths = []
search_paths.extend(AnsibleCollectionConfig.collection_paths)
for path in search_paths:
b_path = to_bytes(path)
if not os.path.exists(b_path):
# warn for missing, but not if default
if warn:
display.warning("The configured collection path {0} does not exist.".format(path))
continue
if not os.path.isdir(b_path):
if warn:
display.warning("The configured collection path {0}, exists, but it is not a directory.".format(path))
continue
yield path
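# Usage sketch (comment added for illustration, not in the original file):
#   list(list_valid_collection_paths(['/tmp/colls', '/missing'], warn=True))
# yields the given paths that exist and are directories, plus the configured
# default collection paths, and warns about '/missing'.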
def list_collection_dirs(search_paths=None, coll_filter=None):
"""
Return paths for the specific collections found in passed or configured search paths
:param search_paths: list of text-string paths, if none load default config
:param coll_filter: limit collections to just the specific namespace or collection, if None all are returned
:return: list of collection directory paths
"""
collections = defaultdict(dict)
for path in list_valid_collection_paths(search_paths):
b_path = to_bytes(path)
if os.path.isdir(b_path):
b_coll_root = to_bytes(os.path.join(path, 'ansible_collections'))
if os.path.exists(b_coll_root) and os.path.isdir(b_coll_root):
coll = None
if coll_filter is None:
namespaces = os.listdir(b_coll_root)
else:
if '.' in coll_filter:
(nsp, coll) = coll_filter.split('.')
else:
nsp = coll_filter
namespaces = [nsp]
for ns in namespaces:
b_namespace_dir = os.path.join(b_coll_root, to_bytes(ns))
if os.path.isdir(b_namespace_dir):
if coll is None:
colls = os.listdir(b_namespace_dir)
else:
colls = [coll]
for collection in colls:
# skip dupe collections as they will be masked in execution
if collection not in collections[ns]:
b_coll = to_bytes(collection)
b_coll_dir = os.path.join(b_namespace_dir, b_coll)
if is_collection_path(b_coll_dir):
collections[ns][collection] = b_coll_dir
yield b_coll_dir
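The crash reported above comes from the unguarded two-value unpack of `coll_filter.split('.')` in `list_collection_dirs`. A minimal sketch of one way to harden that parsing follows; it illustrates the idea only and is not the actual change merged in PR 72296 (the helper name and error type are assumptions):
```
from ansible.errors import AnsibleError

def parse_coll_filter(coll_filter):
    # Accept 'namespace' or 'namespace.collection'; reject anything deeper,
    # such as a module FQCN like 'foo.bar.baz'.
    if coll_filter is None:
        return None, None
    parts = coll_filter.split('.')
    if len(parts) == 1:
        return parts[0], None
    if len(parts) == 2:
        return parts[0], parts[1]
    raise AnsibleError("Invalid collection pattern supplied: %s" % coll_filter)
```
With such a helper, `list_collection_dirs` could call `(nsp, coll) = parse_coll_filter(coll_filter)` and report a clean error instead of an unhandled `ValueError`.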
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,257 |
ansible-doc crashes on some invalid collection names
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
`ansible-doc --list` crashes on https://github.com/ansible/ansible/blob/d18901dd4a11180d0204a43e4ccedc928293299f/lib/ansible/collections/list.py#L70 when the collection name contains more than one `.`, e.g. if someone passes the FQCN of a module.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ansible-doc
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.11.0.dev0 (devel d18901dd4a) last updated 2020/10/19 23:49:18 (GMT +000)
config file = None
configured module search path = [u'/home/zeke/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/zeke/git/ansible/lib/ansible
ansible collection location = /home/zeke/.ansible/collections:/usr/share/ansible/collections
executable location = /home/zeke/git/ansible/bin/ansible
python version = 2.7.5 (default, Jul 13 2018, 13:06:57) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]
libyaml = True
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
mkdir -p ~/.ansible/collections/ansible_collections # This bug won't trigger unless at least one configured collection directory exists
ansible-doc -vvv --list foo.bar.baz
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
```
[WARNING]: No plugins found.
```
(or a more specific warning about the invalid collection name.)
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
ERROR! Unexpected Exception, this is probably a bug: too many values to unpack
the full traceback was:
Traceback (most recent call last):
File "/home/zeke/git/ansible/bin/ansible-doc", line 125, in <module>
exit_code = cli.run()
File "/home/zeke/git/ansible/lib/ansible/cli/doc.py", line 215, in run
add_collection_plugins(self.plugin_list, plugin_type, coll_filter=coll_filter)
File "/home/zeke/git/ansible/lib/ansible/cli/doc.py", line 55, in add_collection_plugins
for b_path in b_colldirs:
File "/home/zeke/git/ansible/lib/ansible/collections/list.py", line 70, in list_collection_dirs
(nsp, coll) = coll_filter.split('.')
ValueError: too many values to unpack
```
|
https://github.com/ansible/ansible/issues/72257
|
https://github.com/ansible/ansible/pull/72296
|
48c08f410cd368b129fed61f9a58a0cc2b1df458
|
4f0e2fff957c67415958e71a03fe4fc7dabce87e
| 2020-10-20T00:03:16Z |
python
| 2020-11-10T16:46:15Z |
test/integration/targets/ansible-doc/runme.sh
|
#!/usr/bin/env bash
set -eux
ansible-playbook test.yml -i inventory "$@"
# test keyword docs
ansible-doc -t keyword -l | grep 'vars_prompt: list of variables to prompt for.'
ansible-doc -t keyword vars_prompt | grep 'description: list of variables to prompt for.'
ansible-doc -t keyword asldkfjaslidfhals 2>&1 | grep 'Skipping Invalid keyword'
# collections testing
(
unset ANSIBLE_PLAYBOOK_DIR
cd "$(dirname "$0")"
# test module docs from collection
current_out="$(ansible-doc --playbook-dir ./ testns.testcol.fakemodule)"
expected_out="$(cat fakemodule.output)"
test "$current_out" == "$expected_out"
# test listing diff plugin types from collection
for ptype in cache inventory lookup vars
do
# each plugin type adds 1 from collection
# FIXME pre=$(ansible-doc -l -t ${ptype}|wc -l)
# FIXME post=$(ansible-doc -l -t ${ptype} --playbook-dir ./|wc -l)
# FIXME test "$pre" -eq $((post - 1))
# ensure we ONLY list from the collection
justcol=$(ansible-doc -l -t ${ptype} --playbook-dir ./ testns.testcol|wc -l)
test "$justcol" -eq 1
# ensure we get 0 plugins when restricting to collection, but not supplying it
justcol=$(ansible-doc -l -t ${ptype} testns.testcol|wc -l)
test "$justcol" -eq 0
# ensure we get 1 plugins when restricting namespace
justcol=$(ansible-doc -l -t ${ptype} --playbook-dir ./ testns|wc -l)
test "$justcol" -eq 1
done
)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,445 |
unarchive module: missing documentation on return values
|
##### SUMMARY
The unarchive module is missing documentation on its return values.
Only the `list_files` option mentions it at all:
> If set to True, return the list of files that are contained in the tarball.
It does not mention how this list is returned (Spoiler: It's in the `files` key, as a list of file names).
A `RETURN` section should be added to the module documentation.
Ideally, the `EXAMPLES` section should be extended to include one or two examples that use the return values somehow.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
modules/files/unarchive.py
##### ANSIBLE VERSION
```
ansible 2.9.2
config file = None
configured module search path = ['/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib64/python3.6/site-packages/ansible
executable location = /usr/lib/python-exec/python3.6/ansible
python version = 3.6.9 (default, Oct 5 2019, 11:39:46) [GCC 8.3.0]
```
##### CONFIGURATION
not relevant
##### OS / ENVIRONMENT
not relevant
##### ADDITIONAL INFORMATION
<!--- Describe how this improves the documentation, e.g. before/after situation or screenshots -->
<!--- HINT: You can paste gist.github.com links for larger files -->
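A RETURN block along the following lines would address this report. It is a sketch of the shape such documentation could take, grounded in the return values visible in the module code (`files` and `handler` are both set in `main()`); it is not the exact text merged in PR 72546:
```
RETURN = r'''
files:
  description: List of all the files in the archive.
  returned: When I(list_files) is C(True)
  type: list
  sample: ["file1", "file2"]
handler:
  description: Archive handler that was used to extract the file, for example C(TgzArchive).
  returned: always
  type: str
  sample: TgzArchive
'''
```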
|
https://github.com/ansible/ansible/issues/67445
|
https://github.com/ansible/ansible/pull/72546
|
a5eb788578b01672c5a8dc1fd04b23aaea9ff828
|
44a38c9f33e454370d6834304200023b7a4a39ad
| 2020-02-16T08:33:13Z |
python
| 2020-11-10T22:27:39Z |
changelogs/fragments/72546-unarchive-returndoc.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,445 |
unarchive module: missing documentation on return values
|
##### SUMMARY
The unarchive module is missing documentation on its return values.
Only the `list_files` option mentions it at all:
> If set to True, return the list of files that are contained in the tarball.
It does not mention how this list is returned (Spoiler: It's in the `files` key, as a list of file names).
A `RETURN` section should be added to the module documentation.
Ideally, the `EXAMPLES` section should be extended to include one or two examples that use the return values somehow.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
modules/files/unarchive.py
##### ANSIBLE VERSION
```
ansible 2.9.2
config file = None
configured module search path = ['/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib64/python3.6/site-packages/ansible
executable location = /usr/lib/python-exec/python3.6/ansible
python version = 3.6.9 (default, Oct 5 2019, 11:39:46) [GCC 8.3.0]
```
##### CONFIGURATION
not relevant
##### OS / ENVIRONMENT
not relevant
##### ADDITIONAL INFORMATION
<!--- Describe how this improves the documentation, e.g. before/after situation or screenshots -->
<!--- HINT: You can paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/67445
|
https://github.com/ansible/ansible/pull/72546
|
a5eb788578b01672c5a8dc1fd04b23aaea9ff828
|
44a38c9f33e454370d6834304200023b7a4a39ad
| 2020-02-16T08:33:13Z |
python
| 2020-11-10T22:27:39Z |
lib/ansible/modules/unarchive.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Michael DeHaan <[email protected]>
# Copyright: (c) 2013, Dylan Martin <[email protected]>
# Copyright: (c) 2015, Toshio Kuratomi <[email protected]>
# Copyright: (c) 2016, Dag Wieers <[email protected]>
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: unarchive
version_added: '1.4'
short_description: Unpacks an archive after (optionally) copying it from the local machine.
description:
- The C(unarchive) module unpacks an archive. It will not unpack a compressed file that does not contain an archive.
- By default, it will copy the source file from the local system to the target before unpacking.
- Set C(remote_src=yes) to unpack an archive which already exists on the target.
- If checksum validation is desired, use M(ansible.builtin.get_url) or M(ansible.builtin.uri) instead to fetch the file and set C(remote_src=yes).
- For Windows targets, use the M(community.windows.win_unzip) module instead.
options:
src:
description:
- If C(remote_src=no) (default), local path to archive file to copy to the target server; can be absolute or relative. If C(remote_src=yes), path on the
target server to existing archive file to unpack.
- If C(remote_src=yes) and C(src) contains C(://), the remote machine will download the file from the URL first (version_added 2.0). This is only for
simple cases; for full download support, use the M(ansible.builtin.get_url) module.
type: path
required: true
dest:
description:
- Remote absolute path where the archive should be unpacked.
type: path
required: true
copy:
description:
- If true, the file is copied from the local controller to the managed (remote) node; otherwise, the plugin will look for the src archive on the managed machine.
- This option has been deprecated in favor of C(remote_src).
- This option is mutually exclusive with C(remote_src).
type: bool
default: yes
creates:
description:
- If the specified absolute path (file or directory) already exists, this step will B(not) be run.
type: path
version_added: "1.6"
list_files:
description:
- If set to True, return the list of files that are contained in the tarball.
type: bool
default: no
version_added: "2.0"
exclude:
description:
- List the directory and file entries that you would like to exclude from the unarchive action.
type: list
elements: str
version_added: "2.1"
keep_newer:
description:
- Do not replace existing files that are newer than files from the archive.
type: bool
default: no
version_added: "2.1"
extra_opts:
description:
- Specify additional options by passing in an array.
- Each space-separated command-line option should be a new element of the array. See examples.
- Command-line options with multiple elements must use multiple lines in the array, one for each element.
type: list
elements: str
default: ""
version_added: "2.1"
remote_src:
description:
- Set to C(yes) to indicate the archived file is already on the remote system and not local to the Ansible controller.
- This option is mutually exclusive with C(copy).
type: bool
default: no
version_added: "2.2"
validate_certs:
description:
- This only applies if using a https URL as the source of the file.
- This should only be set to C(no) when used on personally controlled sites using a self-signed certificate.
- Prior to 2.2 the code worked as if this was set to C(yes).
type: bool
default: yes
version_added: "2.2"
extends_documentation_fragment:
- decrypt
- files
todo:
- Re-implement tar support using native tarfile module.
- Re-implement zip support using native zipfile module.
notes:
- Requires C(zipinfo) and C(gtar)/C(unzip) command on target host.
- Can handle I(.zip) files using C(unzip) as well as I(.tar), I(.tar.gz), I(.tar.bz2) and I(.tar.xz) files using C(gtar).
- Does not handle I(.gz) files, I(.bz2) files or I(.xz) files that do not contain a I(.tar) archive.
- Uses gtar's C(--diff) arg to calculate if changed or not. If this C(arg) is not
supported, it will always unpack the archive.
- Existing files/directories in the destination which are not in the archive
are not touched. This is the same behavior as a normal archive extraction.
- Existing files/directories in the destination which are not in the archive
are ignored for purposes of deciding if the archive should be unpacked or not.
seealso:
- module: community.general.archive
- module: community.general.iso_extract
- module: community.windows.win_unzip
author: Michael DeHaan
'''
EXAMPLES = r'''
- name: Extract foo.tgz into /var/lib/foo
unarchive:
src: foo.tgz
dest: /var/lib/foo
- name: Unarchive a file that is already on the remote machine
unarchive:
src: /tmp/foo.zip
dest: /usr/local/bin
remote_src: yes
- name: Unarchive a file that needs to be downloaded (added in 2.0)
unarchive:
src: https://example.com/example.zip
dest: /usr/local/bin
remote_src: yes
- name: Unarchive a file with extra options
unarchive:
src: /tmp/foo.zip
dest: /usr/local/bin
extra_opts:
- --transform
- s/^xxx/yyy/
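# Illustrative addition (not part of the original module documentation):
# capture the list of extracted files via the 'files' return value
- name: Extract foo.tgz and register the list of extracted files
  unarchive:
    src: foo.tgz
    dest: /var/lib/foo
    list_files: yes
  register: extracted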
'''
import binascii
import codecs
import datetime
import fnmatch
import grp
import os
import platform
import pwd
import re
import stat
import time
import traceback
from zipfile import ZipFile, BadZipfile
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.urls import fetch_file
from ansible.module_utils._text import to_bytes, to_native, to_text
try: # python 3.3+
from shlex import quote
except ImportError: # older python
from pipes import quote
# String from tar that shows the tar contents are different from the
# filesystem
OWNER_DIFF_RE = re.compile(r': Uid differs$')
GROUP_DIFF_RE = re.compile(r': Gid differs$')
MODE_DIFF_RE = re.compile(r': Mode differs$')
MOD_TIME_DIFF_RE = re.compile(r': Mod time differs$')
# NEWER_DIFF_RE = re.compile(r' is newer or same age.$')
EMPTY_FILE_RE = re.compile(r': : Warning: Cannot stat: No such file or directory$')
MISSING_FILE_RE = re.compile(r': Warning: Cannot stat: No such file or directory$')
ZIP_FILE_MODE_RE = re.compile(r'([r-][w-][SsTtx-]){3}')
INVALID_OWNER_RE = re.compile(r': Invalid owner')
INVALID_GROUP_RE = re.compile(r': Invalid group')
def crc32(path):
''' Return a CRC32 checksum of a file '''
with open(path, 'rb') as f:
file_content = f.read()
return binascii.crc32(file_content) & 0xffffffff
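# Note added for illustration (not in the original file): the '& 0xffffffff'
# mask normalizes the checksum to an unsigned 32-bit value (binascii.crc32
# can return negative numbers on Python 2), so it compares cleanly with the
# CRCs stored in zip archive entries.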
def shell_escape(string):
''' Quote meta-characters in the args for the unix shell '''
return re.sub(r'([^A-Za-z0-9_])', r'\\\1', string)
class UnarchiveError(Exception):
pass
class ZipArchive(object):
def __init__(self, src, b_dest, file_args, module):
self.src = src
self.b_dest = b_dest
self.file_args = file_args
self.opts = module.params['extra_opts']
self.module = module
self.excludes = module.params['exclude']
self.includes = []
self.cmd_path = self.module.get_bin_path('unzip')
self.zipinfocmd_path = self.module.get_bin_path('zipinfo')
self._files_in_archive = []
self._infodict = dict()
def _permstr_to_octal(self, modestr, umask):
''' Convert a Unix permission string (rw-r--r--) into a mode (0644) '''
revstr = modestr[::-1]
mode = 0
for j in range(0, 3):
for i in range(0, 3):
if revstr[i + 3 * j] in ['r', 'w', 'x', 's', 't']:
mode += 2 ** (i + 3 * j)
# The unzip utility does not support setting the stST bits
# if revstr[i + 3 * j] in ['s', 't', 'S', 'T' ]:
# mode += 2 ** (9 + j)
return (mode & ~umask)
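# Usage sketch (comments added for illustration, not in the original file):
#   self._permstr_to_octal('rw-r--r--', 0)      -> 0o644
#   self._permstr_to_octal('rwxrwxrwx', 0o022)  -> 0o755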
def _legacy_file_list(self):
unzip_bin = self.module.get_bin_path('unzip')
if not unzip_bin:
raise UnarchiveError('Python Zipfile cannot read %s and unzip not found' % self.src)
rc, out, err = self.module.run_command([unzip_bin, '-v', self.src])
if rc:
raise UnarchiveError('Neither python zipfile nor unzip can read %s' % self.src)
for line in out.splitlines()[3:-2]:
fields = line.split(None, 7)
self._files_in_archive.append(fields[7])
self._infodict[fields[7]] = int(fields[6])
def _crc32(self, path):
if self._infodict:
return self._infodict[path]
try:
archive = ZipFile(self.src)
except BadZipfile as e:
if e.args[0].lower().startswith('bad magic number'):
# Python2.4 can't handle zipfiles with > 64K files. Try using
# /usr/bin/unzip instead
self._legacy_file_list()
else:
raise
else:
try:
for item in archive.infolist():
self._infodict[item.filename] = int(item.CRC)
except Exception:
archive.close()
raise UnarchiveError('Unable to list files in the archive')
return self._infodict[path]
@property
def files_in_archive(self):
if self._files_in_archive:
return self._files_in_archive
self._files_in_archive = []
try:
archive = ZipFile(self.src)
except BadZipfile as e:
if e.args[0].lower().startswith('bad magic number'):
# Python2.4 can't handle zipfiles with > 64K files. Try using
# /usr/bin/unzip instead
self._legacy_file_list()
else:
raise
else:
try:
for member in archive.namelist():
exclude_flag = False
if self.excludes:
for exclude in self.excludes:
if fnmatch.fnmatch(member, exclude):
exclude_flag = True
break
if not exclude_flag:
self._files_in_archive.append(to_native(member))
except Exception:
archive.close()
raise UnarchiveError('Unable to list files in the archive')
archive.close()
return self._files_in_archive
def is_unarchived(self):
# BSD unzip doesn't support zipinfo listings with timestamp.
cmd = [self.zipinfocmd_path, '-T', '-s', self.src]
if self.excludes:
cmd.extend(['-x', ] + self.excludes)
rc, out, err = self.module.run_command(cmd)
old_out = out
diff = ''
out = ''
if rc == 0:
unarchived = True
else:
unarchived = False
# Get some information related to user/group ownership
umask = os.umask(0)
os.umask(umask)
systemtype = platform.system()
# Get current user and group information
groups = os.getgroups()
run_uid = os.getuid()
run_gid = os.getgid()
try:
run_owner = pwd.getpwuid(run_uid).pw_name
except (TypeError, KeyError):
run_owner = run_uid
try:
run_group = grp.getgrgid(run_gid).gr_name
except (KeyError, ValueError, OverflowError):
run_group = run_gid
# Get future user ownership
fut_owner = fut_uid = None
if self.file_args['owner']:
try:
tpw = pwd.getpwnam(self.file_args['owner'])
except KeyError:
try:
tpw = pwd.getpwuid(int(self.file_args['owner']))
except (TypeError, KeyError, ValueError):
tpw = pwd.getpwuid(run_uid)
fut_owner = tpw.pw_name
fut_uid = tpw.pw_uid
else:
try:
fut_owner = run_owner
except Exception:
pass
fut_uid = run_uid
# Get future group ownership
fut_group = fut_gid = None
if self.file_args['group']:
try:
tgr = grp.getgrnam(self.file_args['group'])
except (ValueError, KeyError):
try:
# no need to check isdigit() explicitly here, if we fail to
# parse, the ValueError will be caught.
tgr = grp.getgrgid(int(self.file_args['group']))
except (KeyError, ValueError, OverflowError):
tgr = grp.getgrgid(run_gid)
fut_group = tgr.gr_name
fut_gid = tgr.gr_gid
else:
try:
fut_group = run_group
except Exception:
pass
fut_gid = run_gid
for line in old_out.splitlines():
change = False
pcs = line.split(None, 7)
if len(pcs) != 8:
# Too few fields... probably a piece of the header or footer
continue
# Check first and seventh field in order to skip header/footer
if len(pcs[0]) != 7 and len(pcs[0]) != 10:
continue
if len(pcs[6]) != 15:
continue
# Possible entries:
# -rw-rws--- 1.9 unx 2802 t- defX 11-Aug-91 13:48 perms.2660
# -rw-a-- 1.0 hpf 5358 Tl i4:3 4-Dec-91 11:33 longfilename.hpfs
# -r--ahs 1.1 fat 4096 b- i4:2 14-Jul-91 12:58 EA DATA. SF
# --w------- 1.0 mac 17357 bx i8:2 4-May-92 04:02 unzip.macr
if pcs[0][0] not in 'dl-?' or not frozenset(pcs[0][1:]).issubset('rwxstah-'):
continue
ztype = pcs[0][0]
permstr = pcs[0][1:]
version = pcs[1]
ostype = pcs[2]
size = int(pcs[3])
path = to_text(pcs[7], errors='surrogate_or_strict')
# Skip excluded files
if path in self.excludes:
out += 'Path %s is excluded on request\n' % path
continue
# Itemized change requires L for symlink
if path[-1] == '/':
if ztype != 'd':
err += 'Path %s incorrectly tagged as "%s", but is a directory.\n' % (path, ztype)
ftype = 'd'
elif ztype == 'l':
ftype = 'L'
elif ztype == '-':
ftype = 'f'
elif ztype == '?':
ftype = 'f'
# Some files may be storing FAT permissions, not Unix permissions
# For FAT permissions, we will use a base permissions set of 777 if the item is a directory or has the execute bit set. Otherwise, 666.
# This permission will then be modified by the system UMask.
# BSD always applies the Umask, even to Unix permissions.
# For Unix style permissions on Linux or Mac, we want to use them directly.
# So we set the UMask for this file to zero. That permission set will then be unchanged when calling _permstr_to_octal
if len(permstr) == 6:
if path[-1] == '/':
permstr = 'rwxrwxrwx'
elif permstr == 'rwx---':
permstr = 'rwxrwxrwx'
else:
permstr = 'rw-rw-rw-'
file_umask = umask
elif 'bsd' in systemtype.lower():
file_umask = umask
else:
file_umask = 0
# Test string conformity
if len(permstr) != 9 or not ZIP_FILE_MODE_RE.match(permstr):
raise UnarchiveError('ZIP info perm format incorrect, %s' % permstr)
# DEBUG
# err += "%s%s %10d %s\n" % (ztype, permstr, size, path)
b_dest = os.path.join(self.b_dest, to_bytes(path, errors='surrogate_or_strict'))
try:
st = os.lstat(b_dest)
except Exception:
change = True
self.includes.append(path)
err += 'Path %s is missing\n' % path
diff += '>%s++++++.?? %s\n' % (ftype, path)
continue
# Compare file types
if ftype == 'd' and not stat.S_ISDIR(st.st_mode):
change = True
self.includes.append(path)
err += 'File %s already exists, but not as a directory\n' % path
diff += 'c%s++++++.?? %s\n' % (ftype, path)
continue
if ftype == 'f' and not stat.S_ISREG(st.st_mode):
change = True
unarchived = False
self.includes.append(path)
err += 'Directory %s already exists, but not as a regular file\n' % path
diff += 'c%s++++++.?? %s\n' % (ftype, path)
continue
if ftype == 'L' and not stat.S_ISLNK(st.st_mode):
change = True
self.includes.append(path)
err += 'Directory %s already exists, but not as a symlink\n' % path
diff += 'c%s++++++.?? %s\n' % (ftype, path)
continue
itemized = list('.%s.......??' % ftype)
# Note: this timestamp calculation has a rounding error
# somewhere... unzip and this timestamp can be one second off
# When that happens, we report a change and re-unzip the file
dt_object = datetime.datetime(*(time.strptime(pcs[6], '%Y%m%d.%H%M%S')[0:6]))
timestamp = time.mktime(dt_object.timetuple())
# Compare file timestamps
if stat.S_ISREG(st.st_mode):
if self.module.params['keep_newer']:
if timestamp > st.st_mtime:
change = True
self.includes.append(path)
err += 'File %s is older, replacing file\n' % path
itemized[4] = 't'
elif stat.S_ISREG(st.st_mode) and timestamp < st.st_mtime:
# Add to excluded files, ignore other changes
out += 'File %s is newer, excluding file\n' % path
self.excludes.append(path)
continue
else:
if timestamp != st.st_mtime:
change = True
self.includes.append(path)
err += 'File %s differs in mtime (%f vs %f)\n' % (path, timestamp, st.st_mtime)
itemized[4] = 't'
# Compare file sizes
if stat.S_ISREG(st.st_mode) and size != st.st_size:
change = True
err += 'File %s differs in size (%d vs %d)\n' % (path, size, st.st_size)
itemized[3] = 's'
# Compare file checksums
if stat.S_ISREG(st.st_mode):
crc = crc32(b_dest)
if crc != self._crc32(path):
change = True
err += 'File %s differs in CRC32 checksum (0x%08x vs 0x%08x)\n' % (path, self._crc32(path), crc)
itemized[2] = 'c'
# Compare file permissions
# Do not handle permissions of symlinks
if ftype != 'L':
# Use the new mode provided with the action, if there is one
if self.file_args['mode']:
if isinstance(self.file_args['mode'], int):
mode = self.file_args['mode']
else:
try:
mode = int(self.file_args['mode'], 8)
except Exception as e:
try:
mode = AnsibleModule._symbolic_mode_to_octal(st, self.file_args['mode'])
except ValueError as e:
self.module.fail_json(path=path, msg="%s" % to_native(e), exception=traceback.format_exc())
# Only special files require no umask-handling
elif ztype == '?':
mode = self._permstr_to_octal(permstr, 0)
else:
mode = self._permstr_to_octal(permstr, file_umask)
if mode != stat.S_IMODE(st.st_mode):
change = True
itemized[5] = 'p'
err += 'Path %s differs in permissions (%o vs %o)\n' % (path, mode, stat.S_IMODE(st.st_mode))
# Compare file user ownership
owner = uid = None
try:
owner = pwd.getpwuid(st.st_uid).pw_name
except (TypeError, KeyError):
uid = st.st_uid
# If we are not root and requested owner is not our user, fail
if run_uid != 0 and (fut_owner != run_owner or fut_uid != run_uid):
raise UnarchiveError('Cannot change ownership of %s to %s, as user %s' % (path, fut_owner, run_owner))
if owner and owner != fut_owner:
change = True
err += 'Path %s is owned by user %s, not by user %s as expected\n' % (path, owner, fut_owner)
itemized[6] = 'o'
elif uid and uid != fut_uid:
change = True
err += 'Path %s is owned by uid %s, not by uid %s as expected\n' % (path, uid, fut_uid)
itemized[6] = 'o'
# Compare file group ownership
group = gid = None
try:
group = grp.getgrgid(st.st_gid).gr_name
except (KeyError, ValueError, OverflowError):
gid = st.st_gid
if run_uid != 0 and (fut_group != run_group or fut_gid != run_gid) and fut_gid not in groups:
raise UnarchiveError('Cannot change group ownership of %s to %s, as user %s' % (path, fut_group, run_owner))
if group and group != fut_group:
change = True
err += 'Path %s is owned by group %s, not by group %s as expected\n' % (path, group, fut_group)
itemized[6] = 'g'
elif gid and gid != fut_gid:
change = True
err += 'Path %s is owned by gid %s, not by gid %s as expected\n' % (path, gid, fut_gid)
itemized[6] = 'g'
# Register changed files and finalize diff output
if change:
if path not in self.includes:
self.includes.append(path)
diff += '%s %s\n' % (''.join(itemized), path)
if self.includes:
unarchived = False
# DEBUG
# out = old_out + out
return dict(unarchived=unarchived, rc=rc, out=out, err=err, cmd=cmd, diff=diff)
def unarchive(self):
cmd = [self.cmd_path, '-o']
if self.opts:
cmd.extend(self.opts)
cmd.append(self.src)
# NOTE: Including (changed) files as arguments is problematic (limits on command line/arguments)
# if self.includes:
# NOTE: Command unzip has this strange behaviour where it expects quoted filenames to also be escaped
# cmd.extend(map(shell_escape, self.includes))
if self.excludes:
cmd.extend(['-x'] + self.excludes)
cmd.extend(['-d', self.b_dest])
rc, out, err = self.module.run_command(cmd)
return dict(cmd=cmd, rc=rc, out=out, err=err)
def can_handle_archive(self):
if not self.cmd_path:
return False, 'Command "unzip" not found.'
cmd = [self.cmd_path, '-l', self.src]
rc, out, err = self.module.run_command(cmd)
if rc == 0:
return True, None
return False, 'Command "%s" could not handle archive.' % self.cmd_path
class TgzArchive(object):
def __init__(self, src, b_dest, file_args, module):
self.src = src
self.b_dest = b_dest
self.file_args = file_args
self.opts = module.params['extra_opts']
self.module = module
if self.module.check_mode:
self.module.exit_json(skipped=True, msg="remote module (%s) does not support check mode when using gtar" % self.module._name)
self.excludes = [path.rstrip('/') for path in self.module.params['exclude']]
# Prefer gtar (GNU tar) as it supports the compression options -z, -j and -J
self.cmd_path = self.module.get_bin_path('gtar', None)
if not self.cmd_path:
# Fallback to tar
self.cmd_path = self.module.get_bin_path('tar')
self.zipflag = '-z'
self._files_in_archive = []
if self.cmd_path:
self.tar_type = self._get_tar_type()
else:
self.tar_type = None
def _get_tar_type(self):
cmd = [self.cmd_path, '--version']
(rc, out, err) = self.module.run_command(cmd)
tar_type = None
if out.startswith('bsdtar'):
tar_type = 'bsd'
elif out.startswith('tar') and 'GNU' in out:
tar_type = 'gnu'
return tar_type
@property
def files_in_archive(self):
if self._files_in_archive:
return self._files_in_archive
cmd = [self.cmd_path, '--list', '-C', self.b_dest]
if self.zipflag:
cmd.append(self.zipflag)
if self.opts:
cmd.extend(['--show-transformed-names'] + self.opts)
if self.excludes:
cmd.extend(['--exclude=' + f for f in self.excludes])
cmd.extend(['-f', self.src])
rc, out, err = self.module.run_command(cmd, cwd=self.b_dest, environ_update=dict(LANG='C', LC_ALL='C', LC_MESSAGES='C'))
if rc != 0:
raise UnarchiveError('Unable to list files in the archive')
for filename in out.splitlines():
# Compensate for locale-related problems in gtar output (octal unicode representation) #11348
# filename = filename.decode('string_escape')
filename = to_native(codecs.escape_decode(filename)[0])
# We don't allow absolute filenames. If the user wants to unarchive rooted in "/"
# they need to use "dest: '/'". This follows the defaults for gtar, pax, etc.
# Allowing absolute filenames here also causes bugs: https://github.com/ansible/ansible/issues/21397
if filename.startswith('/'):
filename = filename[1:]
exclude_flag = False
if self.excludes:
for exclude in self.excludes:
if fnmatch.fnmatch(filename, exclude):
exclude_flag = True
break
if not exclude_flag:
self._files_in_archive.append(to_native(filename))
return self._files_in_archive
def is_unarchived(self):
cmd = [self.cmd_path, '--diff', '-C', self.b_dest]
if self.zipflag:
cmd.append(self.zipflag)
if self.opts:
cmd.extend(['--show-transformed-names'] + self.opts)
if self.file_args['owner']:
cmd.append('--owner=' + quote(self.file_args['owner']))
if self.file_args['group']:
cmd.append('--group=' + quote(self.file_args['group']))
if self.module.params['keep_newer']:
cmd.append('--keep-newer-files')
if self.excludes:
cmd.extend(['--exclude=' + f for f in self.excludes])
cmd.extend(['-f', self.src])
rc, out, err = self.module.run_command(cmd, cwd=self.b_dest, environ_update=dict(LANG='C', LC_ALL='C', LC_MESSAGES='C'))
# Check whether the differences are in something that we're
# setting anyway
# What is different
unarchived = True
old_out = out
out = ''
run_uid = os.getuid()
# When unarchiving as a user, or when owner/group/mode is supplied --diff is insufficient
# Only way to be sure is to check request with what is on disk (as we do for zip)
# Leave this up to set_fs_attributes_if_different() instead of inducing a (false) change
for line in old_out.splitlines() + err.splitlines():
# FIXME: Remove the bogus lines from error-output as well !
# Ignore bogus errors on empty filenames (when using --strip-components)
if EMPTY_FILE_RE.search(line):
continue
if run_uid == 0 and not self.file_args['owner'] and OWNER_DIFF_RE.search(line):
out += line + '\n'
if run_uid == 0 and not self.file_args['group'] and GROUP_DIFF_RE.search(line):
out += line + '\n'
if not self.file_args['mode'] and MODE_DIFF_RE.search(line):
out += line + '\n'
if MOD_TIME_DIFF_RE.search(line):
out += line + '\n'
if MISSING_FILE_RE.search(line):
out += line + '\n'
if INVALID_OWNER_RE.search(line):
out += line + '\n'
if INVALID_GROUP_RE.search(line):
out += line + '\n'
if out:
unarchived = False
return dict(unarchived=unarchived, rc=rc, out=out, err=err, cmd=cmd)
def unarchive(self):
cmd = [self.cmd_path, '--extract', '-C', self.b_dest]
if self.zipflag:
cmd.append(self.zipflag)
if self.opts:
cmd.extend(['--show-transformed-names'] + self.opts)
if self.file_args['owner']:
cmd.append('--owner=' + quote(self.file_args['owner']))
if self.file_args['group']:
cmd.append('--group=' + quote(self.file_args['group']))
if self.module.params['keep_newer']:
cmd.append('--keep-newer-files')
if self.excludes:
cmd.extend(['--exclude=' + f for f in self.excludes])
cmd.extend(['-f', self.src])
rc, out, err = self.module.run_command(cmd, cwd=self.b_dest, environ_update=dict(LANG='C', LC_ALL='C', LC_MESSAGES='C'))
return dict(cmd=cmd, rc=rc, out=out, err=err)
def can_handle_archive(self):
if not self.cmd_path:
return False, 'Commands "gtar" and "tar" not found.'
if self.tar_type != 'gnu':
return False, 'Command "%s" detected as tar type %s. GNU tar required.' % (self.cmd_path, self.tar_type)
try:
if self.files_in_archive:
return True, None
except UnarchiveError:
return False, 'Command "%s" could not handle archive.' % self.cmd_path
# Errors and no files in archive assume that we weren't able to
# properly unarchive it
return False, 'Command "%s" found no files in archive. Empty archive files are not supported.' % self.cmd_path
# Class to handle tar files that aren't compressed
class TarArchive(TgzArchive):
def __init__(self, src, b_dest, file_args, module):
super(TarArchive, self).__init__(src, b_dest, file_args, module)
# argument to tar
self.zipflag = ''
# Class to handle bzip2 compressed tar files
class TarBzipArchive(TgzArchive):
def __init__(self, src, b_dest, file_args, module):
super(TarBzipArchive, self).__init__(src, b_dest, file_args, module)
self.zipflag = '-j'
# Class to handle xz compressed tar files
class TarXzArchive(TgzArchive):
def __init__(self, src, b_dest, file_args, module):
super(TarXzArchive, self).__init__(src, b_dest, file_args, module)
self.zipflag = '-J'
# try handlers in order and return the one that works or bail if none work
def pick_handler(src, dest, file_args, module):
handlers = [ZipArchive, TgzArchive, TarArchive, TarBzipArchive, TarXzArchive]
reasons = set()
for handler in handlers:
obj = handler(src, dest, file_args, module)
(can_handle, reason) = obj.can_handle_archive()
if can_handle:
return obj
reasons.add(reason)
reason_msg = ' '.join(reasons)
module.fail_json(msg='Failed to find handler for "%s". Make sure the required command to extract the file is installed. %s' % (src, reason_msg))
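# Note added for illustration (not in the original file): handlers are tried
# in the order listed above, so zip archives are claimed by ZipArchive and
# tarballs fall through to the Tgz/Tar/TarBzip/TarXz handlers; if none can
# handle the source, the task fails with all collected reasons joined.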
def main():
module = AnsibleModule(
# not checking because of daisy chain to file module
argument_spec=dict(
src=dict(type='path', required=True),
dest=dict(type='path', required=True),
remote_src=dict(type='bool', default=False),
creates=dict(type='path'),
list_files=dict(type='bool', default=False),
keep_newer=dict(type='bool', default=False),
exclude=dict(type='list', elements='str', default=[]),
extra_opts=dict(type='list', elements='str', default=[]),
validate_certs=dict(type='bool', default=True),
),
add_file_common_args=True,
# check-mode only works for zip files, we cover that later
supports_check_mode=True,
)
src = module.params['src']
dest = module.params['dest']
b_dest = to_bytes(dest, errors='surrogate_or_strict')
remote_src = module.params['remote_src']
file_args = module.load_file_common_arguments(module.params)
# did tar file arrive?
if not os.path.exists(src):
if not remote_src:
module.fail_json(msg="Source '%s' failed to transfer" % src)
# If remote_src=true, and src= contains ://, try and download the file to a temp directory.
elif '://' in src:
src = fetch_file(module, src)
else:
module.fail_json(msg="Source '%s' does not exist" % src)
if not os.access(src, os.R_OK):
module.fail_json(msg="Source '%s' not readable" % src)
# skip working with 0 size archives
try:
if os.path.getsize(src) == 0:
module.fail_json(msg="Invalid archive '%s', the file is 0 bytes" % src)
except Exception as e:
module.fail_json(msg="Source '%s' not readable, %s" % (src, to_native(e)))
# is dest OK to receive tar file?
if not os.path.isdir(b_dest):
module.fail_json(msg="Destination '%s' is not a directory" % dest)
handler = pick_handler(src, b_dest, file_args, module)
res_args = dict(handler=handler.__class__.__name__, dest=dest, src=src)
# do we need to do unpack?
check_results = handler.is_unarchived()
# DEBUG
# res_args['check_results'] = check_results
if module.check_mode:
res_args['changed'] = not check_results['unarchived']
elif check_results['unarchived']:
res_args['changed'] = False
else:
# do the unpack
try:
res_args['extract_results'] = handler.unarchive()
if res_args['extract_results']['rc'] != 0:
module.fail_json(msg="failed to unpack %s to %s" % (src, dest), **res_args)
except IOError:
module.fail_json(msg="failed to unpack %s to %s" % (src, dest), **res_args)
else:
res_args['changed'] = True
# Get diff if required
if check_results.get('diff', False):
res_args['diff'] = {'prepared': check_results['diff']}
# Run only if we found differences (idempotence) or diff was missing
if res_args.get('diff', True) and not module.check_mode:
# do we need to change perms?
for filename in handler.files_in_archive:
file_args['path'] = os.path.join(b_dest, to_bytes(filename, errors='surrogate_or_strict'))
try:
res_args['changed'] = module.set_fs_attributes_if_different(file_args, res_args['changed'], expand=False)
except (IOError, OSError) as e:
module.fail_json(msg="Unexpected error when accessing exploded file: %s" % to_native(e), **res_args)
if module.params['list_files']:
res_args['files'] = handler.files_in_archive
module.exit_json(**res_args)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,612 |
KeyError: 'PATH' on executing ping Module
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
I am trying to run the ping module on my hosts, but it returns an error. I will provide the YAML and hosts file below.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ping
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
ansible 2.5.1
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
DEFAULT_BECOME(/var/www/ansible_directory/ansible.cfg) = True
DEFAULT_BECOME_ASK_PASS(/var/www/ansible_directory/ansible.cfg) = False
DEFAULT_BECOME_METHOD(/var/www/ansible_directory/ansible.cfg) = sudo
DEFAULT_BECOME_USER(/var/www/ansible_directory/ansible.cfg) = root
DEFAULT_HOST_LIST(/var/www/ansible_directory/ansible.cfg) = [u'/var/www/ansible_directory/hosts']
DEFAULT_LOAD_CALLBACK_PLUGINS(/var/www/ansible_directory/ansible.cfg) = True
DEFAULT_REMOTE_USER(/var/www/ansible_directory/ansible.cfg) = bukanansible
DEFAULT_ROLES_PATH(/var/www/ansible_directory/ansible.cfg) = [u'/var/www/ansible_directory/roles']
DEFAULT_STDOUT_CALLBACK(/var/www/ansible_directory/ansible.cfg) = yaml
HOST_KEY_CHECKING(/var/www/ansible_directory/ansible.cfg) = False
```
##### OS / ENVIRONMENT
It should be Linux-based.
##### STEPS TO REPRODUCE
run
```
`shell_exec("cd /var/www/ansible_directory/ && ansible -m ping all -vvvv "`
```
on ScriptCase via onExecute.
Hosts file:
```
[&hosts_route&]
R01 ansible_host=something.com ansible_connection=network_cli ansible_port=2201 ansible_user=admin ansible_password= ansible_network_os=routeros
```
##### EXPECTED RESULTS
I can ping hosts
##### ACTUAL RESULTS
```
ansible 2.5.1
config file = /var/www/ansible_directory/ansible.cfg
configured module search path = [u'/var/www/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.17 (default, Sep 30 2020, 13:38:04) [GCC 7.5.0]
Using /var/www/ansible_directory/ansible.cfg as config file
setting up inventory plugins
Parsed /var/www/ansible_directory/hosts inventory source with ini plugin
Loading callback plugin yaml of type stdout, v2.0 from /usr/lib/python2.7/dist-packages/ansible/plugins/callback/yaml.pyc
PLAY [Ansible Ad-Hoc] **********************************************************
META: ran handlers
TASK [ping] ********************************************************************
attempting to start connection
using connection plugin network_cli
The full traceback is:
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py", line 138, in run
res = self._execute()
File "/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py", line 516, in _execute
self._connection = self._get_connection(variables=variables, templar=templar)
File "/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py", line 790, in _get_connection
socket_path = self._start_connection()
File "/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py", line 877, in _start_connection
[python, find_file_in_path('ansible-connection'), to_text(os.getppid())],
File "/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py", line 868, in find_file_in_path
paths = os.environ['PATH'].split(os.pathsep) + [os.path.dirname(sys.argv[0])]
File "/usr/lib/python2.7/UserDict.py", line 40, in __getitem__
raise KeyError(key)
KeyError: 'PATH'
fatal: [R01]: FAILED! =>
msg: Unexpected failure during module execution.
stdout: ''
PLAY RECAP *********************************************************************
R01 : ok=0 changed=0 unreachable=0 failed=1
```
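The traceback shows that `find_file_in_path` indexes `os.environ['PATH']` directly, which raises `KeyError` when the caller (here, PHP's `shell_exec` under a web server) supplies an environment without `PATH`. A defensive lookup along these lines avoids the crash; this is a sketch of the idea behind the `better_os_environ_access` fix, not necessarily the exact code merged in PR 72620:
```
import os
import sys

def find_file_in_path(filename):
    # Fall back to os.defpath when the environment has no PATH at all,
    # e.g. when ansible is launched from PHP's shell_exec().
    paths = os.environ.get('PATH', os.defpath).split(os.pathsep)
    paths.append(os.path.dirname(sys.argv[0]))
    for dirname in paths:
        fullpath = os.path.join(dirname, filename)
        if os.path.exists(fullpath):
            return fullpath
    raise RuntimeError("'%s' not found in PATH" % filename)
```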
|
https://github.com/ansible/ansible/issues/72612
|
https://github.com/ansible/ansible/pull/72620
|
ad4ddd8211ad140a371d23c8f871b1e5a0207548
|
07248e5ec1ed7cc7e2c8f77d9a2f635a58eca610
| 2020-11-13T03:09:44Z |
python
| 2020-11-17T17:09:46Z |
changelogs/fragments/better_os_environ_access.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,612 |
KeyError: 'PATH' on executing ping Module
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
I am trying to run the ping module on my hosts, but it returns an error. I will provide the YAML and hosts file below.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ping
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
ansible 2.5.1
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
DEFAULT_BECOME(/var/www/ansible_directory/ansible.cfg) = True
DEFAULT_BECOME_ASK_PASS(/var/www/ansible_directory/ansible.cfg) = False
DEFAULT_BECOME_METHOD(/var/www/ansible_directory/ansible.cfg) = sudo
DEFAULT_BECOME_USER(/var/www/ansible_directory/ansible.cfg) = root
DEFAULT_HOST_LIST(/var/www/ansible_directory/ansible.cfg) = [u'/var/www/ansible_directory/hosts']
DEFAULT_LOAD_CALLBACK_PLUGINS(/var/www/ansible_directory/ansible.cfg) = True
DEFAULT_REMOTE_USER(/var/www/ansible_directory/ansible.cfg) = bukanansible
DEFAULT_ROLES_PATH(/var/www/ansible_directory/ansible.cfg) = [u'/var/www/ansible_directory/roles']
DEFAULT_STDOUT_CALLBACK(/var/www/ansible_directory/ansible.cfg) = yaml
HOST_KEY_CHECKING(/var/www/ansible_directory/ansible.cfg) = False
```
##### OS / ENVIRONMENT
It should be Linux-based.
##### STEPS TO REPRODUCE
run
```
`shell_exec("cd /var/www/ansible_directory/ && ansible -m ping all -vvvv "`
```
on ScriptCase via onExecute.
Hosts file:
```
[&hosts_route&]
R01 ansible_host=something.com ansible_connection=network_cli ansible_port=2201 ansible_user=admin ansible_password= ansible_network_os=routeros
```
##### EXPECTED RESULTS
I can ping hosts
##### ACTUAL RESULTS
```
ansible 2.5.1
config file = /var/www/ansible_directory/ansible.cfg
configured module search path = [u'/var/www/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.17 (default, Sep 30 2020, 13:38:04) [GCC 7.5.0]
Using /var/www/ansible_directory/ansible.cfg as config file
setting up inventory plugins
Parsed /var/www/ansible_directory/hosts inventory source with ini plugin
Loading callback plugin yaml of type stdout, v2.0 from /usr/lib/python2.7/dist-packages/ansible/plugins/callback/yaml.pyc
PLAY [Ansible Ad-Hoc] **********************************************************
META: ran handlers
TASK [ping] ********************************************************************
attempting to start connection
using connection plugin network_cli
The full traceback is:
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py", line 138, in run
res = self._execute()
File "/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py", line 516, in _execute
self._connection = self._get_connection(variables=variables, templar=templar)
File "/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py", line 790, in _get_connection
socket_path = self._start_connection()
File "/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py", line 877, in _start_connection
[python, find_file_in_path('ansible-connection'), to_text(os.getppid())],
File "/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py", line 868, in find_file_in_path
paths = os.environ['PATH'].split(os.pathsep) + [os.path.dirname(sys.argv[0])]
File "/usr/lib/python2.7/UserDict.py", line 40, in __getitem__
raise KeyError(key)
KeyError: 'PATH'
fatal: [R01]: FAILED! =>
msg: Unexpected failure during module execution.
stdout: ''
PLAY RECAP *********************************************************************
R01 : ok=0 changed=0 unreachable=0 failed=1
```
|
https://github.com/ansible/ansible/issues/72612
|
https://github.com/ansible/ansible/pull/72620
|
ad4ddd8211ad140a371d23c8f871b1e5a0207548
|
07248e5ec1ed7cc7e2c8f77d9a2f635a58eca610
| 2020-11-13T03:09:44Z |
python
| 2020-11-17T17:09:46Z |
lib/ansible/executor/task_executor.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import re
import pty
import time
import json
import signal
import subprocess
import sys
import termios
import traceback
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleParserError, AnsibleUndefinedVariable, AnsibleConnectionFailure, AnsibleActionFail, AnsibleActionSkip
from ansible.executor.task_result import TaskResult
from ansible.executor.module_common import get_action_args_with_defaults
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.module_utils.six import iteritems, string_types, binary_type
from ansible.module_utils.six.moves import xrange
from ansible.module_utils._text import to_text, to_native
from ansible.module_utils.connection import write_to_file_descriptor
from ansible.playbook.conditional import Conditional
from ansible.playbook.task import Task
from ansible.plugins.loader import become_loader, cliconf_loader, connection_loader, httpapi_loader, netconf_loader, terminal_loader
from ansible.template import Templar
from ansible.utils.collection_loader import AnsibleCollectionConfig
from ansible.utils.listify import listify_lookup_plugin_terms
from ansible.utils.unsafe_proxy import to_unsafe_text, wrap_var
from ansible.vars.clean import namespace_facts, clean_facts
from ansible.utils.display import Display
from ansible.utils.vars import combine_vars, isidentifier
display = Display()
RETURN_VARS = [x for x in C.MAGIC_VARIABLE_MAPPING.items() if 'become' not in x and '_pass' not in x]
__all__ = ['TaskExecutor']
class TaskTimeoutError(BaseException):
pass
def task_timeout(signum, frame):
raise TaskTimeoutError
def remove_omit(task_args, omit_token):
'''
Recursively remove args whose value equals ``omit_token``,
to account for suboptions now being present in the argument_spec
'''
if not isinstance(task_args, dict):
return task_args
new_args = {}
for i in iteritems(task_args):
if i[1] == omit_token:
continue
elif isinstance(i[1], dict):
new_args[i[0]] = remove_omit(i[1], omit_token)
elif isinstance(i[1], list):
new_args[i[0]] = [remove_omit(v, omit_token) for v in i[1]]
else:
new_args[i[0]] = i[1]
return new_args
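# Usage sketch (comments added for illustration, not in the original file):
# with omit_token '__omit__',
#   remove_omit({'a': '__omit__', 'b': {'c': '__omit__', 'd': 1}}, '__omit__')
# returns {'b': {'d': 1}} -- omitted values are stripped at every depth.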
class TaskExecutor:
'''
This is the main worker class for the executor pipeline, which
handles loading an action plugin to actually dispatch the task to
a given host. This class roughly corresponds to the old Runner()
class.
'''
def __init__(self, host, task, job_vars, play_context, new_stdin, loader, shared_loader_obj, final_q):
self._host = host
self._task = task
self._job_vars = job_vars
self._play_context = play_context
self._new_stdin = new_stdin
self._loader = loader
self._shared_loader_obj = shared_loader_obj
self._connection = None
self._final_q = final_q
self._loop_eval_error = None
self._task.squash()
def run(self):
'''
The main executor entrypoint, where we determine if the specified
task requires looping and either runs the task with self._run_loop()
or self._execute(). After that, the returned results are parsed and
returned as a dict.
'''
display.debug("in run() - task %s" % self._task._uuid)
try:
try:
items = self._get_loop_items()
except AnsibleUndefinedVariable as e:
# save the error raised here for use later
items = None
self._loop_eval_error = e
if items is not None:
if len(items) > 0:
item_results = self._run_loop(items)
# create the overall result item
res = dict(results=item_results)
# loop through the item results and set the global changed/failed/skipped result flags based on any item.
res['skipped'] = True
for item in item_results:
if 'changed' in item and item['changed'] and not res.get('changed'):
res['changed'] = True
if res['skipped'] and ('skipped' not in item or ('skipped' in item and not item['skipped'])):
res['skipped'] = False
if 'failed' in item and item['failed']:
item_ignore = item.pop('_ansible_ignore_errors')
if not res.get('failed'):
res['failed'] = True
res['msg'] = 'One or more items failed'
self._task.ignore_errors = item_ignore
elif self._task.ignore_errors and not item_ignore:
self._task.ignore_errors = item_ignore
# ensure to accumulate these
for array in ['warnings', 'deprecations']:
if array in item and item[array]:
if array not in res:
res[array] = []
if not isinstance(item[array], list):
item[array] = [item[array]]
res[array] = res[array] + item[array]
del item[array]
if not res.get('failed', False):
res['msg'] = 'All items completed'
if res['skipped']:
res['msg'] = 'All items skipped'
else:
res = dict(changed=False, skipped=True, skipped_reason='No items in the list', results=[])
else:
display.debug("calling self._execute()")
res = self._execute()
display.debug("_execute() done")
# make sure changed is set in the result, if it's not present
if 'changed' not in res:
res['changed'] = False
def _clean_res(res, errors='surrogate_or_strict'):
if isinstance(res, binary_type):
return to_unsafe_text(res, errors=errors)
elif isinstance(res, dict):
for k in res:
try:
res[k] = _clean_res(res[k], errors=errors)
except UnicodeError:
if k == 'diff':
# If this is a diff, substitute a replacement character if the value
# is undecodable as utf8. (Fix #21804)
display.warning("We were unable to decode all characters in the module return data."
" Replaced some in an effort to return as much as possible")
res[k] = _clean_res(res[k], errors='surrogate_then_replace')
else:
raise
elif isinstance(res, list):
for idx, item in enumerate(res):
res[idx] = _clean_res(item, errors=errors)
return res
display.debug("dumping result to json")
res = _clean_res(res)
display.debug("done dumping result, returning")
return res
except AnsibleError as e:
return dict(failed=True, msg=wrap_var(to_text(e, nonstring='simplerepr')), _ansible_no_log=self._play_context.no_log)
except Exception as e:
return dict(failed=True, msg='Unexpected failure during module execution.', exception=to_text(traceback.format_exc()),
stdout='', _ansible_no_log=self._play_context.no_log)
finally:
try:
self._connection.close()
except AttributeError:
pass
except Exception as e:
display.debug(u"error closing connection: %s" % to_text(e))
def _get_loop_items(self):
'''
Loads a lookup plugin to handle the with_* portion of a task (if specified),
and returns the items result.
'''
# get search path for this task to pass to lookup plugins
self._job_vars['ansible_search_path'] = self._task.get_search_path()
# ensure basedir is always in (dwim already searches here but we need to display it)
if self._loader.get_basedir() not in self._job_vars['ansible_search_path']:
self._job_vars['ansible_search_path'].append(self._loader.get_basedir())
templar = Templar(loader=self._loader, shared_loader_obj=self._shared_loader_obj, variables=self._job_vars)
items = None
loop_cache = self._job_vars.get('_ansible_loop_cache')
if loop_cache is not None:
# _ansible_loop_cache may be set in `get_vars` when calculating `delegate_to`
# to avoid reprocessing the loop
items = loop_cache
elif self._task.loop_with:
if self._task.loop_with in self._shared_loader_obj.lookup_loader:
fail = True
if self._task.loop_with == 'first_found':
# first_found loops are special. If the item is undefined then we want to fall through to the next value rather than failing.
fail = False
loop_terms = listify_lookup_plugin_terms(terms=self._task.loop, templar=templar, loader=self._loader, fail_on_undefined=fail,
convert_bare=False)
if not fail:
loop_terms = [t for t in loop_terms if not templar.is_template(t)]
# get lookup
mylookup = self._shared_loader_obj.lookup_loader.get(self._task.loop_with, loader=self._loader, templar=templar)
# give lookup task 'context' for subdir (mostly needed for first_found)
for subdir in ['template', 'var', 'file']: # TODO: move this to constants?
if subdir in self._task.action:
break
setattr(mylookup, '_subdir', subdir + 's')
# run lookup
items = wrap_var(mylookup.run(terms=loop_terms, variables=self._job_vars, wantlist=True))
else:
raise AnsibleError("Unexpected failure in finding the lookup named '%s' in the available lookup plugins" % self._task.loop_with)
elif self._task.loop is not None:
items = templar.template(self._task.loop)
if not isinstance(items, list):
raise AnsibleError(
"Invalid data passed to 'loop', it requires a list, got this instead: %s."
" Hint: If you passed a list/dict of just one element,"
" try adding wantlist=True to your lookup invocation or use q/query instead of lookup." % items
)
return items
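# A minimal standalone sketch (not part of the original file) of the `loop:`
# validation above: after templating, anything that is not a list is rejected,
# which is why single-element lookups need wantlist=True or q/query. `render`
# stands in for templar.template.
def _sketch_resolve_loop(render, loop_expr):
    items = render(loop_expr)
    if not isinstance(items, list):
        raise ValueError("'loop' requires a list, got: %r" % (items,))
    return items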
def _run_loop(self, items):
'''
Runs the task with the loop items specified and collates the result
into an array named 'results' which is inserted into the final result
along with the item for which the loop ran.
'''
results = []
# make copies of the job vars and task so we can add the item to
# the variables and re-validate the task with the item variable
# task_vars = self._job_vars.copy()
task_vars = self._job_vars
loop_var = 'item'
index_var = None
label = None
loop_pause = 0
extended = False
templar = Templar(loader=self._loader, shared_loader_obj=self._shared_loader_obj, variables=self._job_vars)
# FIXME: move this to the object itself to allow post_validate to take care of templating (loop_control.post_validate)
if self._task.loop_control:
loop_var = templar.template(self._task.loop_control.loop_var)
index_var = templar.template(self._task.loop_control.index_var)
loop_pause = templar.template(self._task.loop_control.pause)
extended = templar.template(self._task.loop_control.extended)
# This may be 'None', so it is templated below after we ensure a value and an item is assigned
label = self._task.loop_control.label
# ensure we always have a label
if label is None:
label = '{{' + loop_var + '}}'
if loop_var in task_vars:
display.warning(u"The loop variable '%s' is already in use. "
u"You should set the `loop_var` value in the `loop_control` option for the task"
u" to something else to avoid variable collisions and unexpected behavior." % loop_var)
ran_once = False
no_log = False
items_len = len(items)
for item_index, item in enumerate(items):
task_vars['ansible_loop_var'] = loop_var
task_vars[loop_var] = item
if index_var:
task_vars['ansible_index_var'] = index_var
task_vars[index_var] = item_index
if extended:
task_vars['ansible_loop'] = {
'allitems': items,
'index': item_index + 1,
'index0': item_index,
'first': item_index == 0,
'last': item_index + 1 == items_len,
'length': items_len,
'revindex': items_len - item_index,
'revindex0': items_len - item_index - 1,
}
try:
task_vars['ansible_loop']['nextitem'] = items[item_index + 1]
except IndexError:
pass
if item_index - 1 >= 0:
task_vars['ansible_loop']['previtem'] = items[item_index - 1]
# Update template vars to reflect current loop iteration
templar.available_variables = task_vars
# pause between loop iterations
if loop_pause and ran_once:
try:
time.sleep(float(loop_pause))
except ValueError as e:
raise AnsibleError('Invalid pause value: %s, produced error: %s' % (loop_pause, to_native(e)))
else:
ran_once = True
try:
tmp_task = self._task.copy(exclude_parent=True, exclude_tasks=True)
tmp_task._parent = self._task._parent
tmp_play_context = self._play_context.copy()
except AnsibleParserError as e:
results.append(dict(failed=True, msg=to_text(e)))
continue
# now we swap the internal task and play context with their copies,
# execute, and swap them back so we can do the next iteration cleanly
(self._task, tmp_task) = (tmp_task, self._task)
(self._play_context, tmp_play_context) = (tmp_play_context, self._play_context)
res = self._execute(variables=task_vars)
task_fields = self._task.dump_attrs()
(self._task, tmp_task) = (tmp_task, self._task)
(self._play_context, tmp_play_context) = (tmp_play_context, self._play_context)
# update 'general no_log' based on specific no_log
no_log = no_log or tmp_task.no_log
# now update the result with the item info, and append the result
# to the list of results
res[loop_var] = item
res['ansible_loop_var'] = loop_var
if index_var:
res[index_var] = item_index
res['ansible_index_var'] = index_var
if extended:
res['ansible_loop'] = task_vars['ansible_loop']
res['_ansible_item_result'] = True
res['_ansible_ignore_errors'] = task_fields.get('ignore_errors')
# gets templated here unlike rest of loop_control fields, depends on loop_var above
try:
res['_ansible_item_label'] = templar.template(label, cache=False)
except AnsibleUndefinedVariable as e:
res.update({
'failed': True,
'msg': 'Failed to template loop_control.label: %s' % to_text(e)
})
self._final_q.send_task_result(
self._host.name,
self._task._uuid,
res,
task_fields=task_fields,
)
results.append(res)
del task_vars[loop_var]
# clear 'connection related' plugin variables for next iteration
if self._connection:
clear_plugins = {
'connection': self._connection._load_name,
'shell': self._connection._shell._load_name
}
if self._connection.become:
clear_plugins['become'] = self._connection.become._load_name
for plugin_type, plugin_name in iteritems(clear_plugins):
for var in C.config.get_plugin_vars(plugin_type, plugin_name):
if var in task_vars and var not in self._job_vars:
del task_vars[var]
self._task.no_log = no_log
return results
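# A standalone sketch (not part of the original file) of the
# `loop_control.extended` bookkeeping computed per iteration above; given the
# item list and a zero-based index it yields the same keys the executor puts
# into task_vars['ansible_loop'].
def _sketch_extended_loop_vars(items, item_index):
    items_len = len(items)
    info = {
        'allitems': items,
        'index': item_index + 1,
        'index0': item_index,
        'first': item_index == 0,
        'last': item_index + 1 == items_len,
        'length': items_len,
        'revindex': items_len - item_index,
        'revindex0': items_len - item_index - 1,
    }
    if item_index + 1 < items_len:
        info['nextitem'] = items[item_index + 1]
    if item_index > 0:
        info['previtem'] = items[item_index - 1]
    return info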
def _execute(self, variables=None):
'''
The primary workhorse of the executor system, this runs the task
on the specified host (which may be the delegated_to host) and handles
the retry/until and block rescue/always execution
'''
if variables is None:
variables = self._job_vars
templar = Templar(loader=self._loader, shared_loader_obj=self._shared_loader_obj, variables=variables)
context_validation_error = None
try:
# TODO: remove play_context as this does not take delegation into account, task itself should hold values
# for connection/shell/become/terminal plugin options to finalize.
# Kept for now for backwards compatibility and a few functions that are still exclusive to it.
# apply the given task's information to the connection info,
# which may override some fields already set by the play or
# the options specified on the command line
self._play_context = self._play_context.set_task_and_variable_override(task=self._task, variables=variables, templar=templar)
# fields set from the play/task may be based on variables, so we have to
# do the same kind of post validation step on it here before we use it.
self._play_context.post_validate(templar=templar)
# now that the play context is finalized, if the remote_addr is not set
# default to using the host's address field as the remote address
if not self._play_context.remote_addr:
self._play_context.remote_addr = self._host.address
# We also add "magic" variables back into the variables dict to make sure
# a certain subset of variables exist.
self._play_context.update_vars(variables)
except AnsibleError as e:
# save the error, which we'll raise later if we don't end up
# skipping this task during the conditional evaluation step
context_validation_error = e
# Evaluate the conditional (if any) for this task, which we do before running
# the final task post-validation. We do this before the post validation due to
# the fact that the conditional may specify that the task be skipped due to a
# variable not being present which would otherwise cause validation to fail
try:
if not self._task.evaluate_conditional(templar, variables):
display.debug("when evaluation is False, skipping this task")
return dict(changed=False, skipped=True, skip_reason='Conditional result was False', _ansible_no_log=self._play_context.no_log)
except AnsibleError as e:
# loop error takes precedence
if self._loop_eval_error is not None:
# Display the error from the conditional as well to prevent
# losing information useful for debugging.
display.v(to_text(e))
raise self._loop_eval_error # pylint: disable=raising-bad-type
raise
# Not skipping, if we had loop error raised earlier we need to raise it now to halt the execution of this task
if self._loop_eval_error is not None:
raise self._loop_eval_error # pylint: disable=raising-bad-type
# if we ran into an error while setting up the PlayContext, raise it now
if context_validation_error is not None:
raise context_validation_error # pylint: disable=raising-bad-type
# if this task is a TaskInclude, we just return now with a success code so the
# main thread can expand the task list for the given host
if self._task.action in C._ACTION_ALL_INCLUDE_TASKS:
include_args = self._task.args.copy()
include_file = include_args.pop('_raw_params', None)
if not include_file:
return dict(failed=True, msg="No include file was specified to the include")
include_file = templar.template(include_file)
return dict(include=include_file, include_args=include_args)
# if this task is an IncludeRole, we just return now with a success code so the main thread can expand the task list for the given host
elif self._task.action in C._ACTION_INCLUDE_ROLE:
include_args = self._task.args.copy()
return dict(include_args=include_args)
# Now we do final validation on the task, which sets all fields to their final values.
try:
self._task.post_validate(templar=templar)
except AnsibleError:
raise
except Exception:
return dict(changed=False, failed=True, _ansible_no_log=self._play_context.no_log, exception=to_text(traceback.format_exc()))
if '_variable_params' in self._task.args:
variable_params = self._task.args.pop('_variable_params')
if isinstance(variable_params, dict):
if C.INJECT_FACTS_AS_VARS:
display.warning("Using a variable for a task's 'args' is unsafe in some situations "
"(see https://docs.ansible.com/ansible/devel/reference_appendices/faq.html#argsplat-unsafe)")
variable_params.update(self._task.args)
self._task.args = variable_params
if self._task.delegate_to:
# use vars from delegated host (which already include task vars) instead of original host
cvars = variables.get('ansible_delegated_vars', {}).get(self._task.delegate_to, {})
orig_vars = templar.available_variables
else:
# just use normal host vars
cvars = orig_vars = variables
templar.available_variables = cvars
# get the connection and the handler for this execution
if (not self._connection or
not getattr(self._connection, 'connected', False) or
self._play_context.remote_addr != self._connection._play_context.remote_addr):
self._connection = self._get_connection(cvars, templar)
else:
# if connection is reused, its _play_context is no longer valid and needs
# to be replaced with the one templated above, in case other data changed
self._connection._play_context = self._play_context
plugin_vars = self._set_connection_options(cvars, templar)
templar.available_variables = orig_vars
# get handler
self._handler = self._get_action_handler(connection=self._connection, templar=templar)
# Apply default params for action/module, if present
self._task.args = get_action_args_with_defaults(
self._task.action, self._task.args, self._task.module_defaults, templar, self._task._ansible_internal_redirect_list
)
# And filter out any fields which were set to default(omit), and got the omit token value
omit_token = variables.get('omit')
if omit_token is not None:
self._task.args = remove_omit(self._task.args, omit_token)
# Read some values from the task, so that we can modify them if need be
if self._task.until:
retries = self._task.retries
if retries is None:
retries = 3
elif retries <= 0:
retries = 1
else:
retries += 1
else:
retries = 1
delay = self._task.delay
if delay < 0:
delay = 1
# make a copy of the job vars here, in case we need to update them
# with the registered variable value later on when testing conditions
vars_copy = variables.copy()
display.debug("starting attempt loop")
result = None
for attempt in xrange(1, retries + 1):
display.debug("running the handler")
try:
if self._task.timeout:
old_sig = signal.signal(signal.SIGALRM, task_timeout)
signal.alarm(self._task.timeout)
result = self._handler.run(task_vars=variables)
except AnsibleActionSkip as e:
return dict(skipped=True, msg=to_text(e))
except AnsibleActionFail as e:
return dict(failed=True, msg=to_text(e))
except AnsibleConnectionFailure as e:
return dict(unreachable=True, msg=to_text(e))
except TaskTimeoutError as e:
msg = 'The %s action failed to execute in the expected time frame (%d) and was terminated' % (self._task.action, self._task.timeout)
return dict(failed=True, msg=msg)
finally:
if self._task.timeout:
signal.alarm(0)
old_sig = signal.signal(signal.SIGALRM, old_sig)
self._handler.cleanup()
display.debug("handler run complete")
# preserve no log
result["_ansible_no_log"] = self._play_context.no_log
# update the local copy of vars with the registered value, if specified,
# or any facts which may have been generated by the module execution
if self._task.register:
if not isidentifier(self._task.register):
raise AnsibleError("Invalid variable name in 'register' specified: '%s'" % self._task.register)
vars_copy[self._task.register] = result = wrap_var(result)
if self._task.async_val > 0:
if self._task.poll > 0 and not result.get('skipped') and not result.get('failed'):
result = self._poll_async_result(result=result, templar=templar, task_vars=vars_copy)
# ensure no log is preserved
result["_ansible_no_log"] = self._play_context.no_log
# helper methods for use below in evaluating changed/failed_when
def _evaluate_changed_when_result(result):
if self._task.changed_when is not None and self._task.changed_when:
cond = Conditional(loader=self._loader)
cond.when = self._task.changed_when
result['changed'] = cond.evaluate_conditional(templar, vars_copy)
def _evaluate_failed_when_result(result):
if self._task.failed_when:
cond = Conditional(loader=self._loader)
cond.when = self._task.failed_when
failed_when_result = cond.evaluate_conditional(templar, vars_copy)
result['failed_when_result'] = result['failed'] = failed_when_result
else:
failed_when_result = False
return failed_when_result
if 'ansible_facts' in result:
if self._task.action in C._ACTION_WITH_CLEAN_FACTS:
vars_copy.update(result['ansible_facts'])
else:
# TODO: cleaning of facts should eventually become part of taskresults instead of vars
af = wrap_var(result['ansible_facts'])
vars_copy['ansible_facts'] = combine_vars(vars_copy.get('ansible_facts', {}), namespace_facts(af))
if C.INJECT_FACTS_AS_VARS:
vars_copy.update(clean_facts(af))
# set the failed property if it was missing.
if 'failed' not in result:
# rc is here for backwards compatibility and modules that use it instead of 'failed'
if 'rc' in result and result['rc'] not in [0, "0"]:
result['failed'] = True
else:
result['failed'] = False
# Make attempts and retries available early to allow their use in changed/failed_when
if self._task.until:
result['attempts'] = attempt
# set the changed property if it was missing.
if 'changed' not in result:
result['changed'] = False
# re-update the local copy of vars with the registered value, if specified,
# or any facts which may have been generated by the module execution
# This gives changed/failed_when access to additional recently modified
# attributes of result
if self._task.register:
vars_copy[self._task.register] = result = wrap_var(result)
# if we didn't skip this task, use the helpers to evaluate the changed/
# failed_when properties
if 'skipped' not in result:
_evaluate_changed_when_result(result)
_evaluate_failed_when_result(result)
if retries > 1:
cond = Conditional(loader=self._loader)
cond.when = self._task.until
if cond.evaluate_conditional(templar, vars_copy):
break
else:
# no conditional check, or it failed, so sleep for the specified time
if attempt < retries:
result['_ansible_retry'] = True
result['retries'] = retries
display.debug('Retrying task, attempt %d of %d' % (attempt, retries))
self._final_q.send_task_result(self._host.name, self._task._uuid, result, task_fields=self._task.dump_attrs())
time.sleep(delay)
self._handler = self._get_action_handler(connection=self._connection, templar=templar)
else:
if retries > 1:
# we ran out of attempts, so mark the result as failed
result['attempts'] = retries - 1
result['failed'] = True
# do the final update of the local variables here, for both registered
# values and any facts which may have been created
if self._task.register:
variables[self._task.register] = result = wrap_var(result)
if 'ansible_facts' in result:
if self._task.action in C._ACTION_WITH_CLEAN_FACTS:
variables.update(result['ansible_facts'])
else:
# TODO: cleaning of facts should eventually become part of taskresults instead of vars
af = wrap_var(result['ansible_facts'])
variables['ansible_facts'] = combine_vars(variables.get('ansible_facts', {}), namespace_facts(af))
if C.INJECT_FACTS_AS_VARS:
variables.update(clean_facts(af))
# save the notification target in the result, if it was specified, as
# this task may be running in a loop in which case the notification
# may be item-specific, ie. "notify: service {{item}}"
if self._task.notify is not None:
result['_ansible_notify'] = self._task.notify
# add the delegated vars to the result, so we can reference them
# on the results side without having to do any further templating
# also now add connection vars results when delegating
if self._task.delegate_to:
result["_ansible_delegated_vars"] = {'ansible_delegated_host': self._task.delegate_to}
for k in plugin_vars:
result["_ansible_delegated_vars"][k] = cvars.get(k)
# and return
display.debug("attempt loop complete, returning result")
return result
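# A standalone sketch (not part of the original file) of the retry accounting
# in _execute() above: `until` turns the task into an attempt loop, a missing
# retries value yields three total attempts, a non-positive value one, and a
# positive value N means N retries plus the initial attempt.
def _sketch_total_attempts(has_until, retries):
    if not has_until:
        return 1
    if retries is None:
        return 3
    if retries <= 0:
        return 1
    return retries + 1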
def _poll_async_result(self, result, templar, task_vars=None):
'''
Polls for the specified JID to be complete
'''
if task_vars is None:
task_vars = self._job_vars
async_jid = result.get('ansible_job_id')
if async_jid is None:
return dict(failed=True, msg="No job id was returned by the async task")
# Create a new pseudo-task to run the async_status module, and run
# that (with a sleep for "poll" seconds between each retry) until the
# async time limit is exceeded.
async_task = Task().load(dict(action='async_status jid=%s' % async_jid, environment=self._task.environment))
# FIXME: this is no longer the case, normal takes care of all, see if this can just be generalized
# Because this is an async task, the action handler is async. However,
# we need the 'normal' action handler for the status check, so get it
# now via the action_loader
async_handler = self._shared_loader_obj.action_loader.get(
'ansible.legacy.async_status',
task=async_task,
connection=self._connection,
play_context=self._play_context,
loader=self._loader,
templar=templar,
shared_loader_obj=self._shared_loader_obj,
)
time_left = self._task.async_val
while time_left > 0:
time.sleep(self._task.poll)
try:
async_result = async_handler.run(task_vars=task_vars)
# We do not bail out of the loop in cases where the failure
# is associated with a parsing error. The async_runner can
# have issues which result in a half-written/unparseable result
# file on disk, which manifests to the user as a timeout happening
# before it's time to timeout.
if (int(async_result.get('finished', 0)) == 1 or
('failed' in async_result and async_result.get('_ansible_parsed', False)) or
'skipped' in async_result):
break
except Exception as e:
# Connections can raise exceptions during polling (eg, network bounce, reboot); these should be non-fatal.
# On an exception, call the connection's reset method if it has one
# (eg, drop/recreate WinRM connection; some reused connections are in a broken state)
display.vvvv("Exception during async poll, retrying... (%s)" % to_text(e))
display.debug("Async poll exception was:\n%s" % to_text(traceback.format_exc()))
try:
async_handler._connection.reset()
except AttributeError:
pass
# Little hack to raise the exception if we've exhausted the timeout period
time_left -= self._task.poll
if time_left <= 0:
raise
else:
time_left -= self._task.poll
self._final_q.send_callback(
'v2_runner_on_async_poll',
TaskResult(
self._host,
async_task,
async_result,
task_fields=self._task.dump_attrs(),
),
)
if int(async_result.get('finished', 0)) != 1:
if async_result.get('_ansible_parsed'):
return dict(failed=True, msg="async task did not complete within the requested time - %ss" % self._task.async_val)
else:
return dict(failed=True, msg="async task produced unparseable results", async_result=async_result)
else:
async_handler.cleanup(force=True)
return async_result
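# A standalone sketch (not part of the original file) of the async time
# accounting above: the budget (async_val) is spent in poll-sized slices, and
# only exhausting it without the job finishing is reported as a timeout.
def _sketch_async_wait(async_val, poll, check_finished):
    time_left = async_val
    while time_left > 0:
        # each pass stands in for one poll interval (time.sleep(poll) above)
        if check_finished():
            return True
        time_left -= poll
    return False  # budget spent: the caller reports 'did not complete in time'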
def _get_become(self, name):
become = become_loader.get(name)
if not become:
raise AnsibleError("Invalid become method specified, could not find matching plugin: '%s'. "
"Use `ansible-doc -t become -l` to list available plugins." % name)
return become
def _get_connection(self, cvars, templar):
'''
Reads the connection property for the host, and returns the
correct connection object from the list of connection plugins
'''
# use magic var if it exists, if not, let task inheritance do its thing.
if cvars.get('ansible_connection') is not None:
self._play_context.connection = templar.template(cvars['ansible_connection'])
else:
self._play_context.connection = self._task.connection
# TODO: play context has logic to update the connection for 'smart'
# (default value, will choose between ssh and paramiko) and 'persistent'
# (really paramiko), eventually this should move to task object itself.
connection_name = self._play_context.connection
# load connection
conn_type = connection_name
connection, plugin_load_context = self._shared_loader_obj.connection_loader.get_with_context(
conn_type,
self._play_context,
self._new_stdin,
task_uuid=self._task._uuid,
ansible_playbook_pid=to_text(os.getppid())
)
if not connection:
raise AnsibleError("the connection plugin '%s' was not found" % conn_type)
# load become plugin if needed
if cvars.get('ansible_become') is not None:
become = boolean(templar.template(cvars['ansible_become']))
else:
become = self._task.become
if become:
if cvars.get('ansible_become_method'):
become_plugin = self._get_become(templar.template(cvars['ansible_become_method']))
else:
become_plugin = self._get_become(self._task.become_method)
try:
connection.set_become_plugin(become_plugin)
except AttributeError:
# Older connection plugin that does not support set_become_plugin
pass
if getattr(connection.become, 'require_tty', False) and not getattr(connection, 'has_tty', False):
raise AnsibleError(
"The '%s' connection does not provide a TTY which is required for the selected "
"become plugin: %s." % (conn_type, become_plugin.name)
)
# Backwards compat for connection plugins that don't support become plugins
# Just do this unconditionally for now, we could move it inside of the
# AttributeError above later
self._play_context.set_become_plugin(become_plugin.name)
# Also backwards compat call for those still using play_context
self._play_context.set_attributes_from_plugin(connection)
if any(((connection.supports_persistence and C.USE_PERSISTENT_CONNECTIONS), connection.force_persistence)):
self._play_context.timeout = connection.get_option('persistent_command_timeout')
display.vvvv('attempting to start connection', host=self._play_context.remote_addr)
display.vvvv('using connection plugin %s' % connection.transport, host=self._play_context.remote_addr)
options = self._get_persistent_connection_options(connection, cvars, templar)
socket_path = start_connection(self._play_context, options, self._task._uuid)
display.vvvv('local domain socket path is %s' % socket_path, host=self._play_context.remote_addr)
setattr(connection, '_socket_path', socket_path)
return connection
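# A standalone sketch (not part of the original file) of the precedence used
# above when resolving the connection plugin name: a templated
# `ansible_connection` variable wins, otherwise the value inherited on the
# task object is used.
def _sketch_connection_name(cvars, render, task_connection):
    if cvars.get('ansible_connection') is not None:
        return render(cvars['ansible_connection'])
    return task_connection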
def _get_persistent_connection_options(self, connection, final_vars, templar):
option_vars = C.config.get_plugin_vars('connection', connection._load_name)
plugin = connection._sub_plugin
if plugin.get('type'):
option_vars.extend(C.config.get_plugin_vars(plugin['type'], plugin['name']))
options = {}
for k in option_vars:
if k in final_vars:
options[k] = templar.template(final_vars[k])
return options
def _set_plugin_options(self, plugin_type, variables, templar, task_keys):
try:
plugin = getattr(self._connection, '_%s' % plugin_type)
except AttributeError:
# Some plugins are assigned to private attrs, ``become`` is not
plugin = getattr(self._connection, plugin_type)
option_vars = C.config.get_plugin_vars(plugin_type, plugin._load_name)
options = {}
for k in option_vars:
if k in variables:
options[k] = templar.template(variables[k])
# TODO move to task method?
plugin.set_options(task_keys=task_keys, var_options=options)
return option_vars
def _set_connection_options(self, variables, templar):
# keep list of variable names possibly consumed
varnames = []
# grab list of usable vars for this plugin
option_vars = C.config.get_plugin_vars('connection', self._connection._load_name)
varnames.extend(option_vars)
# create dict of 'templated vars'
options = {'_extras': {}}
for k in option_vars:
if k in variables:
options[k] = templar.template(variables[k])
# add extras if plugin supports them
if getattr(self._connection, 'allow_extras', False):
for k in variables:
if k.startswith('ansible_%s_' % self._connection._load_name) and k not in options:
options['_extras'][k] = templar.template(variables[k])
task_keys = self._task.dump_attrs()
# The task_keys 'timeout' attr is the task's timeout, not the connection timeout.
# The connection timeout is threaded through the play_context for now.
task_keys['timeout'] = self._play_context.timeout
if self._play_context.password:
# The connection password is threaded through the play_context for
# now. This is something we ultimately want to avoid, but the first
# step is to get connection plugins pulling the password through the
# config system instead of directly accessing play_context.
task_keys['password'] = self._play_context.password
# set options with 'templated vars' specific to this plugin and dependent ones
self._connection.set_options(task_keys=task_keys, var_options=options)
varnames.extend(self._set_plugin_options('shell', variables, templar, task_keys))
if self._connection.become is not None:
if self._play_context.become_pass:
# FIXME: eventually remove from task and play_context, here for backwards compat
# keep out of play objects to avoid accidental disclosure, only become plugin should have
# The become pass is already in the play_context if given on
# the CLI (-K). Make the plugin aware of it in this case.
task_keys['become_pass'] = self._play_context.become_pass
varnames.extend(self._set_plugin_options('become', variables, templar, task_keys))
# FOR BACKWARDS COMPAT:
for option in ('become_user', 'become_flags', 'become_exe', 'become_pass'):
try:
setattr(self._play_context, option, self._connection.become.get_option(option))
except KeyError:
pass # some plugins don't support all base flags
self._play_context.prompt = self._connection.become.prompt
return varnames
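# A standalone sketch (not part of the original file) of the option templating
# shared by _set_connection_options() and _set_plugin_options() above: the
# plugin's declared variable names are intersected with the available vars and
# the matches are templated.
def _sketch_template_plugin_options(option_vars, variables, render):
    return dict((k, render(variables[k])) for k in option_vars if k in variables)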
def _get_action_handler(self, connection, templar):
'''
Returns the correct action plugin to handle the requested task action
'''
module_collection, separator, module_name = self._task.action.rpartition(".")
module_prefix = module_name.split('_')[0]
if module_collection:
# For network modules, which look for one action plugin per platform, look for the
# action plugin in the same collection as the module by prefixing the action plugin
# with the same collection.
network_action = "{0}.{1}".format(module_collection, module_prefix)
else:
network_action = module_prefix
collections = self._task.collections
# let action plugin override module, fallback to 'normal' action plugin otherwise
if self._shared_loader_obj.action_loader.has_plugin(self._task.action, collection_list=collections):
handler_name = self._task.action
elif all((module_prefix in C.NETWORK_GROUP_MODULES, self._shared_loader_obj.action_loader.has_plugin(network_action, collection_list=collections))):
handler_name = network_action
display.vvvv("Using network group action {handler} for {action}".format(handler=handler_name,
action=self._task.action),
host=self._play_context.remote_addr)
else:
# use ansible.legacy.normal to allow (historic) local action_plugins/ override without collections search
handler_name = 'ansible.legacy.normal'
collections = None # until then, we don't want the task's collection list to be consulted; use the builtin
handler = self._shared_loader_obj.action_loader.get(
handler_name,
task=self._task,
connection=connection,
play_context=self._play_context,
loader=self._loader,
templar=templar,
shared_loader_obj=self._shared_loader_obj,
collection_list=collections
)
if not handler:
raise AnsibleError("the handler '%s' was not found" % handler_name)
return handler
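# A standalone sketch (not part of the original file) of the fallback order
# implemented above: a matching action plugin wins, then the per-platform
# network action, and finally the generic 'ansible.legacy.normal' handler.
def _sketch_handler_name(has_plugin, action, network_action, is_network_module):
    if has_plugin(action):
        return action
    if is_network_module and has_plugin(network_action):
        return network_action
    return 'ansible.legacy.normal'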
def start_connection(play_context, variables, task_uuid):
'''
Starts the persistent connection
'''
candidate_paths = [C.ANSIBLE_CONNECTION_PATH or os.path.dirname(sys.argv[0])]
candidate_paths.extend(os.environ['PATH'].split(os.pathsep))
for dirname in candidate_paths:
ansible_connection = os.path.join(dirname, 'ansible-connection')
if os.path.isfile(ansible_connection):
display.vvvv("Found ansible-connection at path {0}".format(ansible_connection))
break
else:
raise AnsibleError("Unable to find location of 'ansible-connection'. "
"Please set or check the value of ANSIBLE_CONNECTION_PATH")
env = os.environ.copy()
env.update({
# HACK; most of these paths may change during the controller's lifetime
# (eg, due to late dynamic role includes, multi-playbook execution), without a way
# to invalidate/update, ansible-connection won't always see the same plugins the controller
# can.
'ANSIBLE_BECOME_PLUGINS': become_loader.print_paths(),
'ANSIBLE_CLICONF_PLUGINS': cliconf_loader.print_paths(),
'ANSIBLE_COLLECTIONS_PATH': to_native(os.pathsep.join(AnsibleCollectionConfig.collection_paths)),
'ANSIBLE_CONNECTION_PLUGINS': connection_loader.print_paths(),
'ANSIBLE_HTTPAPI_PLUGINS': httpapi_loader.print_paths(),
'ANSIBLE_NETCONF_PLUGINS': netconf_loader.print_paths(),
'ANSIBLE_TERMINAL_PLUGINS': terminal_loader.print_paths(),
})
python = sys.executable
master, slave = pty.openpty()
p = subprocess.Popen(
[python, ansible_connection, to_text(os.getppid()), to_text(task_uuid)],
stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env
)
os.close(slave)
# We need to set the pty into noncanonical mode. This ensures that we
# can receive lines longer than 4095 characters (plus newline) without
# truncating.
old = termios.tcgetattr(master)
new = termios.tcgetattr(master)
new[3] = new[3] & ~termios.ICANON
try:
termios.tcsetattr(master, termios.TCSANOW, new)
write_to_file_descriptor(master, variables)
write_to_file_descriptor(master, play_context.serialize())
(stdout, stderr) = p.communicate()
finally:
termios.tcsetattr(master, termios.TCSANOW, old)
os.close(master)
if p.returncode == 0:
result = json.loads(to_text(stdout, errors='surrogate_then_replace'))
else:
try:
result = json.loads(to_text(stderr, errors='surrogate_then_replace'))
except getattr(json.decoder, 'JSONDecodeError', ValueError):
# JSONDecodeError only available on Python 3.5+
result = {'error': to_text(stderr, errors='surrogate_then_replace')}
if 'messages' in result:
for level, message in result['messages']:
if level == 'log':
display.display(message, log_only=True)
elif level in ('debug', 'v', 'vv', 'vvv', 'vvvv', 'vvvvv', 'vvvvvv'):
getattr(display, level)(message, host=play_context.remote_addr)
else:
if hasattr(display, level):
getattr(display, level)(message)
else:
display.vvvv(message, host=play_context.remote_addr)
if 'error' in result:
if play_context.verbosity > 2:
if result.get('exception'):
msg = "The full traceback is:\n" + result['exception']
display.display(msg, color=C.COLOR_ERROR)
raise AnsibleError(result['error'])
return result['socket_path']
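# A minimal standalone demonstration (not part of the original file) of the
# termios handling above: clearing ICANON on the master side of a pty lifts
# the 4095-character canonical-mode line limit.
def _sketch_noncanonical_pty():
    import pty
    import termios
    master, slave = pty.openpty()
    attrs = termios.tcgetattr(master)
    attrs[3] = attrs[3] & ~termios.ICANON  # index 3 is the lflag word
    termios.tcsetattr(master, termios.TCSANOW, attrs)
    return master, slave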
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,612 |
KeyError: 'PATH' on executing ping Module
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
I am trying to run the ping module on my hosts, but it returns an error. I will provide the YAML and hosts files below.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ping
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.5.1
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
DEFAULT_BECOME(/var/www/ansible_directory/ansible.cfg) = True
DEFAULT_BECOME_ASK_PASS(/var/www/ansible_directory/ansible.cfg) = False
DEFAULT_BECOME_METHOD(/var/www/ansible_directory/ansible.cfg) = sudo
DEFAULT_BECOME_USER(/var/www/ansible_directory/ansible.cfg) = root
DEFAULT_HOST_LIST(/var/www/ansible_directory/ansible.cfg) = [u'/var/www/ansible_directory/hosts']
DEFAULT_LOAD_CALLBACK_PLUGINS(/var/www/ansible_directory/ansible.cfg) = True
DEFAULT_REMOTE_USER(/var/www/ansible_directory/ansible.cfg) = bukanansible
DEFAULT_ROLES_PATH(/var/www/ansible_directory/ansible.cfg) = [u'/var/www/ansible_directory/roles']
DEFAULT_STDOUT_CALLBACK(/var/www/ansible_directory/ansible.cfg) = yaml
HOST_KEY_CHECKING(/var/www/ansible_directory/ansible.cfg) = False
```
##### OS / ENVIRONMENT
It should be Linux-based.
##### STEPS TO REPRODUCE
Run
```
shell_exec("cd /var/www/ansible_directory/ && ansible -m ping all -vvvv ")
```
on Scriptcase via onExecute.
Hosts file:
```
[&hosts_route&]
R01 ansible_host=something.com ansible_connection=network_cli ansible_port=2201 ansible_user=admin ansible_password= ansible_network_os=routeros
```
##### EXPECTED RESULTS
I can ping the hosts.
##### ACTUAL RESULTS
```
ansible 2.5.1
config file = /var/www/ansible_directory/ansible.cfg
configured module search path = [u'/var/www/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.17 (default, Sep 30 2020, 13:38:04) [GCC 7.5.0]
Using /var/www/ansible_directory/ansible.cfg as config file
setting up inventory plugins
Parsed /var/www/ansible_directory/hosts inventory source with ini plugin
Loading callback plugin yaml of type stdout, v2.0 from /usr/lib/python2.7/dist-packages/ansible/plugins/callback/yaml.pyc
PLAY [Ansible Ad-Hoc] **********************************************************
META: ran handlers
TASK [ping] ********************************************************************
attempting to start connection
using connection plugin network_cli
The full traceback is:
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py", line 138, in run
res = self._execute()
File "/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py", line 516, in _execute
self._connection = self._get_connection(variables=variables, templar=templar)
File "/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py", line 790, in _get_connection
socket_path = self._start_connection()
File "/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py", line 877, in _start_connection
[python, find_file_in_path('ansible-connection'), to_text(os.getppid())],
File "/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py", line 868, in find_file_in_path
paths = os.environ['PATH'].split(os.pathsep) + [os.path.dirname(sys.argv[0])]
File "/usr/lib/python2.7/UserDict.py", line 40, in __getitem__
raise KeyError(key)
KeyError: 'PATH'
fatal: [R01]: FAILED! =>
msg: Unexpected failure during module execution.
stdout: ''
PLAY RECAP *********************************************************************
R01 : ok=0 changed=0 unreachable=0 failed=1
```
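For reference, a defensive variant of the lookup that tolerates a missing `PATH` (a sketch only, not the merged patch; the helper name is illustrative):
```
import os
import sys

def find_ansible_connection(configured_path=None):
    # Try an explicitly configured location (or the executable's own
    # directory) first, then fall back to PATH if it is set at all.
    candidates = [configured_path or os.path.dirname(sys.argv[0])]
    candidates.extend(os.environ.get('PATH', '').split(os.pathsep))
    for dirname in candidates:
        candidate = os.path.join(dirname, 'ansible-connection')
        if os.path.isfile(candidate):
            return candidate
    return None
```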
|
https://github.com/ansible/ansible/issues/72612
|
https://github.com/ansible/ansible/pull/72620
|
ad4ddd8211ad140a371d23c8f871b1e5a0207548
|
07248e5ec1ed7cc7e2c8f77d9a2f635a58eca610
| 2020-11-13T03:09:44Z |
python
| 2020-11-17T17:09:46Z |
lib/ansible/module_utils/basic.py
|
# Copyright (c), Michael DeHaan <[email protected]>, 2012-2013
# Copyright (c), Toshio Kuratomi <[email protected]> 2016
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
FILE_ATTRIBUTES = {
'A': 'noatime',
'a': 'append',
'c': 'compressed',
'C': 'nocow',
'd': 'nodump',
'D': 'dirsync',
'e': 'extents',
'E': 'encrypted',
'h': 'blocksize',
'i': 'immutable',
'I': 'indexed',
'j': 'journalled',
'N': 'inline',
's': 'zero',
'S': 'synchronous',
't': 'notail',
'T': 'blockroot',
'u': 'undelete',
'X': 'compressedraw',
'Z': 'compresseddirty',
}
# Ansible modules can be written in any language.
# The functions available here can be used to do many common tasks,
# to simplify development of Python modules.
import __main__
import atexit
import errno
import datetime
import grp
import fcntl
import locale
import os
import pwd
import platform
import re
import select
import shlex
import shutil
import signal
import stat
import subprocess
import sys
import tempfile
import time
import traceback
import types
from collections import deque
from itertools import chain, repeat
try:
import syslog
HAS_SYSLOG = True
except ImportError:
HAS_SYSLOG = False
try:
from systemd import journal
# Double-check that journal has the sendv() method (some packages don't)
has_journal = hasattr(journal, 'sendv')
except ImportError:
has_journal = False
HAVE_SELINUX = False
try:
import selinux
HAVE_SELINUX = True
except ImportError:
pass
# Python2 & 3 way to get NoneType
NoneType = type(None)
from ansible.module_utils.compat import selectors
from ._text import to_native, to_bytes, to_text
from ansible.module_utils.common.text.converters import (
jsonify,
container_to_bytes as json_dict_unicode_to_bytes,
container_to_text as json_dict_bytes_to_unicode,
)
from ansible.module_utils.common.text.formatters import (
lenient_lowercase,
bytes_to_human,
human_to_bytes,
SIZE_RANGES,
)
try:
from ansible.module_utils.common._json_compat import json
except ImportError as e:
print('\n{{"msg": "Error: ansible requires the stdlib json: {0}", "failed": true}}'.format(to_native(e)))
sys.exit(1)
AVAILABLE_HASH_ALGORITHMS = dict()
try:
import hashlib
# python 2.7.9+ and 2.7.0+
for attribute in ('available_algorithms', 'algorithms'):
algorithms = getattr(hashlib, attribute, None)
if algorithms:
break
if algorithms is None:
# python 2.5+
algorithms = ('md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512')
for algorithm in algorithms:
AVAILABLE_HASH_ALGORITHMS[algorithm] = getattr(hashlib, algorithm)
# we may have been able to import md5 but it could still not be available
try:
hashlib.md5()
except ValueError:
AVAILABLE_HASH_ALGORITHMS.pop('md5', None)
except Exception:
import sha
AVAILABLE_HASH_ALGORITHMS = {'sha1': sha.sha}
try:
import md5
AVAILABLE_HASH_ALGORITHMS['md5'] = md5.md5
except Exception:
pass
from ansible.module_utils.common._collections_compat import (
KeysView,
Mapping, MutableMapping,
Sequence, MutableSequence,
Set, MutableSet,
)
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.file import (
_PERM_BITS as PERM_BITS,
_EXEC_PERM_BITS as EXEC_PERM_BITS,
_DEFAULT_PERM as DEFAULT_PERM,
is_executable,
format_attributes,
get_flags_from_attributes,
)
from ansible.module_utils.common.sys_info import (
get_distribution,
get_distribution_version,
get_platform_subclass,
)
from ansible.module_utils.pycompat24 import get_exception, literal_eval
from ansible.module_utils.common.parameters import (
handle_aliases,
list_deprecations,
list_no_log_values,
PASS_VARS,
PASS_BOOLS,
)
from ansible.module_utils.six import (
PY2,
PY3,
b,
binary_type,
integer_types,
iteritems,
string_types,
text_type,
)
from ansible.module_utils.six.moves import map, reduce, shlex_quote
from ansible.module_utils.common.validation import (
check_missing_parameters,
check_mutually_exclusive,
check_required_arguments,
check_required_by,
check_required_if,
check_required_one_of,
check_required_together,
count_terms,
check_type_bool,
check_type_bits,
check_type_bytes,
check_type_float,
check_type_int,
check_type_jsonarg,
check_type_list,
check_type_dict,
check_type_path,
check_type_raw,
check_type_str,
safe_eval,
)
from ansible.module_utils.common._utils import get_all_subclasses as _get_all_subclasses
from ansible.module_utils.parsing.convert_bool import BOOLEANS, BOOLEANS_FALSE, BOOLEANS_TRUE, boolean
from ansible.module_utils.common.warnings import (
deprecate,
get_deprecation_messages,
get_warning_messages,
warn,
)
# Note: When getting Sequence from collections, it matches with strings. If
# this matters, make sure to check for strings before checking for sequencetype
SEQUENCETYPE = frozenset, KeysView, Sequence
PASSWORD_MATCH = re.compile(r'^(?:.+[-_\s])?pass(?:[-_\s]?(?:word|phrase|wrd|wd)?)(?:[-_\s].+)?$', re.I)
imap = map
try:
# Python 2
unicode
except NameError:
# Python 3
unicode = text_type
try:
# Python 2
basestring
except NameError:
# Python 3
basestring = string_types
_literal_eval = literal_eval
# End of deprecated names
# Internal global holding passed in params. This is consulted in case
# multiple AnsibleModules are created. Otherwise each AnsibleModule would
# attempt to read from stdin. Other code should not use this directly as it
# is an internal implementation detail
_ANSIBLE_ARGS = None
FILE_COMMON_ARGUMENTS = dict(
# These are arguments for setting metadata (mode, ownership, permissions in general) on
# created files (they are used by set_fs_attributes_if_different and included in
# load_file_common_arguments)
mode=dict(type='raw'),
owner=dict(type='str'),
group=dict(type='str'),
seuser=dict(type='str'),
serole=dict(type='str'),
selevel=dict(type='str'),
setype=dict(type='str'),
attributes=dict(type='str', aliases=['attr']),
unsafe_writes=dict(type='bool', default=False), # should be available to any module using atomic_move
)
PASSWD_ARG_RE = re.compile(r'^[-]{0,2}pass[-]?(word|wd)?')
# Used for parsing symbolic file perms
MODE_OPERATOR_RE = re.compile(r'[+=-]')
USERS_RE = re.compile(r'[^ugo]')
PERMS_RE = re.compile(r'[^rwxXstugo]')
# Used for determining if the system is running a new enough python version
# and should only restrict on our documented minimum versions
_PY3_MIN = sys.version_info[:2] >= (3, 5)
_PY2_MIN = (2, 6) <= sys.version_info[:2] < (3,)
_PY_MIN = _PY3_MIN or _PY2_MIN
if not _PY_MIN:
print(
'\n{"failed": true, '
'"msg": "Ansible requires a minimum of Python2 version 2.6 or Python3 version 3.5. Current version: %s"}' % ''.join(sys.version.splitlines())
)
sys.exit(1)
#
# Deprecated functions
#
def get_platform():
'''
**Deprecated** Use :py:func:`platform.system` directly.
:returns: Name of the platform the module is running on in a native string
Returns a native string that labels the platform ("Linux", "Solaris", etc). Currently, this is
the result of calling :py:func:`platform.system`.
'''
return platform.system()
# End deprecated functions
#
# Compat shims
#
def load_platform_subclass(cls, *args, **kwargs):
"""**Deprecated**: Use ansible.module_utils.common.sys_info.get_platform_subclass instead"""
platform_cls = get_platform_subclass(cls)
return super(cls, platform_cls).__new__(platform_cls)
def get_all_subclasses(cls):
"""**Deprecated**: Use ansible.module_utils.common._utils.get_all_subclasses instead"""
return list(_get_all_subclasses(cls))
# End compat shims
def _remove_values_conditions(value, no_log_strings, deferred_removals):
"""
Helper function for :meth:`remove_values`.
:arg value: The value to check for strings that need to be stripped
:arg no_log_strings: set of strings which must be stripped out of any values
:arg deferred_removals: List which holds information about nested
containers that have to be iterated for removals. It is passed into
this function so that more entries can be added to it if value is
a container type. The format of each entry is a 2-tuple where the first
element is the ``value`` parameter and the second value is a new
container to copy the elements of ``value`` into once iterated.
:returns: if ``value`` is a scalar, returns ``value`` with two exceptions:
1. :class:`~datetime.datetime` objects which are changed into a string representation.
2. objects which are in no_log_strings are replaced with a placeholder
so that no sensitive data is leaked.
If ``value`` is a container type, returns a new empty container.
``deferred_removals`` is added to as a side-effect of this function.
.. warning:: It is up to the caller to make sure the order in which value
is passed in is correct. For instance, higher level containers need
to be passed in before lower level containers. For example, given
``{'level1': {'level2': {'level3': [True]}}}`` first pass in the
dictionary for ``level1``, then the dict for ``level2``, and finally
the list for ``level3``.
"""
if isinstance(value, (text_type, binary_type)):
# Need native str type
native_str_value = value
if isinstance(value, text_type):
value_is_text = True
if PY2:
native_str_value = to_bytes(value, errors='surrogate_or_strict')
elif isinstance(value, binary_type):
value_is_text = False
if PY3:
native_str_value = to_text(value, errors='surrogate_or_strict')
if native_str_value in no_log_strings:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
for omit_me in no_log_strings:
native_str_value = native_str_value.replace(omit_me, '*' * 8)
if value_is_text and isinstance(native_str_value, binary_type):
value = to_text(native_str_value, encoding='utf-8', errors='surrogate_then_replace')
elif not value_is_text and isinstance(native_str_value, text_type):
value = to_bytes(native_str_value, encoding='utf-8', errors='surrogate_then_replace')
else:
value = native_str_value
elif isinstance(value, Sequence):
if isinstance(value, MutableSequence):
new_value = type(value)()
else:
new_value = [] # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, Set):
if isinstance(value, MutableSet):
new_value = type(value)()
else:
new_value = set() # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, Mapping):
if isinstance(value, MutableMapping):
new_value = type(value)()
else:
new_value = {} # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, tuple(chain(integer_types, (float, bool, NoneType)))):
stringy_value = to_native(value, encoding='utf-8', errors='surrogate_or_strict')
if stringy_value in no_log_strings:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
for omit_me in no_log_strings:
if omit_me in stringy_value:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
elif isinstance(value, (datetime.datetime, datetime.date)):
value = value.isoformat()
else:
raise TypeError('Value of unknown type: %s, %s' % (type(value), value))
return value
def remove_values(value, no_log_strings):
""" Remove strings in no_log_strings from value. If value is a container
type, then remove a lot more.
Use of deferred_removals exists, rather than a pure recursive solution,
because of the potential to hit the maximum recursion depth when dealing with
large amounts of data (see issue #24560).
"""
deferred_removals = deque()
no_log_strings = [to_native(s, errors='surrogate_or_strict') for s in no_log_strings]
new_value = _remove_values_conditions(value, no_log_strings, deferred_removals)
while deferred_removals:
old_data, new_data = deferred_removals.popleft()
if isinstance(new_data, Mapping):
for old_key, old_elem in old_data.items():
new_elem = _remove_values_conditions(old_elem, no_log_strings, deferred_removals)
new_data[old_key] = new_elem
else:
for elem in old_data:
new_elem = _remove_values_conditions(elem, no_log_strings, deferred_removals)
if isinstance(new_data, MutableSequence):
new_data.append(new_elem)
elif isinstance(new_data, MutableSet):
new_data.add(new_elem)
else:
raise TypeError('Unknown container type encountered when removing private values from output')
return new_value
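# A hedged usage sketch (not part of the original file): exact matches are
# replaced with the placeholder, substring matches are masked with asterisks,
# and the deferred-removal queue keeps deep nesting iterative.
def _sketch_remove_values_demo():
    data = {'stdout': 'token=hunter2', 'nested': [['hunter2']]}
    return remove_values(data, {'hunter2'})
    # -> {'stdout': 'token=********',
    #     'nested': [['VALUE_SPECIFIED_IN_NO_LOG_PARAMETER']]}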
def _sanitize_keys_conditions(value, no_log_strings, ignore_keys, deferred_removals):
""" Helper method to sanitize_keys() to build deferred_removals and avoid deep recursion. """
if isinstance(value, (text_type, binary_type)):
return value
if isinstance(value, Sequence):
if isinstance(value, MutableSequence):
new_value = type(value)()
else:
new_value = [] # Need a mutable value
deferred_removals.append((value, new_value))
return new_value
if isinstance(value, Set):
if isinstance(value, MutableSet):
new_value = type(value)()
else:
new_value = set() # Need a mutable value
deferred_removals.append((value, new_value))
return new_value
if isinstance(value, Mapping):
if isinstance(value, MutableMapping):
new_value = type(value)()
else:
new_value = {} # Need a mutable value
deferred_removals.append((value, new_value))
return new_value
if isinstance(value, tuple(chain(integer_types, (float, bool, NoneType)))):
return value
if isinstance(value, (datetime.datetime, datetime.date)):
return value
raise TypeError('Value of unknown type: %s, %s' % (type(value), value))
def sanitize_keys(obj, no_log_strings, ignore_keys=frozenset()):
""" Sanitize the keys in a container object by removing no_log values from key names.
This is a companion function to the `remove_values()` function. Similar to that function,
we make use of deferred_removals to avoid hitting maximum recursion depth in cases of
large data structures.
:param obj: The container object to sanitize. Non-container objects are returned unmodified.
:param no_log_strings: A set of string values we do not want logged.
:param ignore_keys: A set of string values of keys to not sanitize.
:returns: An object with sanitized keys.
"""
deferred_removals = deque()
no_log_strings = [to_native(s, errors='surrogate_or_strict') for s in no_log_strings]
new_value = _sanitize_keys_conditions(obj, no_log_strings, ignore_keys, deferred_removals)
while deferred_removals:
old_data, new_data = deferred_removals.popleft()
if isinstance(new_data, Mapping):
for old_key, old_elem in old_data.items():
if old_key in ignore_keys or old_key.startswith('_ansible'):
new_data[old_key] = _sanitize_keys_conditions(old_elem, no_log_strings, ignore_keys, deferred_removals)
else:
# Sanitize the old key. We take advantage of the sanitizing code in
# _remove_values_conditions() rather than recreating it here.
new_key = _remove_values_conditions(old_key, no_log_strings, None)
new_data[new_key] = _sanitize_keys_conditions(old_elem, no_log_strings, ignore_keys, deferred_removals)
else:
for elem in old_data:
new_elem = _sanitize_keys_conditions(elem, no_log_strings, ignore_keys, deferred_removals)
if isinstance(new_data, MutableSequence):
new_data.append(new_elem)
elif isinstance(new_data, MutableSet):
new_data.add(new_elem)
else:
raise TypeError('Unknown container type encountered when removing private values from keys')
return new_value
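# A hedged usage sketch (not part of the original file): keys are scrubbed
# through _remove_values_conditions(), while keys in ignore_keys or starting
# with '_ansible' are left untouched.
def _sketch_sanitize_keys_demo():
    data = {'secret-key': 'value', '_ansible_internal': 'kept'}
    return sanitize_keys(data, {'secret'})
    # -> {'********-key': 'value', '_ansible_internal': 'kept'}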
def heuristic_log_sanitize(data, no_log_values=None):
''' Remove strings that look like passwords from log messages '''
# Currently filters:
# user:pass@foo/whatever and http://username:pass@wherever/foo
# This code has false positives and consumes parts of logs that are
# not passwds
# begin: start of a passwd containing string
# end: end of a passwd containing string
# sep: char between user and passwd
# prev_begin: where in the overall string to start a search for
# a passwd
# sep_search_end: where in the string to end a search for the sep
data = to_native(data)
output = []
begin = len(data)
prev_begin = begin
sep = 1
while sep:
# Find the potential end of a passwd
try:
end = data.rindex('@', 0, begin)
except ValueError:
# No passwd in the rest of the data
output.insert(0, data[0:begin])
break
# Search for the beginning of a passwd
sep = None
sep_search_end = end
while not sep:
# URL-style username+password
try:
begin = data.rindex('://', 0, sep_search_end)
except ValueError:
# No url style in the data, check for ssh style in the
# rest of the string
begin = 0
# Search for separator
try:
sep = data.index(':', begin + 3, end)
except ValueError:
# No separator; choices:
if begin == 0:
# Searched the whole string so there's no password
# here. Return the remaining data
output.insert(0, data[0:begin])
break
# Search for a different beginning of the password field.
sep_search_end = begin
continue
if sep:
# Password was found; remove it.
output.insert(0, data[end:prev_begin])
output.insert(0, '********')
output.insert(0, data[begin:sep + 1])
prev_begin = begin
output = ''.join(output)
if no_log_values:
output = remove_values(output, no_log_values)
return output
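# A hedged usage sketch (not part of the original file): the reverse scan
# above masks URL-style credentials while leaving the rest of the message
# intact.
def _sketch_heuristic_demo():
    return heuristic_log_sanitize('fetching http://user:secret@host/path')
    # -> 'fetching http://user:********@host/path'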
def _load_params():
''' read the modules parameters and store them globally.
This function may be needed for certain very dynamic custom modules which
want to process the parameters that are being handed to the module. Since
this is so closely tied to the implementation of modules we cannot
guarantee API stability for it (it may change between versions) however we
will try not to break it gratuitously. It is certainly more future-proof
to call this function and consume its outputs than to implement the logic
inside it as a copy in your own code.
'''
global _ANSIBLE_ARGS
if _ANSIBLE_ARGS is not None:
buffer = _ANSIBLE_ARGS
else:
# debug overrides to read args from file or cmdline
# Avoid tracebacks when locale is non-utf8
# We control the args and we pass them as utf8
if len(sys.argv) > 1:
if os.path.isfile(sys.argv[1]):
fd = open(sys.argv[1], 'rb')
buffer = fd.read()
fd.close()
else:
buffer = sys.argv[1]
if PY3:
buffer = buffer.encode('utf-8', errors='surrogateescape')
# default case, read from stdin
else:
if PY2:
buffer = sys.stdin.read()
else:
buffer = sys.stdin.buffer.read()
_ANSIBLE_ARGS = buffer
try:
params = json.loads(buffer.decode('utf-8'))
except ValueError:
# This helper is used too early for fail_json to work.
print('\n{"msg": "Error: Module unable to decode valid JSON on stdin. Unable to figure out what parameters were passed", "failed": true}')
sys.exit(1)
if PY2:
params = json_dict_unicode_to_bytes(params)
try:
return params['ANSIBLE_MODULE_ARGS']
except KeyError:
# This helper does not have access to fail_json so we have to print
# json output on our own.
print('\n{"msg": "Error: Module unable to locate ANSIBLE_MODULE_ARGS in json data from stdin. Unable to figure out what parameters were passed", '
'"failed": true}')
sys.exit(1)
def env_fallback(*args, **kwargs):
''' Load value from environment '''
for arg in args:
if arg in os.environ:
return os.environ[arg]
raise AnsibleFallbackNotFound
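# A hedged usage sketch (not part of the original file; the parameter and
# environment variable names are illustrative): env_fallback is wired into an
# argument_spec as a (function, args) tuple, consumed during module init.
def _sketch_env_fallback_spec():
    return dict(
        api_key=dict(type='str', no_log=True,
                     fallback=(env_fallback, ['MYMODULE_API_KEY'])),
    )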
def missing_required_lib(library, reason=None, url=None):
hostname = platform.node()
msg = "Failed to import the required Python library (%s) on %s's Python %s." % (library, hostname, sys.executable)
if reason:
msg += " This is required %s." % reason
if url:
msg += " See %s for more info." % url
msg += (" Please read the module documentation and install it in the appropriate location."
" If the required library is installed, but Ansible is using the wrong Python interpreter,"
" please consult the documentation on ansible_python_interpreter")
return msg
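# A hedged usage sketch (not part of the original file; the library name and
# reason are illustrative): modules typically record the ImportError at import
# time and call fail_json with this message once they have a module object.
def _sketch_require_lib(module, has_requests):
    if not has_requests:
        module.fail_json(msg=missing_required_lib('requests',
                                                  reason='to talk to the REST API'))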
class AnsibleFallbackNotFound(Exception):
pass
class AnsibleModule(object):
def __init__(self, argument_spec, bypass_checks=False, no_log=False,
mutually_exclusive=None, required_together=None,
required_one_of=None, add_file_common_args=False,
supports_check_mode=False, required_if=None, required_by=None):
'''
Common code for quickly building an ansible module in Python
(although you can write modules with anything that can return JSON).
See :ref:`developing_modules_general` for a general introduction
and :ref:`developing_program_flow_modules` for more detailed explanation.
'''
self._name = os.path.basename(__file__) # initialize name until we can parse from options
self.argument_spec = argument_spec
self.supports_check_mode = supports_check_mode
self.check_mode = False
self.bypass_checks = bypass_checks
self.no_log = no_log
self.mutually_exclusive = mutually_exclusive
self.required_together = required_together
self.required_one_of = required_one_of
self.required_if = required_if
self.required_by = required_by
self.cleanup_files = []
self._debug = False
self._diff = False
self._socket_path = None
self._shell = None
self._verbosity = 0
# May be used to set modifications to the environment for any
# run_command invocation
self.run_command_environ_update = {}
self._clean = {}
self._string_conversion_action = ''
self.aliases = {}
self._legal_inputs = []
self._options_context = list()
self._tmpdir = None
if add_file_common_args:
for k, v in FILE_COMMON_ARGUMENTS.items():
if k not in self.argument_spec:
self.argument_spec[k] = v
self._load_params()
self._set_fallbacks()
# append to legal_inputs and then possibly check against them
try:
self.aliases = self._handle_aliases()
except (ValueError, TypeError) as e:
# Use exceptions here because it isn't safe to call fail_json until no_log is processed
print('\n{"failed": true, "msg": "Module alias error: %s"}' % to_native(e))
sys.exit(1)
# Save parameter values that should never be logged
self.no_log_values = set()
self._handle_no_log_values()
# check the locale as set by the current environment, and reset to
# a known valid (LANG=C) if it's an invalid/unavailable locale
self._check_locale()
self._check_arguments()
# check exclusive early
if not bypass_checks:
self._check_mutually_exclusive(mutually_exclusive)
self._set_defaults(pre=True)
self._CHECK_ARGUMENT_TYPES_DISPATCHER = {
'str': self._check_type_str,
'list': self._check_type_list,
'dict': self._check_type_dict,
'bool': self._check_type_bool,
'int': self._check_type_int,
'float': self._check_type_float,
'path': self._check_type_path,
'raw': self._check_type_raw,
'jsonarg': self._check_type_jsonarg,
'json': self._check_type_jsonarg,
'bytes': self._check_type_bytes,
'bits': self._check_type_bits,
}
if not bypass_checks:
self._check_required_arguments()
self._check_argument_types()
self._check_argument_values()
self._check_required_together(required_together)
self._check_required_one_of(required_one_of)
self._check_required_if(required_if)
self._check_required_by(required_by)
self._set_defaults(pre=False)
# deal with options sub-spec
self._handle_options()
if not self.no_log:
self._log_invocation()
# finally, make sure we're in a sane working dir
self._set_cwd()
@property
def tmpdir(self):
# if _ansible_tmpdir was not set and we have a remote_tmp,
# the module needs to create it and clean it up once finished.
# otherwise we create our own module tmp dir from the system defaults
if self._tmpdir is None:
basedir = None
if self._remote_tmp is not None:
basedir = os.path.expanduser(os.path.expandvars(self._remote_tmp))
if basedir is not None and not os.path.exists(basedir):
try:
os.makedirs(basedir, mode=0o700)
except (OSError, IOError) as e:
self.warn("Unable to use %s as temporary directory, "
"failing back to system: %s" % (basedir, to_native(e)))
basedir = None
else:
self.warn("Module remote_tmp %s did not exist and was "
"created with a mode of 0700, this may cause"
" issues when running as another user. To "
"avoid this, create the remote_tmp dir with "
"the correct permissions manually" % basedir)
basefile = "ansible-moduletmp-%s-" % time.time()
try:
tmpdir = tempfile.mkdtemp(prefix=basefile, dir=basedir)
except (OSError, IOError) as e:
self.fail_json(
msg="Failed to create remote module tmp path at dir %s "
"with prefix %s: %s" % (basedir, basefile, to_native(e))
)
if not self._keep_remote_files:
atexit.register(shutil.rmtree, tmpdir)
self._tmpdir = tmpdir
return self._tmpdir
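# Illustrative usage (not part of the original source): because this property
# lazily creates the directory and registers cleanup via atexit, module code
# can simply build paths under it:
#
#   scratch = os.path.join(module.tmpdir, 'download.partial')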
def warn(self, warning):
warn(warning)
self.log('[WARNING] %s' % warning)
def deprecate(self, msg, version=None, date=None, collection_name=None):
if version is not None and date is not None:
raise AssertionError("implementation error -- version and date must not both be set")
deprecate(msg, version=version, date=date, collection_name=collection_name)
# For compatibility, we accept that neither version nor date is set,
# and treat that the same as if version would have been set
if date is not None:
self.log('[DEPRECATION WARNING] %s %s' % (msg, date))
else:
self.log('[DEPRECATION WARNING] %s %s' % (msg, version))
def load_file_common_arguments(self, params, path=None):
'''
many modules deal with files, this encapsulates common
options that the file module accepts such that it is directly
available to all modules and they can share code.
Allows to overwrite the path/dest module argument by providing path.
'''
if path is None:
path = params.get('path', params.get('dest', None))
if path is None:
return {}
else:
path = os.path.expanduser(os.path.expandvars(path))
b_path = to_bytes(path, errors='surrogate_or_strict')
# if the path is a symlink, and we're following links, get
# the target of the link instead for testing
if params.get('follow', False) and os.path.islink(b_path):
b_path = os.path.realpath(b_path)
path = to_native(b_path)
mode = params.get('mode', None)
owner = params.get('owner', None)
group = params.get('group', None)
# selinux related options
seuser = params.get('seuser', None)
serole = params.get('serole', None)
setype = params.get('setype', None)
selevel = params.get('selevel', None)
secontext = [seuser, serole, setype]
if self.selinux_mls_enabled():
secontext.append(selevel)
default_secontext = self.selinux_default_context(path)
for i in range(len(default_secontext)):
if default_secontext[i] is not None and secontext[i] == '_default':
secontext[i] = default_secontext[i]
attributes = params.get('attributes', None)
return dict(
path=path, mode=mode, owner=owner, group=group,
seuser=seuser, serole=serole, setype=setype,
selevel=selevel, secontext=secontext, attributes=attributes,
)
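# Illustrative usage (not part of the original source): file-oriented modules
# typically feed the result straight into set_fs_attributes_if_different():
#
#   file_args = module.load_file_common_arguments(module.params)
#   changed = module.set_fs_attributes_if_different(file_args, changed)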
# Detect whether using selinux that is MLS-aware.
# While this means you can set the level/range with
# selinux.lsetfilecon(), it may or may not mean that you
# will get the selevel as part of the context returned
# by selinux.lgetfilecon().
def selinux_mls_enabled(self):
if not HAVE_SELINUX:
return False
if selinux.is_selinux_mls_enabled() == 1:
return True
else:
return False
def selinux_enabled(self):
if not HAVE_SELINUX:
seenabled = self.get_bin_path('selinuxenabled')
if seenabled is not None:
(rc, out, err) = self.run_command(seenabled)
if rc == 0:
self.fail_json(msg="Aborting, target uses selinux but python bindings (libselinux-python) aren't installed!")
return False
if selinux.is_selinux_enabled() == 1:
return True
else:
return False
# Determine whether we need a placeholder for selevel/mls
def selinux_initial_context(self):
context = [None, None, None]
if self.selinux_mls_enabled():
context.append(None)
return context
# If selinux fails to find a default, return an array of None
def selinux_default_context(self, path, mode=0):
context = self.selinux_initial_context()
if not HAVE_SELINUX or not self.selinux_enabled():
return context
try:
ret = selinux.matchpathcon(to_native(path, errors='surrogate_or_strict'), mode)
except OSError:
return context
if ret[0] == -1:
return context
# Limit split to 4 because the selevel, the last in the list,
# may contain ':' characters
context = ret[1].split(':', 3)
return context
def selinux_context(self, path):
context = self.selinux_initial_context()
if not HAVE_SELINUX or not self.selinux_enabled():
return context
try:
ret = selinux.lgetfilecon_raw(to_native(path, errors='surrogate_or_strict'))
except OSError as e:
if e.errno == errno.ENOENT:
self.fail_json(path=path, msg='path %s does not exist' % path)
else:
self.fail_json(path=path, msg='failed to retrieve selinux context')
if ret[0] == -1:
return context
# Limit split to 4 because the selevel, the last in the list,
# may contain ':' characters
context = ret[1].split(':', 3)
return context
def user_and_group(self, path, expand=True):
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
st = os.lstat(b_path)
uid = st.st_uid
gid = st.st_gid
return (uid, gid)
def find_mount_point(self, path):
path_is_bytes = False
if isinstance(path, binary_type):
path_is_bytes = True
b_path = os.path.realpath(to_bytes(os.path.expanduser(os.path.expandvars(path)), errors='surrogate_or_strict'))
while not os.path.ismount(b_path):
b_path = os.path.dirname(b_path)
if path_is_bytes:
return b_path
return to_text(b_path, errors='surrogate_or_strict')
def is_special_selinux_path(self, path):
"""
Returns a tuple containing (True, selinux_context) if the given path is on an
NFS or other 'special' fs mount point, otherwise the return will be (False, None).
"""
try:
f = open('/proc/mounts', 'r')
mount_data = f.readlines()
f.close()
except Exception:
return (False, None)
path_mount_point = self.find_mount_point(path)
for line in mount_data:
(device, mount_point, fstype, options, rest) = line.split(' ', 4)
if to_bytes(path_mount_point) == to_bytes(mount_point):
for fs in self._selinux_special_fs:
if fs in fstype:
special_context = self.selinux_context(path_mount_point)
return (True, special_context)
return (False, None)
def set_default_selinux_context(self, path, changed):
if not HAVE_SELINUX or not self.selinux_enabled():
return changed
context = self.selinux_default_context(path)
return self.set_context_if_different(path, context, False)
def set_context_if_different(self, path, context, changed, diff=None):
if not HAVE_SELINUX or not self.selinux_enabled():
return changed
if self.check_file_absent_if_check_mode(path):
return True
cur_context = self.selinux_context(path)
new_context = list(cur_context)
# Iterate over the current context instead of the
# argument context, which may have selevel.
(is_special_se, sp_context) = self.is_special_selinux_path(path)
if is_special_se:
new_context = sp_context
else:
for i in range(len(cur_context)):
if len(context) > i:
if context[i] is not None and context[i] != cur_context[i]:
new_context[i] = context[i]
elif context[i] is None:
new_context[i] = cur_context[i]
if cur_context != new_context:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['secontext'] = cur_context
if 'after' not in diff:
diff['after'] = {}
diff['after']['secontext'] = new_context
try:
if self.check_mode:
return True
rc = selinux.lsetfilecon(to_native(path), ':'.join(new_context))
except OSError as e:
self.fail_json(path=path, msg='invalid selinux context: %s' % to_native(e),
new_context=new_context, cur_context=cur_context, input_was=context)
if rc != 0:
self.fail_json(path=path, msg='set selinux context failed')
changed = True
return changed
def set_owner_if_different(self, path, owner, changed, diff=None, expand=True):
if owner is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
orig_uid, orig_gid = self.user_and_group(b_path, expand)
try:
uid = int(owner)
except ValueError:
try:
uid = pwd.getpwnam(owner).pw_uid
except KeyError:
path = to_text(b_path)
self.fail_json(path=path, msg='chown failed: failed to look up user %s' % owner)
if orig_uid != uid:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['owner'] = orig_uid
if 'after' not in diff:
diff['after'] = {}
diff['after']['owner'] = uid
if self.check_mode:
return True
try:
os.lchown(b_path, uid, -1)
except (IOError, OSError) as e:
path = to_text(b_path)
self.fail_json(path=path, msg='chown failed: %s' % (to_text(e)))
changed = True
return changed
def set_group_if_different(self, path, group, changed, diff=None, expand=True):
if group is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
orig_uid, orig_gid = self.user_and_group(b_path, expand)
try:
gid = int(group)
except ValueError:
try:
gid = grp.getgrnam(group).gr_gid
except KeyError:
path = to_text(b_path)
self.fail_json(path=path, msg='chgrp failed: failed to look up group %s' % group)
if orig_gid != gid:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['group'] = orig_gid
if 'after' not in diff:
diff['after'] = {}
diff['after']['group'] = gid
if self.check_mode:
return True
try:
os.lchown(b_path, -1, gid)
except OSError:
path = to_text(b_path)
self.fail_json(path=path, msg='chgrp failed')
changed = True
return changed
def set_mode_if_different(self, path, mode, changed, diff=None, expand=True):
if mode is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
path_stat = os.lstat(b_path)
if self.check_file_absent_if_check_mode(b_path):
return True
if not isinstance(mode, int):
try:
mode = int(mode, 8)
except Exception:
try:
mode = self._symbolic_mode_to_octal(path_stat, mode)
except Exception as e:
path = to_text(b_path)
self.fail_json(path=path,
msg="mode must be in octal or symbolic form",
details=to_native(e))
if mode != stat.S_IMODE(mode):
# prevent mode from having extra info or being an invalid long number
path = to_text(b_path)
self.fail_json(path=path, msg="Invalid mode supplied, only permission info is allowed", details=mode)
prev_mode = stat.S_IMODE(path_stat.st_mode)
if prev_mode != mode:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['mode'] = '0%03o' % prev_mode
if 'after' not in diff:
diff['after'] = {}
diff['after']['mode'] = '0%03o' % mode
if self.check_mode:
return True
# FIXME: comparison against string above will cause this to be executed
# every time
try:
if hasattr(os, 'lchmod'):
os.lchmod(b_path, mode)
else:
if not os.path.islink(b_path):
os.chmod(b_path, mode)
else:
# Attempt to set the perms of the symlink but be
# careful not to change the perms of the underlying
# file while trying
underlying_stat = os.stat(b_path)
os.chmod(b_path, mode)
new_underlying_stat = os.stat(b_path)
if underlying_stat.st_mode != new_underlying_stat.st_mode:
os.chmod(b_path, stat.S_IMODE(underlying_stat.st_mode))
except OSError as e:
if os.path.islink(b_path) and e.errno in (errno.EPERM, errno.EROFS): # Can't set mode on symbolic links
pass
elif e.errno in (errno.ENOENT, errno.ELOOP): # Can't set mode on broken symbolic links
pass
else:
raise
except Exception as e:
path = to_text(b_path)
self.fail_json(path=path, msg='chmod failed', details=to_native(e),
exception=traceback.format_exc())
path_stat = os.lstat(b_path)
new_mode = stat.S_IMODE(path_stat.st_mode)
if new_mode != prev_mode:
changed = True
return changed
def set_attributes_if_different(self, path, attributes, changed, diff=None, expand=True):
if attributes is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
existing = self.get_file_attributes(b_path, include_version=False)
attr_mod = '='
if attributes.startswith(('-', '+')):
attr_mod = attributes[0]
attributes = attributes[1:]
if existing.get('attr_flags', '') != attributes or attr_mod == '-':
attrcmd = self.get_bin_path('chattr')
if attrcmd:
attrcmd = [attrcmd, '%s%s' % (attr_mod, attributes), b_path]
changed = True
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['attributes'] = existing.get('attr_flags')
if 'after' not in diff:
diff['after'] = {}
diff['after']['attributes'] = '%s%s' % (attr_mod, attributes)
if not self.check_mode:
try:
rc, out, err = self.run_command(attrcmd)
if rc != 0 or err:
raise Exception("Error while setting attributes: %s" % (out + err))
except Exception as e:
self.fail_json(path=to_text(b_path), msg='chattr failed',
details=to_native(e), exception=traceback.format_exc())
return changed
def get_file_attributes(self, path, include_version=True):
output = {}
attrcmd = self.get_bin_path('lsattr', False)
if attrcmd:
flags = '-vd' if include_version else '-d'
attrcmd = [attrcmd, flags, path]
try:
rc, out, err = self.run_command(attrcmd)
if rc == 0:
res = out.split()
attr_flags_idx = 0
if include_version:
attr_flags_idx = 1
output['version'] = res[0].strip()
output['attr_flags'] = res[attr_flags_idx].replace('-', '').strip()
output['attributes'] = format_attributes(output['attr_flags'])
except Exception:
pass
return output
@classmethod
def _symbolic_mode_to_octal(cls, path_stat, symbolic_mode):
"""
This enables symbolic chmod string parsing as stated in the chmod man-page
This includes things like: "u=rw-x+X,g=r-x+X,o=r-x+X"
"""
new_mode = stat.S_IMODE(path_stat.st_mode)
# Now parse all symbolic modes
for mode in symbolic_mode.split(','):
# Per single mode. This always contains a '+', '-' or '='
# Split it on that
permlist = MODE_OPERATOR_RE.split(mode)
# And find all the operators
opers = MODE_OPERATOR_RE.findall(mode)
# The user(s) the permissions apply to form the first element in the
# 'permlist' list. Take that and remove it from the list.
# An empty user or 'a' means 'all'.
users = permlist.pop(0)
use_umask = (users == '')
if users == 'a' or users == '':
users = 'ugo'
# Check if there are illegal characters in the user list
# They can end up in 'users' because they are not split
if USERS_RE.match(users):
raise ValueError("bad symbolic permission for mode: %s" % mode)
# Now we have two lists of equal length, one containing the requested
# permissions and one with the corresponding operators.
for idx, perms in enumerate(permlist):
# Check if there are illegal characters in the permissions
if PERMS_RE.match(perms):
raise ValueError("bad symbolic permission for mode: %s" % mode)
for user in users:
mode_to_apply = cls._get_octal_mode_from_symbolic_perms(path_stat, user, perms, use_umask)
new_mode = cls._apply_operation_to_mode(user, opers[idx], mode_to_apply, new_mode)
return new_mode
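# Worked example (illustrative, not part of the original source): for a
# regular file whose current mode is 0o644, _symbolic_mode_to_octal(st, 'u+x')
# splits 'u+x' into permlist=['u', 'x'] and opers=['+'], maps 'x' for user
# 'u' to stat.S_IXUSR (0o100), and returns 0o644 | 0o100 == 0o744.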
@staticmethod
def _apply_operation_to_mode(user, operator, mode_to_apply, current_mode):
if operator == '=':
if user == 'u':
mask = stat.S_IRWXU | stat.S_ISUID
elif user == 'g':
mask = stat.S_IRWXG | stat.S_ISGID
elif user == 'o':
mask = stat.S_IRWXO | stat.S_ISVTX
# mask out u, g, or o permissions from current_mode and apply new permissions
inverse_mask = mask ^ PERM_BITS
new_mode = (current_mode & inverse_mask) | mode_to_apply
elif operator == '+':
new_mode = current_mode | mode_to_apply
elif operator == '-':
new_mode = current_mode - (current_mode & mode_to_apply)
return new_mode
@staticmethod
def _get_octal_mode_from_symbolic_perms(path_stat, user, perms, use_umask):
prev_mode = stat.S_IMODE(path_stat.st_mode)
is_directory = stat.S_ISDIR(path_stat.st_mode)
has_x_permissions = (prev_mode & EXEC_PERM_BITS) > 0
apply_X_permission = is_directory or has_x_permissions
# Get the umask; if the 'user' part is empty, the effect is as if (a) were
# given, but bits that are set in the umask are not affected.
# We also need the "reversed umask" for masking.
umask = os.umask(0)
os.umask(umask)
rev_umask = umask ^ PERM_BITS
# Permission bits constants documented at:
# http://docs.python.org/2/library/stat.html#stat.S_ISUID
if apply_X_permission:
X_perms = {
'u': {'X': stat.S_IXUSR},
'g': {'X': stat.S_IXGRP},
'o': {'X': stat.S_IXOTH},
}
else:
X_perms = {
'u': {'X': 0},
'g': {'X': 0},
'o': {'X': 0},
}
user_perms_to_modes = {
'u': {
'r': rev_umask & stat.S_IRUSR if use_umask else stat.S_IRUSR,
'w': rev_umask & stat.S_IWUSR if use_umask else stat.S_IWUSR,
'x': rev_umask & stat.S_IXUSR if use_umask else stat.S_IXUSR,
's': stat.S_ISUID,
't': 0,
'u': prev_mode & stat.S_IRWXU,
'g': (prev_mode & stat.S_IRWXG) << 3,
'o': (prev_mode & stat.S_IRWXO) << 6},
'g': {
'r': rev_umask & stat.S_IRGRP if use_umask else stat.S_IRGRP,
'w': rev_umask & stat.S_IWGRP if use_umask else stat.S_IWGRP,
'x': rev_umask & stat.S_IXGRP if use_umask else stat.S_IXGRP,
's': stat.S_ISGID,
't': 0,
'u': (prev_mode & stat.S_IRWXU) >> 3,
'g': prev_mode & stat.S_IRWXG,
'o': (prev_mode & stat.S_IRWXO) << 3},
'o': {
'r': rev_umask & stat.S_IROTH if use_umask else stat.S_IROTH,
'w': rev_umask & stat.S_IWOTH if use_umask else stat.S_IWOTH,
'x': rev_umask & stat.S_IXOTH if use_umask else stat.S_IXOTH,
's': 0,
't': stat.S_ISVTX,
'u': (prev_mode & stat.S_IRWXU) >> 6,
'g': (prev_mode & stat.S_IRWXG) >> 3,
'o': prev_mode & stat.S_IRWXO},
}
# Insert X_perms into user_perms_to_modes
for key, value in X_perms.items():
user_perms_to_modes[key].update(value)
def or_reduce(mode, perm):
return mode | user_perms_to_modes[user][perm]
return reduce(or_reduce, perms, 0)
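# Note (illustrative): the conditional X_perms table above is what gives 'X'
# its chmod semantics: for 'a+X', execute bits are added only when the target
# is a directory or already has an execute bit set; otherwise 'X' contributes
# 0 to the reduced mode.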
def set_fs_attributes_if_different(self, file_args, changed, diff=None, expand=True):
# set modes owners and context as needed
changed = self.set_context_if_different(
file_args['path'], file_args['secontext'], changed, diff
)
changed = self.set_owner_if_different(
file_args['path'], file_args['owner'], changed, diff, expand
)
changed = self.set_group_if_different(
file_args['path'], file_args['group'], changed, diff, expand
)
changed = self.set_mode_if_different(
file_args['path'], file_args['mode'], changed, diff, expand
)
changed = self.set_attributes_if_different(
file_args['path'], file_args['attributes'], changed, diff, expand
)
return changed
def check_file_absent_if_check_mode(self, file_path):
return self.check_mode and not os.path.exists(file_path)
def set_directory_attributes_if_different(self, file_args, changed, diff=None, expand=True):
return self.set_fs_attributes_if_different(file_args, changed, diff, expand)
def set_file_attributes_if_different(self, file_args, changed, diff=None, expand=True):
return self.set_fs_attributes_if_different(file_args, changed, diff, expand)
def add_path_info(self, kwargs):
'''
for results that are files, supplement the info about the file
in the return path with stats about the file path.
'''
path = kwargs.get('path', kwargs.get('dest', None))
if path is None:
return kwargs
b_path = to_bytes(path, errors='surrogate_or_strict')
if os.path.exists(b_path):
(uid, gid) = self.user_and_group(path)
kwargs['uid'] = uid
kwargs['gid'] = gid
try:
user = pwd.getpwuid(uid)[0]
except KeyError:
user = str(uid)
try:
group = grp.getgrgid(gid)[0]
except KeyError:
group = str(gid)
kwargs['owner'] = user
kwargs['group'] = group
st = os.lstat(b_path)
kwargs['mode'] = '0%03o' % stat.S_IMODE(st[stat.ST_MODE])
# secontext not yet supported
if os.path.islink(b_path):
kwargs['state'] = 'link'
elif os.path.isdir(b_path):
kwargs['state'] = 'directory'
elif os.stat(b_path).st_nlink > 1:
kwargs['state'] = 'hard'
else:
kwargs['state'] = 'file'
if HAVE_SELINUX and self.selinux_enabled():
kwargs['secontext'] = ':'.join(self.selinux_context(path))
kwargs['size'] = st[stat.ST_SIZE]
return kwargs
def _check_locale(self):
'''
Uses the locale module to test the currently set locale
(per the LANG and LC_CTYPE environment settings)
'''
try:
# setting the locale to '' uses the default locale
# as it would be returned by locale.getdefaultlocale()
locale.setlocale(locale.LC_ALL, '')
except locale.Error:
# fallback to the 'C' locale, which may cause unicode
# issues but is preferable to simply failing because
# of an unknown locale
locale.setlocale(locale.LC_ALL, 'C')
os.environ['LANG'] = 'C'
os.environ['LC_ALL'] = 'C'
os.environ['LC_MESSAGES'] = 'C'
except Exception as e:
self.fail_json(msg="An unknown error was encountered while attempting to validate the locale: %s" %
to_native(e), exception=traceback.format_exc())
def _handle_aliases(self, spec=None, param=None, option_prefix=''):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
# this uses exceptions as it happens before we can safely call fail_json
alias_warnings = []
alias_results, self._legal_inputs = handle_aliases(spec, param, alias_warnings=alias_warnings)
for option, alias in alias_warnings:
warn('Both option %s and its alias %s are set.' % (option_prefix + option, option_prefix + alias))
deprecated_aliases = []
for i in spec.keys():
if 'deprecated_aliases' in spec[i].keys():
for alias in spec[i]['deprecated_aliases']:
deprecated_aliases.append(alias)
for deprecation in deprecated_aliases:
if deprecation['name'] in param.keys():
deprecate("Alias '%s' is deprecated. See the module docs for more information" % deprecation['name'],
version=deprecation.get('version'), date=deprecation.get('date'),
collection_name=deprecation.get('collection_name'))
return alias_results
def _handle_no_log_values(self, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
try:
self.no_log_values.update(list_no_log_values(spec, param))
except TypeError as te:
self.fail_json(msg="Failure when processing no_log parameters. Module invocation will be hidden. "
"%s" % to_native(te), invocation={'module_args': 'HIDDEN DUE TO FAILURE'})
for message in list_deprecations(spec, param):
deprecate(message['msg'], version=message.get('version'), date=message.get('date'),
collection_name=message.get('collection_name'))
def _check_arguments(self, spec=None, param=None, legal_inputs=None):
self._syslog_facility = 'LOG_USER'
unsupported_parameters = set()
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
if legal_inputs is None:
legal_inputs = self._legal_inputs
for k in list(param.keys()):
if k not in legal_inputs:
unsupported_parameters.add(k)
for k in PASS_VARS:
# handle setting internal properties from internal ansible vars
param_key = '_ansible_%s' % k
if param_key in param:
if k in PASS_BOOLS:
setattr(self, PASS_VARS[k][0], self.boolean(param[param_key]))
else:
setattr(self, PASS_VARS[k][0], param[param_key])
# clean up internal top level params:
if param_key in self.params:
del self.params[param_key]
else:
# use defaults if not already set
if not hasattr(self, PASS_VARS[k][0]):
setattr(self, PASS_VARS[k][0], PASS_VARS[k][1])
if unsupported_parameters:
msg = "Unsupported parameters for (%s) module: %s" % (self._name, ', '.join(sorted(list(unsupported_parameters))))
if self._options_context:
msg += " found in %s." % " -> ".join(self._options_context)
supported_parameters = list()
for key in sorted(spec.keys()):
if 'aliases' in spec[key] and spec[key]['aliases']:
supported_parameters.append("%s (%s)" % (key, ', '.join(sorted(spec[key]['aliases']))))
else:
supported_parameters.append(key)
msg += " Supported parameters include: %s" % (', '.join(supported_parameters))
self.fail_json(msg=msg)
if self.check_mode and not self.supports_check_mode:
self.exit_json(skipped=True, msg="remote module (%s) does not support check mode" % self._name)
def _count_terms(self, check, param=None):
if param is None:
param = self.params
return count_terms(check, param)
def _check_mutually_exclusive(self, spec, param=None):
if param is None:
param = self.params
try:
check_mutually_exclusive(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_one_of(self, spec, param=None):
if spec is None:
return
if param is None:
param = self.params
try:
check_required_one_of(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_together(self, spec, param=None):
if spec is None:
return
if param is None:
param = self.params
try:
check_required_together(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_by(self, spec, param=None):
if spec is None:
return
if param is None:
param = self.params
try:
check_required_by(spec, param)
except TypeError as e:
self.fail_json(msg=to_native(e))
def _check_required_arguments(self, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
try:
check_required_arguments(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_if(self, spec, param=None):
''' ensure that parameters which are conditionally required are present '''
if spec is None:
return
if param is None:
param = self.params
try:
check_required_if(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_argument_values(self, spec=None, param=None):
''' ensure all arguments have the requested values, and there are no stray arguments '''
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
choices = v.get('choices', None)
if choices is None:
continue
if isinstance(choices, SEQUENCETYPE) and not isinstance(choices, (binary_type, text_type)):
if k in param:
# Allow one or more when type='list' param with choices
if isinstance(param[k], list):
diff_list = ", ".join([item for item in param[k] if item not in choices])
if diff_list:
choices_str = ", ".join([to_native(c) for c in choices])
msg = "value of %s must be one or more of: %s. Got no match for: %s" % (k, choices_str, diff_list)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
elif param[k] not in choices:
# PyYaml converts certain strings to bools. If we can unambiguously convert back, do so before checking
# the value. If we can't figure this out, module author is responsible.
lowered_choices = None
if param[k] == 'False':
lowered_choices = lenient_lowercase(choices)
overlap = BOOLEANS_FALSE.intersection(choices)
if len(overlap) == 1:
# Extract from a set
(param[k],) = overlap
if param[k] == 'True':
if lowered_choices is None:
lowered_choices = lenient_lowercase(choices)
overlap = BOOLEANS_TRUE.intersection(choices)
if len(overlap) == 1:
(param[k],) = overlap
if param[k] not in choices:
choices_str = ", ".join([to_native(c) for c in choices])
msg = "value of %s must be one of: %s, got: %s" % (k, choices_str, param[k])
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
else:
msg = "internal error: choices for argument %s are not iterable: %s" % (k, choices)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def safe_eval(self, value, locals=None, include_exceptions=False):
return safe_eval(value, locals, include_exceptions)
def _check_type_str(self, value, param=None, prefix=''):
opts = {
'error': False,
'warn': False,
'ignore': True
}
# Ignore, warn, or error when converting to a string.
allow_conversion = opts.get(self._string_conversion_action, True)
try:
return check_type_str(value, allow_conversion)
except TypeError:
common_msg = 'quote the entire value to ensure it does not change.'
from_msg = '{0!r}'.format(value)
to_msg = '{0!r}'.format(to_text(value))
if param is not None:
if prefix:
param = '{0}{1}'.format(prefix, param)
from_msg = '{0}: {1!r}'.format(param, value)
to_msg = '{0}: {1!r}'.format(param, to_text(value))
if self._string_conversion_action == 'error':
msg = common_msg.capitalize()
raise TypeError(to_native(msg))
elif self._string_conversion_action == 'warn':
msg = ('The value "{0}" (type {1.__class__.__name__}) was converted to "{2}" (type string). '
'If this does not look like what you expect, {3}').format(from_msg, value, to_msg, common_msg)
self.warn(to_native(msg))
return to_native(value, errors='surrogate_or_strict')
def _check_type_list(self, value):
return check_type_list(value)
def _check_type_dict(self, value):
return check_type_dict(value)
def _check_type_bool(self, value):
return check_type_bool(value)
def _check_type_int(self, value):
return check_type_int(value)
def _check_type_float(self, value):
return check_type_float(value)
def _check_type_path(self, value):
return check_type_path(value)
def _check_type_jsonarg(self, value):
return check_type_jsonarg(value)
def _check_type_raw(self, value):
return check_type_raw(value)
def _check_type_bytes(self, value):
return check_type_bytes(value)
def _check_type_bits(self, value):
return check_type_bits(value)
def _handle_options(self, argument_spec=None, params=None, prefix=''):
''' deal with options to create sub spec '''
if argument_spec is None:
argument_spec = self.argument_spec
if params is None:
params = self.params
for (k, v) in argument_spec.items():
wanted = v.get('type', None)
if wanted == 'dict' or (wanted == 'list' and v.get('elements', '') == 'dict'):
spec = v.get('options', None)
if v.get('apply_defaults', False):
if spec is not None:
if params.get(k) is None:
params[k] = {}
else:
continue
elif spec is None or k not in params or params[k] is None:
continue
self._options_context.append(k)
if isinstance(params[k], dict):
elements = [params[k]]
else:
elements = params[k]
for idx, param in enumerate(elements):
if not isinstance(param, dict):
self.fail_json(msg="value of %s must be of type dict or list of dict" % k)
new_prefix = prefix + k
if wanted == 'list':
new_prefix += '[%d]' % idx
new_prefix += '.'
self._set_fallbacks(spec, param)
options_aliases = self._handle_aliases(spec, param, option_prefix=new_prefix)
options_legal_inputs = list(spec.keys()) + list(options_aliases.keys())
self._check_arguments(spec, param, options_legal_inputs)
# check exclusive early
if not self.bypass_checks:
self._check_mutually_exclusive(v.get('mutually_exclusive', None), param)
self._set_defaults(pre=True, spec=spec, param=param)
if not self.bypass_checks:
self._check_required_arguments(spec, param)
self._check_argument_types(spec, param, new_prefix)
self._check_argument_values(spec, param)
self._check_required_together(v.get('required_together', None), param)
self._check_required_one_of(v.get('required_one_of', None), param)
self._check_required_if(v.get('required_if', None), param)
self._check_required_by(v.get('required_by', None), param)
self._set_defaults(pre=False, spec=spec, param=param)
# handle multi level options (sub argspec)
self._handle_options(spec, param, new_prefix)
self._options_context.pop()
def _get_wanted_type(self, wanted, k):
if not callable(wanted):
if wanted is None:
# Mostly we want to default to str.
# For values set to None explicitly, return None instead as
# that allows a user to unset a parameter
wanted = 'str'
try:
type_checker = self._CHECK_ARGUMENT_TYPES_DISPATCHER[wanted]
except KeyError:
self.fail_json(msg="implementation error: unknown type %s requested for %s" % (wanted, k))
else:
# set the type_checker to the callable, and reset wanted to the callable's name (or type if it doesn't have one, ala MagicMock)
type_checker = wanted
wanted = getattr(wanted, '__name__', to_native(type(wanted)))
return type_checker, wanted
def _handle_elements(self, wanted, param, values):
type_checker, wanted_name = self._get_wanted_type(wanted, param)
validated_params = []
# Get param name for strings so we can later display this value in a useful error message if needed
# Only pass 'kwargs' to our checkers and ignore custom callable checkers
kwargs = {}
if wanted_name == 'str' and isinstance(wanted, string_types):
if isinstance(param, string_types):
kwargs['param'] = param
elif isinstance(param, dict):
kwargs['param'] = list(param.keys())[0]
for value in values:
try:
validated_params.append(type_checker(value, **kwargs))
except (TypeError, ValueError) as e:
msg = "Elements value for option %s" % param
if self._options_context:
msg += " found in '%s'" % " -> ".join(self._options_context)
msg += " is of type %s and we were unable to convert to %s: %s" % (type(value), wanted_name, to_native(e))
self.fail_json(msg=msg)
return validated_params
def _check_argument_types(self, spec=None, param=None, prefix=''):
''' ensure all arguments have the requested type '''
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
wanted = v.get('type', None)
if k not in param:
continue
value = param[k]
if value is None:
continue
type_checker, wanted_name = self._get_wanted_type(wanted, k)
# Get param name for strings so we can later display this value in a useful error message if needed
# Only pass 'kwargs' to our checkers and ignore custom callable checkers
kwargs = {}
if wanted_name == 'str' and isinstance(type_checker, string_types):
kwargs['param'] = list(param.keys())[0]
# Get the name of the parent key if this is a nested option
if prefix:
kwargs['prefix'] = prefix
try:
param[k] = type_checker(value, **kwargs)
wanted_elements = v.get('elements', None)
if wanted_elements:
if wanted != 'list' or not isinstance(param[k], list):
msg = "Invalid type %s for option '%s'" % (wanted_name, param)
if self._options_context:
msg += " found in '%s'." % " -> ".join(self._options_context)
msg += ", elements value check is supported only with 'list' type"
self.fail_json(msg=msg)
param[k] = self._handle_elements(wanted_elements, k, param[k])
except (TypeError, ValueError) as e:
msg = "argument %s is of type %s" % (k, type(value))
if self._options_context:
msg += " found in '%s'." % " -> ".join(self._options_context)
msg += " and we were unable to convert to %s: %s" % (wanted_name, to_native(e))
self.fail_json(msg=msg)
def _set_defaults(self, pre=True, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
default = v.get('default', None)
if pre is True:
# this prevents setting defaults on required items
if default is not None and k not in param:
param[k] = default
else:
# make sure things without a default still get set None
if k not in param:
param[k] = default
def _set_fallbacks(self, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
fallback = v.get('fallback', (None,))
fallback_strategy = fallback[0]
fallback_args = []
fallback_kwargs = {}
if k not in param and fallback_strategy is not None:
for item in fallback[1:]:
if isinstance(item, dict):
fallback_kwargs = item
else:
fallback_args = item
try:
param[k] = fallback_strategy(*fallback_args, **fallback_kwargs)
except AnsibleFallbackNotFound:
continue
def _load_params(self):
''' read the input and set the params attribute.
This method is for backwards compatibility. The guts of the function
were moved out in 2.1 so that custom modules could read the parameters.
'''
# debug overrides to read args from file or cmdline
self.params = _load_params()
def _log_to_syslog(self, msg):
if HAS_SYSLOG:
try:
module = 'ansible-%s' % self._name
facility = getattr(syslog, self._syslog_facility, syslog.LOG_USER)
syslog.openlog(str(module), 0, facility)
syslog.syslog(syslog.LOG_INFO, msg)
except TypeError as e:
self.fail_json(
msg='Failed to log to syslog (%s). To proceed anyway, '
'disable syslog logging by setting no_target_syslog '
'to True in your Ansible config.' % to_native(e),
exception=traceback.format_exc(),
msg_to_log=msg,
)
def debug(self, msg):
if self._debug:
self.log('[debug] %s' % msg)
def log(self, msg, log_args=None):
if not self.no_log:
if log_args is None:
log_args = dict()
module = 'ansible-%s' % self._name
if isinstance(module, binary_type):
module = module.decode('utf-8', 'replace')
# 6655 - allow for accented characters
if not isinstance(msg, (binary_type, text_type)):
raise TypeError("msg should be a string (got %s)" % type(msg))
# We want journal to always take text type
# syslog takes bytes on py2, text type on py3
if isinstance(msg, binary_type):
journal_msg = remove_values(msg.decode('utf-8', 'replace'), self.no_log_values)
else:
# TODO: surrogateescape is a danger here on Py3
journal_msg = remove_values(msg, self.no_log_values)
if PY3:
syslog_msg = journal_msg
else:
syslog_msg = journal_msg.encode('utf-8', 'replace')
if has_journal:
journal_args = [("MODULE", os.path.basename(__file__))]
for arg in log_args:
journal_args.append((arg.upper(), str(log_args[arg])))
try:
if HAS_SYSLOG:
# If syslog_facility specified, it needs to convert
# from the facility name to the facility code, and
# set it as SYSLOG_FACILITY argument of journal.send()
facility = getattr(syslog,
self._syslog_facility,
syslog.LOG_USER) >> 3
journal.send(MESSAGE=u"%s %s" % (module, journal_msg),
SYSLOG_FACILITY=facility,
**dict(journal_args))
else:
journal.send(MESSAGE=u"%s %s" % (module, journal_msg),
**dict(journal_args))
except IOError:
# fall back to syslog since logging to journal failed
self._log_to_syslog(syslog_msg)
else:
self._log_to_syslog(syslog_msg)
def _log_invocation(self):
''' log that ansible ran the module '''
# TODO: generalize a separate log function and make log_invocation use it
# Sanitize possible password argument when logging.
log_args = dict()
for param in self.params:
canon = self.aliases.get(param, param)
arg_opts = self.argument_spec.get(canon, {})
no_log = arg_opts.get('no_log', None)
# try to proactively capture password/passphrase fields
if no_log is None and PASSWORD_MATCH.search(param):
log_args[param] = 'NOT_LOGGING_PASSWORD'
self.warn('Module did not set no_log for %s' % param)
elif self.boolean(no_log):
log_args[param] = 'NOT_LOGGING_PARAMETER'
else:
param_val = self.params[param]
if not isinstance(param_val, (text_type, binary_type)):
param_val = str(param_val)
elif isinstance(param_val, text_type):
param_val = param_val.encode('utf-8')
log_args[param] = heuristic_log_sanitize(param_val, self.no_log_values)
msg = ['%s=%s' % (to_native(arg), to_native(val)) for arg, val in log_args.items()]
if msg:
msg = 'Invoked with %s' % ' '.join(msg)
else:
msg = 'Invoked'
self.log(msg, log_args=log_args)
def _set_cwd(self):
try:
cwd = os.getcwd()
if not os.access(cwd, os.F_OK | os.R_OK):
raise Exception()
return cwd
except Exception:
# we don't have access to the cwd, probably because of sudo.
# Try and move to a neutral location to prevent errors
for cwd in [self.tmpdir, os.path.expandvars('$HOME'), tempfile.gettempdir()]:
try:
if os.access(cwd, os.F_OK | os.R_OK):
os.chdir(cwd)
return cwd
except Exception:
pass
# we won't error here, as it may *not* be a problem,
# and we don't want to break modules unnecessarily
return None
def get_bin_path(self, arg, required=False, opt_dirs=None):
'''
Find system executable in PATH.
:param arg: The executable to find.
:param required: if executable is not found and required is ``True``, fail_json
:param opt_dirs: optional list of directories to search in addition to ``PATH``
:returns: if found return full path; otherwise return None
'''
bin_path = None
try:
bin_path = get_bin_path(arg=arg, opt_dirs=opt_dirs)
except ValueError as e:
if required:
self.fail_json(msg=to_text(e))
else:
return bin_path
return bin_path
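# Illustrative usage (not part of the original source):
#
#   git_path = module.get_bin_path('git', required=True,
#                                  opt_dirs=['/usr/local/bin'])
#   rc, out, err = module.run_command([git_path, '--version'])
#
# With required=True a missing executable calls fail_json(); with the default
# required=False the caller must handle the None return itself.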
def boolean(self, arg):
'''Convert the argument to a boolean'''
if arg is None:
return arg
try:
return boolean(arg)
except TypeError as e:
self.fail_json(msg=to_native(e))
def jsonify(self, data):
try:
return jsonify(data)
except UnicodeError as e:
self.fail_json(msg=to_text(e))
def from_json(self, data):
return json.loads(data)
def add_cleanup_file(self, path):
if path not in self.cleanup_files:
self.cleanup_files.append(path)
def do_cleanup_files(self):
for path in self.cleanup_files:
self.cleanup(path)
def _return_formatted(self, kwargs):
self.add_path_info(kwargs)
if 'invocation' not in kwargs:
kwargs['invocation'] = {'module_args': self.params}
if 'warnings' in kwargs:
if isinstance(kwargs['warnings'], list):
for w in kwargs['warnings']:
self.warn(w)
else:
self.warn(kwargs['warnings'])
warnings = get_warning_messages()
if warnings:
kwargs['warnings'] = warnings
if 'deprecations' in kwargs:
if isinstance(kwargs['deprecations'], list):
for d in kwargs['deprecations']:
if isinstance(d, SEQUENCETYPE) and len(d) == 2:
self.deprecate(d[0], version=d[1])
elif isinstance(d, Mapping):
self.deprecate(d['msg'], version=d.get('version'), date=d.get('date'),
collection_name=d.get('collection_name'))
else:
self.deprecate(d) # pylint: disable=ansible-deprecated-no-version
else:
self.deprecate(kwargs['deprecations']) # pylint: disable=ansible-deprecated-no-version
deprecations = get_deprecation_messages()
if deprecations:
kwargs['deprecations'] = deprecations
kwargs = remove_values(kwargs, self.no_log_values)
print('\n%s' % self.jsonify(kwargs))
def exit_json(self, **kwargs):
''' return from the module, without error '''
self.do_cleanup_files()
self._return_formatted(kwargs)
sys.exit(0)
def fail_json(self, msg, **kwargs):
''' return from the module, with an error message '''
kwargs['failed'] = True
kwargs['msg'] = msg
# Add traceback if debug or high verbosity and it is missing
# NOTE: Badly named as exception, it really always has been a traceback
if 'exception' not in kwargs and sys.exc_info()[2] and (self._debug or self._verbosity >= 3):
if PY2:
# On Python 2 this is the last (stack frame) exception and as such may be unrelated to the failure
kwargs['exception'] = 'WARNING: The below traceback may *not* be related to the actual failure.\n' +\
''.join(traceback.format_tb(sys.exc_info()[2]))
else:
kwargs['exception'] = ''.join(traceback.format_tb(sys.exc_info()[2]))
self.do_cleanup_files()
self._return_formatted(kwargs)
sys.exit(1)
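# Illustrative usage (not part of the original source): a module's main()
# normally terminates through exactly one of these two calls:
#
#   module.exit_json(changed=True, dest=dest)
#   module.fail_json(msg='failed to write %s' % dest, rc=rc)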
def fail_on_missing_params(self, required_params=None):
if not required_params:
return
try:
check_missing_parameters(self.params, required_params)
except TypeError as e:
self.fail_json(msg=to_native(e))
def digest_from_file(self, filename, algorithm):
''' Return hex digest of local file for a digest_method specified by name, or None if file is not present. '''
b_filename = to_bytes(filename, errors='surrogate_or_strict')
if not os.path.exists(b_filename):
return None
if os.path.isdir(b_filename):
self.fail_json(msg="attempted to take checksum of directory: %s" % filename)
# preserve old behaviour where the third parameter was a hash algorithm object
if hasattr(algorithm, 'hexdigest'):
digest_method = algorithm
else:
try:
digest_method = AVAILABLE_HASH_ALGORITHMS[algorithm]()
except KeyError:
self.fail_json(msg="Could not hash file '%s' with algorithm '%s'. Available algorithms: %s" %
(filename, algorithm, ', '.join(AVAILABLE_HASH_ALGORITHMS)))
blocksize = 64 * 1024
infile = open(os.path.realpath(b_filename), 'rb')
block = infile.read(blocksize)
while block:
digest_method.update(block)
block = infile.read(blocksize)
infile.close()
return digest_method.hexdigest()
def md5(self, filename):
''' Return MD5 hex digest of local file using digest_from_file().
Do not use this function unless you have no other choice for:
1) Optional backwards compatibility
2) Compatibility with a third party protocol
This function will not work on systems complying with FIPS-140-2.
Most uses of this function can use the module.sha1 function instead.
'''
if 'md5' not in AVAILABLE_HASH_ALGORITHMS:
raise ValueError('MD5 not available. Possibly running in FIPS mode')
return self.digest_from_file(filename, 'md5')
def sha1(self, filename):
''' Return SHA1 hex digest of local file using digest_from_file(). '''
return self.digest_from_file(filename, 'sha1')
def sha256(self, filename):
''' Return SHA-256 hex digest of local file using digest_from_file(). '''
return self.digest_from_file(filename, 'sha256')
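# Illustrative usage (not part of the original source):
#
#   checksum = module.sha256(dest)   # hex digest, or None if dest is absent
#
# sha1()/sha256() are the portable choices; md5() raises ValueError on
# FIPS-140-2 compliant systems, as noted above.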
def backup_local(self, fn):
'''make a date-marked backup of the specified file, returning the backup path ('' if the file does not exist); calls fail_json on copy failure'''
backupdest = ''
if os.path.exists(fn):
# backups named basename.PID.YYYY-MM-DD@HH:MM:SS~
ext = time.strftime("%Y-%m-%d@%H:%M:%S~", time.localtime(time.time()))
backupdest = '%s.%s.%s' % (fn, os.getpid(), ext)
try:
self.preserved_copy(fn, backupdest)
except (shutil.Error, IOError) as e:
self.fail_json(msg='Could not make backup of %s to %s: %s' % (fn, backupdest, to_native(e)))
return backupdest
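# Illustrative usage (not part of the original source): modules that expose a
# 'backup' option call this before overwriting the destination:
#
#   if module.params['backup'] and os.path.exists(dest):
#       backup_file = module.backup_local(dest)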
def cleanup(self, tmpfile):
if os.path.exists(tmpfile):
try:
os.unlink(tmpfile)
except OSError as e:
sys.stderr.write("could not cleanup %s: %s" % (tmpfile, to_native(e)))
def preserved_copy(self, src, dest):
"""Copy a file with preserved ownership, permissions and context"""
# shutil.copy2(src, dst)
# Similar to shutil.copy(), but metadata is copied as well - in fact,
# this is just shutil.copy() followed by copystat(). This is similar
# to the Unix command cp -p.
#
# shutil.copystat(src, dst)
# Copy the permission bits, last access time, last modification time,
# and flags from src to dst. The file contents, owner, and group are
# unaffected. src and dst are path names given as strings.
shutil.copy2(src, dest)
# Set the context
if self.selinux_enabled():
context = self.selinux_context(src)
self.set_context_if_different(dest, context, False)
# chown it
try:
dest_stat = os.stat(src)
tmp_stat = os.stat(dest)
if dest_stat and (tmp_stat.st_uid != dest_stat.st_uid or tmp_stat.st_gid != dest_stat.st_gid):
os.chown(dest, dest_stat.st_uid, dest_stat.st_gid)
except OSError as e:
if e.errno != errno.EPERM:
raise
# Set the attributes
current_attribs = self.get_file_attributes(src, include_version=False)
current_attribs = current_attribs.get('attr_flags', '')
self.set_attributes_if_different(dest, current_attribs, True)
def atomic_move(self, src, dest, unsafe_writes=False):
'''Atomically move src to dest, copying attributes from dest; returns True on success.
It uses os.rename to guarantee atomicity, since that is an atomic operation; the rest
of the function works around limitations and corner cases and ensures the selinux
context is saved if possible.'''
context = None
dest_stat = None
b_src = to_bytes(src, errors='surrogate_or_strict')
b_dest = to_bytes(dest, errors='surrogate_or_strict')
if os.path.exists(b_dest):
try:
dest_stat = os.stat(b_dest)
# copy mode and ownership
os.chmod(b_src, dest_stat.st_mode & PERM_BITS)
os.chown(b_src, dest_stat.st_uid, dest_stat.st_gid)
# try to copy flags if possible
if hasattr(os, 'chflags') and hasattr(dest_stat, 'st_flags'):
try:
os.chflags(b_src, dest_stat.st_flags)
except OSError as e:
for err in 'EOPNOTSUPP', 'ENOTSUP':
if hasattr(errno, err) and e.errno == getattr(errno, err):
break
else:
raise
except OSError as e:
if e.errno != errno.EPERM:
raise
if self.selinux_enabled():
context = self.selinux_context(dest)
else:
if self.selinux_enabled():
context = self.selinux_default_context(dest)
creating = not os.path.exists(b_dest)
try:
# Optimistically try a rename, solves some corner cases and can avoid useless work, throws exception if not atomic.
os.rename(b_src, b_dest)
except (IOError, OSError) as e:
if e.errno not in [errno.EPERM, errno.EXDEV, errno.EACCES, errno.ETXTBSY, errno.EBUSY]:
# only try workarounds for errno 18 (cross device), 1 (not permitted), 13 (permission denied),
# 16 (device busy) and 26 (text file busy), which happen on vagrant synced folders and other 'exotic' non posix file systems
self.fail_json(msg='Could not replace file: %s to %s: %s' % (src, dest, to_native(e)),
exception=traceback.format_exc())
else:
# Use bytes here. In the shippable CI, this fails with
# a UnicodeError with surrogateescape'd strings for an unknown
# reason (doesn't happen in a local Ubuntu16.04 VM)
b_dest_dir = os.path.dirname(b_dest)
b_suffix = os.path.basename(b_dest)
error_msg = None
tmp_dest_name = None
try:
tmp_dest_fd, tmp_dest_name = tempfile.mkstemp(prefix=b'.ansible_tmp',
dir=b_dest_dir, suffix=b_suffix)
except (OSError, IOError) as e:
error_msg = 'The destination directory (%s) is not writable by the current user. Error was: %s' % (os.path.dirname(dest), to_native(e))
except TypeError:
# We expect that this is happening because python3.4.x and
# below can't handle byte strings in mkstemp(). Traceback
# would end in something like:
# file = _os.path.join(dir, pre + name + suf)
# TypeError: can't concat bytes to str
error_msg = ('Failed creating tmp file for atomic move. This usually happens when using Python3 less than Python3.5. '
'Please use Python2.x or Python3.5 or greater.')
finally:
if error_msg:
if unsafe_writes:
self._unsafe_writes(b_src, b_dest)
else:
self.fail_json(msg=error_msg, exception=traceback.format_exc())
if tmp_dest_name:
b_tmp_dest_name = to_bytes(tmp_dest_name, errors='surrogate_or_strict')
try:
try:
# close tmp file handle before file operations to prevent text file busy errors on vboxfs synced folders (windows host)
os.close(tmp_dest_fd)
# leaves tmp file behind when sudo and not root
try:
shutil.move(b_src, b_tmp_dest_name)
except OSError:
# cleanup will happen by 'rm' of tmpdir
# copy2 will preserve some metadata
shutil.copy2(b_src, b_tmp_dest_name)
if self.selinux_enabled():
self.set_context_if_different(
b_tmp_dest_name, context, False)
try:
tmp_stat = os.stat(b_tmp_dest_name)
if dest_stat and (tmp_stat.st_uid != dest_stat.st_uid or tmp_stat.st_gid != dest_stat.st_gid):
os.chown(b_tmp_dest_name, dest_stat.st_uid, dest_stat.st_gid)
except OSError as e:
if e.errno != errno.EPERM:
raise
try:
os.rename(b_tmp_dest_name, b_dest)
except (shutil.Error, OSError, IOError) as e:
if unsafe_writes and e.errno == errno.EBUSY:
self._unsafe_writes(b_tmp_dest_name, b_dest)
else:
self.fail_json(msg='Unable to make %s into %s, failed final rename from %s: %s' %
(src, dest, b_tmp_dest_name, to_native(e)),
exception=traceback.format_exc())
except (shutil.Error, OSError, IOError) as e:
self.fail_json(msg='Failed to replace file: %s to %s: %s' % (src, dest, to_native(e)),
exception=traceback.format_exc())
finally:
self.cleanup(b_tmp_dest_name)
if creating:
# make sure the file has the correct permissions
# based on the current value of umask
umask = os.umask(0)
os.umask(umask)
os.chmod(b_dest, DEFAULT_PERM & ~umask)
try:
os.chown(b_dest, os.geteuid(), os.getegid())
except OSError:
# We're okay with trying our best here. If the user is not
# root (or old Unices) they won't be able to chown.
pass
if self.selinux_enabled():
# rename might not preserve context
self.set_context_if_different(dest, context, False)
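# Illustrative usage (not part of the original source): the common
# write-then-publish pattern stages content under tmpdir and then swaps it
# into place atomically:
#
#   fd, tmpsrc = tempfile.mkstemp(dir=module.tmpdir)
#   with os.fdopen(fd, 'wb') as f:
#       f.write(content)
#   module.atomic_move(tmpsrc, dest)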
def _unsafe_writes(self, src, dest):
# sadly there are some situations where we cannot ensure atomicity, but only if
# the user insists and we get the appropriate error we update the file unsafely
try:
out_dest = in_src = None
try:
out_dest = open(dest, 'wb')
in_src = open(src, 'rb')
shutil.copyfileobj(in_src, out_dest)
finally: # assuring closed files in 2.4 compatible way
if out_dest:
out_dest.close()
if in_src:
in_src.close()
except (shutil.Error, OSError, IOError) as e:
self.fail_json(msg='Could not write data to file (%s) from (%s): %s' % (dest, src, to_native(e)),
exception=traceback.format_exc())
def _clean_args(self, args):
if not self._clean:
# create a printable version of the command for use in reporting later,
# which strips out things like passwords from the args list
to_clean_args = args
if PY2:
if isinstance(args, text_type):
to_clean_args = to_bytes(args)
else:
if isinstance(args, binary_type):
to_clean_args = to_text(args)
if isinstance(args, (text_type, binary_type)):
to_clean_args = shlex.split(to_clean_args)
clean_args = []
is_passwd = False
for arg in (to_native(a) for a in to_clean_args):
if is_passwd:
is_passwd = False
clean_args.append('********')
continue
if PASSWD_ARG_RE.match(arg):
sep_idx = arg.find('=')
if sep_idx > -1:
clean_args.append('%s=********' % arg[:sep_idx])
continue
else:
is_passwd = True
arg = heuristic_log_sanitize(arg, self.no_log_values)
clean_args.append(arg)
self._clean = ' '.join(shlex_quote(arg) for arg in clean_args)
return self._clean
def _restore_signal_handlers(self):
# Reset SIGPIPE to SIG_DFL, otherwise in Python2.7 it gets ignored in subprocesses.
if PY2 and sys.platform != 'win32':
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
def run_command(self, args, check_rc=False, close_fds=True, executable=None, data=None, binary_data=False, path_prefix=None, cwd=None,
use_unsafe_shell=False, prompt_regex=None, environ_update=None, umask=None, encoding='utf-8', errors='surrogate_or_strict',
expand_user_and_vars=True, pass_fds=None, before_communicate_callback=None, ignore_invalid_cwd=True):
'''
Execute a command, returns rc, stdout, and stderr.
:arg args: is the command to run
* If args is a list, the command will be run with shell=False.
* If args is a string and use_unsafe_shell=False, it will be split into a list and run with shell=False
* If args is a string and use_unsafe_shell=True it runs with shell=True.
:kw check_rc: Whether to call fail_json in case of non zero RC.
Default False
:kw close_fds: See documentation for subprocess.Popen(). Default True
:kw executable: See documentation for subprocess.Popen(). Default None
:kw data: If given, information to write to the stdin of the command
:kw binary_data: If False, append a newline to the data. Default False
:kw path_prefix: If given, additional path to find the command in.
This adds to the PATH environment variable so helper commands in
the same directory can also be found
:kw cwd: If given, working directory to run the command inside
:kw use_unsafe_shell: See `args` parameter. Default False
:kw prompt_regex: Regex string (not a compiled regex) which can be
used to detect prompts in the stdout which would otherwise cause
the execution to hang (especially if no input data is specified)
:kw environ_update: dictionary to *update* os.environ with
:kw umask: Umask to be used when running the command. Default None
:kw encoding: Since we return native strings, on python3 we need to
know the encoding to use to transform from bytes to text. If you
want to always get bytes back, use encoding=None. The default is
"utf-8". This does not affect transformation of strings given as
args.
:kw errors: Since we return native strings, on python3 we need to
transform stdout and stderr from bytes to text. If the bytes are
undecodable in the ``encoding`` specified, then use this error
handler to deal with them. The default is ``surrogate_or_strict``
which means that the bytes will be decoded using the
surrogateescape error handler if available (available on all
python3 versions we support) otherwise a UnicodeError traceback
will be raised. This does not affect transformations of strings
given as args.
:kw expand_user_and_vars: When ``use_unsafe_shell=False`` this argument
dictates whether ``~`` is expanded in paths and environment variables
are expanded before running the command. When ``True`` a string such as
``$SHELL`` will be expanded regardless of escaping. When ``False`` and
``use_unsafe_shell=False`` no path or variable expansion will be done.
:kw pass_fds: When running on Python 3 this argument
dictates which file descriptors should be passed
to an underlying ``Popen`` constructor. On Python 2, this will
set ``close_fds`` to False.
        :kw before_communicate_callback: This function will be called
            after the ``Popen`` object is created
            but before communicating with the process.
            (The ``Popen`` object is passed to the callback as its first argument)
:kw ignore_invalid_cwd: This flag indicates whether an invalid ``cwd``
(non-existent or not a directory) should be ignored or should raise
an exception.
:returns: A 3-tuple of return code (integer), stdout (native string),
and stderr (native string). On python2, stdout and stderr are both
byte strings. On python3, stdout and stderr are text strings converted
according to the encoding and errors parameters. If you want byte
strings on python3, use encoding=None to turn decoding to text off.
'''
# used by clean args later on
self._clean = None
if not isinstance(args, (list, binary_type, text_type)):
msg = "Argument 'args' to run_command must be list or string"
self.fail_json(rc=257, cmd=args, msg=msg)
shell = False
if use_unsafe_shell:
# stringify args for unsafe/direct shell usage
if isinstance(args, list):
args = b" ".join([to_bytes(shlex_quote(x), errors='surrogate_or_strict') for x in args])
else:
args = to_bytes(args, errors='surrogate_or_strict')
# not set explicitly, check if set by controller
if executable:
executable = to_bytes(executable, errors='surrogate_or_strict')
args = [executable, b'-c', args]
elif self._shell not in (None, '/bin/sh'):
args = [to_bytes(self._shell, errors='surrogate_or_strict'), b'-c', args]
else:
shell = True
else:
# ensure args are a list
if isinstance(args, (binary_type, text_type)):
# On python2.6 and below, shlex has problems with text type
# On python3, shlex needs a text type.
if PY2:
args = to_bytes(args, errors='surrogate_or_strict')
elif PY3:
args = to_text(args, errors='surrogateescape')
args = shlex.split(args)
# expand ``~`` in paths, and all environment vars
if expand_user_and_vars:
args = [to_bytes(os.path.expanduser(os.path.expandvars(x)), errors='surrogate_or_strict') for x in args if x is not None]
else:
args = [to_bytes(x, errors='surrogate_or_strict') for x in args if x is not None]
prompt_re = None
if prompt_regex:
if isinstance(prompt_regex, text_type):
if PY3:
prompt_regex = to_bytes(prompt_regex, errors='surrogateescape')
elif PY2:
prompt_regex = to_bytes(prompt_regex, errors='surrogate_or_strict')
try:
prompt_re = re.compile(prompt_regex, re.MULTILINE)
except re.error:
self.fail_json(msg="invalid prompt regular expression given to run_command")
rc = 0
msg = None
st_in = None
# Manipulate the environ we'll send to the new process
old_env_vals = {}
# We can set this from both an attribute and per call
for key, val in self.run_command_environ_update.items():
old_env_vals[key] = os.environ.get(key, None)
os.environ[key] = val
if environ_update:
for key, val in environ_update.items():
old_env_vals[key] = os.environ.get(key, None)
os.environ[key] = val
if path_prefix:
old_env_vals['PATH'] = os.environ['PATH']
os.environ['PATH'] = "%s:%s" % (path_prefix, os.environ['PATH'])
# If using test-module.py and explode, the remote lib path will resemble:
# /tmp/test_module_scratch/debug_dir/ansible/module_utils/basic.py
# If using ansible or ansible-playbook with a remote system:
# /tmp/ansible_vmweLQ/ansible_modlib.zip/ansible/module_utils/basic.py
# Clean out python paths set by ansiballz
if 'PYTHONPATH' in os.environ:
pypaths = os.environ['PYTHONPATH'].split(':')
pypaths = [x for x in pypaths
if not x.endswith('/ansible_modlib.zip') and
not x.endswith('/debug_dir')]
os.environ['PYTHONPATH'] = ':'.join(pypaths)
if not os.environ['PYTHONPATH']:
del os.environ['PYTHONPATH']
if data:
st_in = subprocess.PIPE
kwargs = dict(
executable=executable,
shell=shell,
close_fds=close_fds,
stdin=st_in,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
preexec_fn=self._restore_signal_handlers,
)
if PY3 and pass_fds:
kwargs["pass_fds"] = pass_fds
elif PY2 and pass_fds:
kwargs['close_fds'] = False
# store the pwd
prev_dir = os.getcwd()
# make sure we're in the right working directory
if cwd:
if os.path.isdir(cwd):
cwd = to_bytes(os.path.abspath(os.path.expanduser(cwd)), errors='surrogate_or_strict')
kwargs['cwd'] = cwd
try:
os.chdir(cwd)
except (OSError, IOError) as e:
self.fail_json(rc=e.errno, msg="Could not chdir to %s, %s" % (cwd, to_native(e)),
exception=traceback.format_exc())
elif not ignore_invalid_cwd:
self.fail_json(msg="Provided cwd is not a valid directory: %s" % cwd)
old_umask = None
if umask:
old_umask = os.umask(umask)
try:
if self._debug:
self.log('Executing: ' + self._clean_args(args))
cmd = subprocess.Popen(args, **kwargs)
if before_communicate_callback:
before_communicate_callback(cmd)
# the communication logic here is essentially taken from that
# of the _communicate() function in ssh.py
stdout = b''
stderr = b''
try:
selector = selectors.DefaultSelector()
except (IOError, OSError):
# Failed to detect default selector for the given platform
# Select PollSelector which is supported by major platforms
selector = selectors.PollSelector()
selector.register(cmd.stdout, selectors.EVENT_READ)
selector.register(cmd.stderr, selectors.EVENT_READ)
if os.name == 'posix':
fcntl.fcntl(cmd.stdout.fileno(), fcntl.F_SETFL, fcntl.fcntl(cmd.stdout.fileno(), fcntl.F_GETFL) | os.O_NONBLOCK)
fcntl.fcntl(cmd.stderr.fileno(), fcntl.F_SETFL, fcntl.fcntl(cmd.stderr.fileno(), fcntl.F_GETFL) | os.O_NONBLOCK)
if data:
if not binary_data:
data += '\n'
if isinstance(data, text_type):
data = to_bytes(data)
cmd.stdin.write(data)
cmd.stdin.close()
while True:
events = selector.select(1)
for key, event in events:
b_chunk = key.fileobj.read()
if b_chunk == b(''):
selector.unregister(key.fileobj)
if key.fileobj == cmd.stdout:
stdout += b_chunk
elif key.fileobj == cmd.stderr:
stderr += b_chunk
# if we're checking for prompts, do it now
if prompt_re:
if prompt_re.search(stdout) and not data:
if encoding:
stdout = to_native(stdout, encoding=encoding, errors=errors)
return (257, stdout, "A prompt was encountered while running a command, but no input data was specified")
# only break out if no pipes are left to read or
# the pipes are completely read and
# the process is terminated
if (not events or not selector.get_map()) and cmd.poll() is not None:
break
# No pipes are left to read but process is not yet terminated
# Only then it is safe to wait for the process to be finished
# NOTE: Actually cmd.poll() is always None here if no selectors are left
elif not selector.get_map() and cmd.poll() is None:
cmd.wait()
# The process is terminated. Since no pipes to read from are
# left, there is no need to call select() again.
break
cmd.stdout.close()
cmd.stderr.close()
selector.close()
rc = cmd.returncode
except (OSError, IOError) as e:
self.log("Error Executing CMD:%s Exception:%s" % (self._clean_args(args), to_native(e)))
self.fail_json(rc=e.errno, msg=to_native(e), cmd=self._clean_args(args))
except Exception as e:
self.log("Error Executing CMD:%s Exception:%s" % (self._clean_args(args), to_native(traceback.format_exc())))
self.fail_json(rc=257, msg=to_native(e), exception=traceback.format_exc(), cmd=self._clean_args(args))
# Restore env settings
for key, val in old_env_vals.items():
if val is None:
del os.environ[key]
else:
os.environ[key] = val
if old_umask:
os.umask(old_umask)
if rc != 0 and check_rc:
msg = heuristic_log_sanitize(stderr.rstrip(), self.no_log_values)
self.fail_json(cmd=self._clean_args(args), rc=rc, stdout=stdout, stderr=stderr, msg=msg)
# reset the pwd
os.chdir(prev_dir)
if encoding is not None:
return (rc, to_native(stdout, encoding=encoding, errors=errors),
to_native(stderr, encoding=encoding, errors=errors))
return (rc, stdout, stderr)
def append_to_file(self, filename, str):
filename = os.path.expandvars(os.path.expanduser(filename))
fh = open(filename, 'a')
fh.write(str)
fh.close()
def bytes_to_human(self, size):
return bytes_to_human(size)
# for backwards compatibility
pretty_bytes = bytes_to_human
def human_to_bytes(self, number, isbits=False):
return human_to_bytes(number, isbits)
#
# Backwards compat
#
# In 2.0, moved from inside the module to the toplevel
is_executable = is_executable
@staticmethod
def get_buffer_size(fd):
try:
            # 1032 == F_GETPIPE_SZ (the Linux fcntl constant for querying the pipe buffer size)
buffer_size = fcntl.fcntl(fd, 1032)
except Exception:
try:
# not as exact as above, but should be good enough for most platforms that fail the previous call
buffer_size = select.PIPE_BUF
except Exception:
buffer_size = 9000 # use sane default JIC
return buffer_size
def get_module_path():
return os.path.dirname(os.path.realpath(__file__))
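# --- Illustrative usage sketch (editor's addition, not part of basic.py) ---
# 'module' is assumed to be an AnsibleModule instance; the commands shown are
# hypothetical and only demonstrate the run_command() contract documented above.
def _run_command_examples(module):
    # list form: executed with shell=False, so no shell quoting pitfalls
    rc, out, err = module.run_command(['/bin/echo', 'hello'], check_rc=True)
    # string form: shlex-split into a list, still shell=False; with
    # expand_user_and_vars=True (the default) '~' and environment vars expand
    rc, out, err = module.run_command('/bin/echo ~/hello')
    # feed stdin; a trailing newline is appended because binary_data=False
    rc, out, err = module.run_command(['/bin/cat'], data='hello')
    return rc, out, err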
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,605 |
import_task swallows AnsibleParserError
|
##### SUMMARY
A malformed list item in an included task file yields a cryptic message
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/cli/scripts/ansible_cli_stub.py
##### ANSIBLE VERSION
Confirmed as of c888035e
```
ansible 2.10.0.dev0
config file = None
configured module search path = ['/usr/share/ansible/plugins/modules']
ansible python module location = /proj/ansible/ansible/venv/lib/python3.7/site-packages/ansible_base-2.10.0.dev0-py3.7.egg/ansible
executable location = /proj/ansible/ansible/venv/bin/ansible
python version = 3.7.7 (default, Mar 11 2020, 11:44:20) [GCC 9.2.1 20191008]
```
##### CONFIGURATION
##### OS / ENVIRONMENT
##### STEPS TO REPRODUCE
* bug.yml
```yaml
---
- oops
```
* pb.yml
```yaml
- hosts: all
gather_facts: no
tasks:
- include_tasks: bug.yml
```
##### EXPECTED RESULTS
```console
$ ansible-playbook -i localhost, -c local pb.yml
ERROR! Traceback (most recent call last):
File "/proj/ansible/ansible/lib/ansible/playbook/block.py", line 130, in _load_block
use_handlers=self._use_handlers,
File "/proj/ansible/ansible/lib/ansible/playbook/helpers.py", line 106, in load_list_of_tasks
raise AnsibleAssertionError('The ds (%s) should be a dict but was a %s' % (ds, type(ds)))
ansible.errors.AnsibleAssertionError: The ds (['oops']) should be a dict but was a <class 'list'>
```
or something similar, because just _eating_ the `AnsibleParserError` makes tracking down the erroneous list item harder than necessary
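For reference, a minimal sketch (illustrative only, not the actual `block.py` code path) of re-raising while keeping the original exception via the existing `orig_exc` parameter instead of eating it:
```python
from ansible.errors import AnsibleAssertionError, AnsibleParserError
try:
    raise AnsibleAssertionError("The ds (['oops']) should be a dict but was a <class 'list'>")
except AnsibleAssertionError as e:
    # attach the original error so -vvv / ANSIBLE_DEBUG can surface it later
    raise AnsibleParserError('A malformed block was encountered while loading a block', orig_exc=e)
```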
##### ACTUAL RESULTS
```console
$ ansible-playbook -i localhost, -c local -vvvvvvvvv pb.yml
ansible-playbook 2.10.0.dev0
config file = None
configured module search path = ['/usr/share/ansible/plugins/modules']
ansible python module location = /proj/ansible/ansible/lib/ansible
executable location = /proj/ansible/ansible/venv/bin/ansible-playbook
python version = 3.7.7 (default, Mar 11 2020, 11:44:20) [GCC 9.2.1 20191008]
No config file found; using defaults
setting up inventory plugins
Set default localhost to localhost
Parsed localhost, inventory source with host_list plugin
statically imported: /proj/ansible/ansible/bug.yml
ERROR! A malformed block was encountered while loading a block
```
and the same outcome when run with `ANSIBLE_DEBUG=1`, which was especially surprising
```console
$ ANSIBLE_DEBUG=1 ansible-playbook -i localhost, -c local -vvvvvvvvv pb.yml
## snip
22539 1585707905.71204: Loading BecomeModule 'sudo' from /proj/ansible/ansible/lib/ansible/plugins/become/sudo.py (found_in_cache=False, class_only=True)
22539 1585707905.71232: Loading data from /proj/ansible/ansible/bug0.yml
22539 1585707905.71591: in VariableManager get_vars()
22539 1585707905.71621: done with get_vars()
22539 1585707905.71634: Loading data from /proj/ansible/ansible/bug00.yml
statically imported: /proj/ansible/ansible/bug.yml
22539 1585707905.71686: RUNNING CLEANUP
ERROR! A malformed block was encountered while loading a block
```
|
https://github.com/ansible/ansible/issues/68605
|
https://github.com/ansible/ansible/pull/72677
|
fb092a82a1a013fd38a37b90b305fc9a8fa11a13
|
46198cf80aa6001d058bc32c00e242d161715dad
| 2020-04-01T02:31:15Z |
python
| 2020-11-19T19:40:22Z |
changelogs/fragments/68605-ansible-error-orig-exc-context.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,605 |
import_task swallows AnsibleParserError
|
##### SUMMARY
A malformed list item in an included task file yields a cryptic message
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/cli/scripts/ansible_cli_stub.py
##### ANSIBLE VERSION
Confirmed as of c888035e
```
ansible 2.10.0.dev0
config file = None
configured module search path = ['/usr/share/ansible/plugins/modules']
ansible python module location = /proj/ansible/ansible/venv/lib/python3.7/site-packages/ansible_base-2.10.0.dev0-py3.7.egg/ansible
executable location = /proj/ansible/ansible/venv/bin/ansible
python version = 3.7.7 (default, Mar 11 2020, 11:44:20) [GCC 9.2.1 20191008]
```
##### CONFIGURATION
##### OS / ENVIRONMENT
##### STEPS TO REPRODUCE
* bug.yml
```yaml
---
- oops
```
* pb.yml
```yaml
- hosts: all
gather_facts: no
tasks:
- include_tasks: bug.yml
```
##### EXPECTED RESULTS
```console
$ ansible-playbook -i localhost, -c local pb.yml
ERROR! Traceback (most recent call last):
File "/proj/ansible/ansible/lib/ansible/playbook/block.py", line 130, in _load_block
use_handlers=self._use_handlers,
File "/proj/ansible/ansible/lib/ansible/playbook/helpers.py", line 106, in load_list_of_tasks
raise AnsibleAssertionError('The ds (%s) should be a dict but was a %s' % (ds, type(ds)))
ansible.errors.AnsibleAssertionError: The ds (['oops']) should be a dict but was a <class 'list'>
```
or something similar, because just _eating_ the `AnsibleParserError` makes tracking down the erroneous list item harder than necessary
##### ACTUAL RESULTS
```console
$ ansible-playbook -i localhost, -c local -vvvvvvvvv pb.yml
ansible-playbook 2.10.0.dev0
config file = None
configured module search path = ['/usr/share/ansible/plugins/modules']
ansible python module location = /proj/ansible/ansible/lib/ansible
executable location = /proj/ansible/ansible/venv/bin/ansible-playbook
python version = 3.7.7 (default, Mar 11 2020, 11:44:20) [GCC 9.2.1 20191008]
No config file found; using defaults
setting up inventory plugins
Set default localhost to localhost
Parsed localhost, inventory source with host_list plugin
statically imported: /proj/ansible/ansible/bug.yml
ERROR! A malformed block was encountered while loading a block
```
and the same outcome when run with `ANSIBLE_DEBUG=1`, which was especially surprising
```console
$ ANSIBLE_DEBUG=1 ansible-playbook -i localhost, -c local -vvvvvvvvv pb.yml
## snip
22539 1585707905.71204: Loading BecomeModule 'sudo' from /proj/ansible/ansible/lib/ansible/plugins/become/sudo.py (found_in_cache=False, class_only=True)
22539 1585707905.71232: Loading data from /proj/ansible/ansible/bug0.yml
22539 1585707905.71591: in VariableManager get_vars()
22539 1585707905.71621: done with get_vars()
22539 1585707905.71634: Loading data from /proj/ansible/ansible/bug00.yml
statically imported: /proj/ansible/ansible/bug.yml
22539 1585707905.71686: RUNNING CLEANUP
ERROR! A malformed block was encountered while loading a block
```
|
https://github.com/ansible/ansible/issues/68605
|
https://github.com/ansible/ansible/pull/72677
|
fb092a82a1a013fd38a37b90b305fc9a8fa11a13
|
46198cf80aa6001d058bc32c00e242d161715dad
| 2020-04-01T02:31:15Z |
python
| 2020-11-19T19:40:22Z |
lib/ansible/errors/__init__.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import re
from ansible.errors.yaml_strings import (
YAML_COMMON_DICT_ERROR,
YAML_COMMON_LEADING_TAB_ERROR,
YAML_COMMON_PARTIALLY_QUOTED_LINE_ERROR,
YAML_COMMON_UNBALANCED_QUOTES_ERROR,
YAML_COMMON_UNQUOTED_COLON_ERROR,
YAML_COMMON_UNQUOTED_VARIABLE_ERROR,
YAML_POSITION_DETAILS,
YAML_AND_SHORTHAND_ERROR,
)
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.common._collections_compat import Sequence
class AnsibleError(Exception):
'''
This is the base class for all errors raised from Ansible code,
and can be instantiated with two optional parameters beyond the
error message to control whether detailed information is displayed
when the error occurred while parsing a data file of some kind.
Usage:
raise AnsibleError('some message here', obj=obj, show_content=True)
Where "obj" is some subclass of ansible.parsing.yaml.objects.AnsibleBaseYAMLObject,
which should be returned by the DataLoader() class.
'''
def __init__(self, message="", obj=None, show_content=True, suppress_extended_error=False, orig_exc=None):
super(AnsibleError, self).__init__(message)
self._show_content = show_content
self._suppress_extended_error = suppress_extended_error
self._message = to_native(message)
self.obj = obj
if orig_exc:
self.orig_exc = orig_exc
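        # NOTE: orig_exc is only set when a truthy value is supplied, so
        # consumers should read it with getattr(err, 'orig_exc', None) rather
        # than assuming the attribute always exists.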
@property
def message(self):
# we import this here to prevent an import loop problem,
# since the objects code also imports ansible.errors
from ansible.parsing.yaml.objects import AnsibleBaseYAMLObject
if isinstance(self.obj, AnsibleBaseYAMLObject):
extended_error = self._get_extended_error()
if extended_error and not self._suppress_extended_error:
return '%s\n\n%s' % (self._message, to_native(extended_error))
return self._message
@message.setter
def message(self, val):
self._message = val
def __str__(self):
return self.message
def __repr__(self):
return self.message
def _get_error_lines_from_file(self, file_name, line_number):
'''
Returns the line in the file which corresponds to the reported error
location, as well as the line preceding it (if the error did not
occur on the first line), to provide context to the error.
'''
target_line = ''
prev_line = ''
with open(file_name, 'r') as f:
lines = f.readlines()
target_line = lines[line_number]
if line_number > 0:
prev_line = lines[line_number - 1]
return (target_line, prev_line)
def _get_extended_error(self):
'''
Given an object reporting the location of the exception in a file, return
detailed information regarding it including:
* the line which caused the error as well as the one preceding it
* causes and suggested remedies for common syntax errors
If this error was created with show_content=False, the reporting of content
is suppressed, as the file contents may be sensitive (ie. vault data).
'''
error_message = ''
try:
(src_file, line_number, col_number) = self.obj.ansible_pos
error_message += YAML_POSITION_DETAILS % (src_file, line_number, col_number)
if src_file not in ('<string>', '<unicode>') and self._show_content:
(target_line, prev_line) = self._get_error_lines_from_file(src_file, line_number - 1)
target_line = to_text(target_line)
prev_line = to_text(prev_line)
if target_line:
stripped_line = target_line.replace(" ", "")
# Check for k=v syntax in addition to YAML syntax and set the appropriate error position,
# arrow index
if re.search(r'\w+(\s+)?=(\s+)?[\w/-]+', prev_line):
error_position = prev_line.rstrip().find('=')
arrow_line = (" " * error_position) + "^ here"
error_message = YAML_POSITION_DETAILS % (src_file, line_number - 1, error_position + 1)
error_message += "\nThe offending line appears to be:\n\n%s\n%s\n\n" % (prev_line.rstrip(), arrow_line)
error_message += YAML_AND_SHORTHAND_ERROR
else:
arrow_line = (" " * (col_number - 1)) + "^ here"
error_message += "\nThe offending line appears to be:\n\n%s\n%s\n%s\n" % (prev_line.rstrip(), target_line.rstrip(), arrow_line)
# TODO: There may be cases where there is a valid tab in a line that has other errors.
if '\t' in target_line:
error_message += YAML_COMMON_LEADING_TAB_ERROR
# common error/remediation checking here:
# check for unquoted vars starting lines
if ('{{' in target_line and '}}' in target_line) and ('"{{' not in target_line or "'{{" not in target_line):
error_message += YAML_COMMON_UNQUOTED_VARIABLE_ERROR
# check for common dictionary mistakes
elif ":{{" in stripped_line and "}}" in stripped_line:
error_message += YAML_COMMON_DICT_ERROR
# check for common unquoted colon mistakes
elif (len(target_line) and
len(target_line) > 1 and
len(target_line) > col_number and
target_line[col_number] == ":" and
target_line.count(':') > 1):
error_message += YAML_COMMON_UNQUOTED_COLON_ERROR
# otherwise, check for some common quoting mistakes
else:
# FIXME: This needs to split on the first ':' to account for modules like lineinfile
# that may have lines that contain legitimate colons, e.g., line: 'i ALL= (ALL) NOPASSWD: ALL'
# and throw off the quote matching logic.
parts = target_line.split(":")
if len(parts) > 1:
middle = parts[1].strip()
match = False
unbalanced = False
if middle.startswith("'") and not middle.endswith("'"):
match = True
elif middle.startswith('"') and not middle.endswith('"'):
match = True
if (len(middle) > 0 and
middle[0] in ['"', "'"] and
middle[-1] in ['"', "'"] and
target_line.count("'") > 2 or
target_line.count('"') > 2):
unbalanced = True
if match:
error_message += YAML_COMMON_PARTIALLY_QUOTED_LINE_ERROR
if unbalanced:
error_message += YAML_COMMON_UNBALANCED_QUOTES_ERROR
except (IOError, TypeError):
error_message += '\n(could not open file to display line)'
except IndexError:
error_message += '\n(specified line no longer in file, maybe it changed?)'
return error_message
class AnsibleAssertionError(AnsibleError, AssertionError):
'''Invalid assertion'''
pass
class AnsibleOptionsError(AnsibleError):
''' bad or incomplete options passed '''
pass
class AnsibleParserError(AnsibleError):
''' something was detected early that is wrong about a playbook or data file '''
pass
class AnsibleInternalError(AnsibleError):
''' internal safeguards tripped, something happened in the code that should never happen '''
pass
class AnsibleRuntimeError(AnsibleError):
''' ansible had a problem while running a playbook '''
pass
class AnsibleModuleError(AnsibleRuntimeError):
''' a module failed somehow '''
pass
class AnsibleConnectionFailure(AnsibleRuntimeError):
''' the transport / connection_plugin had a fatal error '''
pass
class AnsibleAuthenticationFailure(AnsibleConnectionFailure):
'''invalid username/password/key'''
pass
class AnsibleCallbackError(AnsibleRuntimeError):
''' a callback failure '''
pass
class AnsibleTemplateError(AnsibleRuntimeError):
'''A template related error'''
pass
class AnsibleFilterError(AnsibleTemplateError):
''' a templating failure '''
pass
class AnsibleLookupError(AnsibleTemplateError):
''' a lookup failure '''
pass
class AnsibleUndefinedVariable(AnsibleTemplateError):
''' a templating failure '''
pass
class AnsibleFileNotFound(AnsibleRuntimeError):
''' a file missing failure '''
def __init__(self, message="", obj=None, show_content=True, suppress_extended_error=False, orig_exc=None, paths=None, file_name=None):
self.file_name = file_name
self.paths = paths
if message:
message += "\n"
if self.file_name:
message += "Could not find or access '%s'" % to_text(self.file_name)
else:
message += "Could not find file"
if self.paths and isinstance(self.paths, Sequence):
searched = to_text('\n\t'.join(self.paths))
if message:
message += "\n"
message += "Searched in:\n\t%s" % searched
message += " on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option"
super(AnsibleFileNotFound, self).__init__(message=message, obj=obj, show_content=show_content,
suppress_extended_error=suppress_extended_error, orig_exc=orig_exc)
# These Exceptions are temporary, using them as flow control until we can get a better solution.
# DO NOT USE as they will probably be removed soon.
# We will port the action modules in our tree to use a context manager instead.
class AnsibleAction(AnsibleRuntimeError):
''' Base Exception for Action plugin flow control '''
def __init__(self, message="", obj=None, show_content=True, suppress_extended_error=False, orig_exc=None, result=None):
super(AnsibleAction, self).__init__(message=message, obj=obj, show_content=show_content,
suppress_extended_error=suppress_extended_error, orig_exc=orig_exc)
if result is None:
self.result = {}
else:
self.result = result
class AnsibleActionSkip(AnsibleAction):
''' an action runtime skip'''
def __init__(self, message="", obj=None, show_content=True, suppress_extended_error=False, orig_exc=None, result=None):
super(AnsibleActionSkip, self).__init__(message=message, obj=obj, show_content=show_content,
suppress_extended_error=suppress_extended_error, orig_exc=orig_exc, result=result)
self.result.update({'skipped': True, 'msg': message})
class AnsibleActionFail(AnsibleAction):
''' an action runtime failure'''
def __init__(self, message="", obj=None, show_content=True, suppress_extended_error=False, orig_exc=None, result=None):
super(AnsibleActionFail, self).__init__(message=message, obj=obj, show_content=show_content,
suppress_extended_error=suppress_extended_error, orig_exc=orig_exc, result=result)
self.result.update({'failed': True, 'msg': message})
class _AnsibleActionDone(AnsibleAction):
''' an action runtime early exit'''
pass
class AnsiblePluginError(AnsibleError):
''' base class for Ansible plugin-related errors that do not need AnsibleError contextual data '''
def __init__(self, message=None, plugin_load_context=None):
super(AnsiblePluginError, self).__init__(message)
self.plugin_load_context = plugin_load_context
class AnsiblePluginRemovedError(AnsiblePluginError):
''' a requested plugin has been removed '''
pass
class AnsiblePluginCircularRedirect(AnsiblePluginError):
'''a cycle was detected in plugin redirection'''
pass
class AnsibleCollectionUnsupportedVersionError(AnsiblePluginError):
'''a collection is not supported by this version of Ansible'''
pass
class AnsibleFilterTypeError(AnsibleTemplateError, TypeError):
''' a Jinja filter templating failure due to bad type'''
pass
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,417 |
unique filter does not respect order
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
the `unique` filter does not preserve input order and gives non-deterministic results between runs
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/plugins/filter/mathstuff.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.5
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
I have the following statement in my template
```jinja2
{% for host in url_list | map('urlsplit', 'hostname') | unique %}
upstream {{ host }} {
server {{ host }}:443;
keepalive 20;
keepalive_requests 500;
keepalive_timeout 30s;
}
{% endfor %}
```
which extracts the unique host names from a URL list and declares an upstream server block for each.
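For comparison, an order-preserving de-duplication is trivial in plain Python (a sketch assuming hashable items, which host name strings are):
```python
def ordered_unique(items):
    seen = set()
    out = []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out
print(ordered_unique(['b.example.org', 'a.example.org', 'b.example.org']))
# ['b.example.org', 'a.example.org']
```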
##### EXPECTED RESULTS
The order of hosts should at least be consistent between runs.
##### ACTUAL RESULTS
The order of hosts is different for each run. Unexpected handlers are triggered.
|
https://github.com/ansible/ansible/issues/63417
|
https://github.com/ansible/ansible/pull/67856
|
35022e13a839d5f59a1c4a254aca12afb124373a
|
ae08c6a639b492ee5ad24048f40c8a5792eacdcb
| 2019-10-12T13:11:44Z |
python
| 2020-11-23T07:55:18Z |
changelogs/fragments/63417-unique-filter-preserve-order.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,417 |
unique filter does not respect order
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
the `unique` filter does not preserve input order and gives non-deterministic results between runs
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/plugins/filter/mathstuff.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.5
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
I have the following statement in my template
```jinja2
{% for host in url_list | map('urlsplit', 'hostname') | unique %}
upstream {{ host }} {
server {{ host }}:443;
keepalive 20;
keepalive_requests 500;
keepalive_timeout 30s;
}
{% endfor %}
```
which extracts the unique host names from a URL list and declares an upstream server block for each.
##### EXPECTED RESULTS
The order of hosts should at least be consistent between runs.
##### ACTUAL RESULTS
The order of hosts is different for each run. Unexpected handlers are triggered.
|
https://github.com/ansible/ansible/issues/63417
|
https://github.com/ansible/ansible/pull/67856
|
35022e13a839d5f59a1c4a254aca12afb124373a
|
ae08c6a639b492ee5ad24048f40c8a5792eacdcb
| 2019-10-12T13:11:44Z |
python
| 2020-11-23T07:55:18Z |
lib/ansible/plugins/filter/mathstuff.py
|
# Copyright 2014, Brian Coca <[email protected]>
# Copyright 2017, Ken Celenza <[email protected]>
# Copyright 2017, Jason Edelman <[email protected]>
# Copyright 2017, Ansible Project
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import itertools
import math
from jinja2.filters import environmentfilter
from ansible.errors import AnsibleFilterError, AnsibleFilterTypeError
from ansible.module_utils.common.text import formatters
from ansible.module_utils.six import binary_type, text_type
from ansible.module_utils.six.moves import zip, zip_longest
from ansible.module_utils.common._collections_compat import Hashable, Mapping, Iterable
from ansible.module_utils._text import to_native, to_text
from ansible.utils.display import Display
try:
from jinja2.filters import do_unique
HAS_UNIQUE = True
except ImportError:
HAS_UNIQUE = False
try:
from jinja2.filters import do_max, do_min
HAS_MIN_MAX = True
except ImportError:
HAS_MIN_MAX = False
display = Display()
@environmentfilter
def unique(environment, a, case_sensitive=False, attribute=None):
def _do_fail(e):
if case_sensitive or attribute:
raise AnsibleFilterError("Jinja2's unique filter failed and we cannot fall back to Ansible's version "
"as it does not support the parameters supplied", orig_exc=e)
error = e = None
try:
if HAS_UNIQUE:
c = do_unique(environment, a, case_sensitive=case_sensitive, attribute=attribute)
if isinstance(a, Hashable):
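                    # NOTE (editor's comment): set() discards the order that
                    # do_unique() preserved; generators (e.g. the result of
                    # Jinja2's map()) count as Hashable, so they take this
                    # branch -- the source of the non-deterministic ordering
                    # reported in this issue.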
c = set(c)
else:
c = list(c)
except TypeError as e:
error = e
_do_fail(e)
except Exception as e:
error = e
_do_fail(e)
display.warning('Falling back to Ansible unique filter as Jinja2 one failed: %s' % to_text(e))
if not HAS_UNIQUE or error:
# handle Jinja2 specific attributes when using Ansible's version
if case_sensitive or attribute:
raise AnsibleFilterError("Ansible's unique filter does not support case_sensitive nor attribute parameters, "
"you need a newer version of Jinja2 that provides their version of the filter.")
if isinstance(a, Hashable):
c = set(a)
else:
c = []
for x in a:
if x not in c:
c.append(x)
return c
@environmentfilter
def intersect(environment, a, b):
if isinstance(a, Hashable) and isinstance(b, Hashable):
c = set(a) & set(b)
else:
c = unique(environment, [x for x in a if x in b])
return c
@environmentfilter
def difference(environment, a, b):
if isinstance(a, Hashable) and isinstance(b, Hashable):
c = set(a) - set(b)
else:
c = unique(environment, [x for x in a if x not in b])
return c
@environmentfilter
def symmetric_difference(environment, a, b):
if isinstance(a, Hashable) and isinstance(b, Hashable):
c = set(a) ^ set(b)
else:
isect = intersect(environment, a, b)
c = [x for x in union(environment, a, b) if x not in isect]
return c
@environmentfilter
def union(environment, a, b):
if isinstance(a, Hashable) and isinstance(b, Hashable):
c = set(a) | set(b)
else:
c = unique(environment, a + b)
return c
@environmentfilter
def min(environment, a, **kwargs):
if HAS_MIN_MAX:
return do_min(environment, a, **kwargs)
else:
if kwargs:
raise AnsibleFilterError("Ansible's min filter does not support any keyword arguments. "
"You need Jinja2 2.10 or later that provides their version of the filter.")
_min = __builtins__.get('min')
return _min(a)
@environmentfilter
def max(environment, a, **kwargs):
if HAS_MIN_MAX:
return do_max(environment, a, **kwargs)
else:
if kwargs:
raise AnsibleFilterError("Ansible's max filter does not support any keyword arguments. "
"You need Jinja2 2.10 or later that provides their version of the filter.")
_max = __builtins__.get('max')
return _max(a)
def logarithm(x, base=math.e):
try:
if base == 10:
return math.log10(x)
else:
return math.log(x, base)
except TypeError as e:
raise AnsibleFilterTypeError('log() can only be used on numbers: %s' % to_native(e))
def power(x, y):
try:
return math.pow(x, y)
except TypeError as e:
raise AnsibleFilterTypeError('pow() can only be used on numbers: %s' % to_native(e))
def inversepower(x, base=2):
try:
if base == 2:
return math.sqrt(x)
else:
return math.pow(x, 1.0 / float(base))
except (ValueError, TypeError) as e:
raise AnsibleFilterTypeError('root() can only be used on numbers: %s' % to_native(e))
def human_readable(size, isbits=False, unit=None):
''' Return a human readable string '''
try:
return formatters.bytes_to_human(size, isbits, unit)
except TypeError as e:
raise AnsibleFilterTypeError("human_readable() failed on bad input: %s" % to_native(e))
except Exception:
raise AnsibleFilterError("human_readable() can't interpret following string: %s" % size)
def human_to_bytes(size, default_unit=None, isbits=False):
''' Return bytes count from a human readable string '''
try:
return formatters.human_to_bytes(size, default_unit, isbits)
except TypeError as e:
raise AnsibleFilterTypeError("human_to_bytes() failed on bad input: %s" % to_native(e))
except Exception:
raise AnsibleFilterError("human_to_bytes() can't interpret following string: %s" % size)
def rekey_on_member(data, key, duplicates='error'):
"""
Rekey a dict of dicts on another member
May also create a dict from a list of dicts.
duplicates can be one of ``error`` or ``overwrite`` to specify whether to error out if the key
value would be duplicated or to overwrite previous entries if that's the case.
"""
if duplicates not in ('error', 'overwrite'):
raise AnsibleFilterError("duplicates parameter to rekey_on_member has unknown value: {0}".format(duplicates))
new_obj = {}
if isinstance(data, Mapping):
iterate_over = data.values()
elif isinstance(data, Iterable) and not isinstance(data, (text_type, binary_type)):
iterate_over = data
else:
raise AnsibleFilterTypeError("Type is not a valid list, set, or dict")
for item in iterate_over:
if not isinstance(item, Mapping):
raise AnsibleFilterTypeError("List item is not a valid dict")
try:
key_elem = item[key]
except KeyError:
raise AnsibleFilterError("Key {0} was not found".format(key))
except TypeError as e:
raise AnsibleFilterTypeError(to_native(e))
except Exception as e:
raise AnsibleFilterError(to_native(e))
# Note: if new_obj[key_elem] exists it will always be a non-empty dict (it will at
# minimum contain {key: key_elem}
if new_obj.get(key_elem, None):
if duplicates == 'error':
raise AnsibleFilterError("Key {0} is not unique, cannot correctly turn into dict".format(key_elem))
elif duplicates == 'overwrite':
new_obj[key_elem] = item
else:
new_obj[key_elem] = item
return new_obj
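# Editor's illustrative sketch of rekey_on_member(), mirroring the unit tests:
#   rekey_on_member([{'proto': 'eigrp', 'state': 'enabled'},
#                    {'proto': 'ospf', 'state': 'enabled'}], 'proto')
#   -> {'eigrp': {'proto': 'eigrp', 'state': 'enabled'},
#       'ospf': {'proto': 'ospf', 'state': 'enabled'}}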
class FilterModule(object):
''' Ansible math jinja2 filters '''
def filters(self):
filters = {
# general math
'min': min,
'max': max,
# exponents and logarithms
'log': logarithm,
'pow': power,
'root': inversepower,
# set theory
'unique': unique,
'intersect': intersect,
'difference': difference,
'symmetric_difference': symmetric_difference,
'union': union,
# combinatorial
'product': itertools.product,
'permutations': itertools.permutations,
'combinations': itertools.combinations,
# computer theory
'human_readable': human_readable,
'human_to_bytes': human_to_bytes,
'rekey_on_member': rekey_on_member,
# zip
'zip': zip,
'zip_longest': zip_longest,
}
return filters
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,417 |
unique filter does not respect order
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
the `unique` filter does not preserve input order and gives non-deterministic results between runs
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/plugins/filter/mathstuff.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.5
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
I have the following statement in my template
```jinja2
{% for host in url_list | map('urlsplit', 'hostname') | unique %}
upstream {{ host }} {
server {{ host }}:443;
keepalive 20;
keepalive_requests 500;
keepalive_timeout 30s;
}
{% endfor %}
```
which extracts the unique host names from a URL list and declares an upstream server block for each.
##### EXPECTED RESULTS
The order of hosts should at least be consistent between runs.
##### ACTUAL RESULTS
The order of hosts is different for each run. Unexpected handlers are triggered.
|
https://github.com/ansible/ansible/issues/63417
|
https://github.com/ansible/ansible/pull/67856
|
35022e13a839d5f59a1c4a254aca12afb124373a
|
ae08c6a639b492ee5ad24048f40c8a5792eacdcb
| 2019-10-12T13:11:44Z |
python
| 2020-11-23T07:55:18Z |
test/units/plugins/filter/test_mathstuff.py
|
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import pytest
from jinja2 import Environment
import ansible.plugins.filter.mathstuff as ms
from ansible.errors import AnsibleFilterError, AnsibleFilterTypeError
UNIQUE_DATA = (([1, 3, 4, 2], sorted([1, 2, 3, 4])),
([1, 3, 2, 4, 2, 3], sorted([1, 2, 3, 4])),
(['a', 'b', 'c', 'd'], sorted(['a', 'b', 'c', 'd'])),
(['a', 'a', 'd', 'b', 'a', 'd', 'c', 'b'], sorted(['a', 'b', 'c', 'd'])),
)
TWO_SETS_DATA = (([1, 2], [3, 4], ([], sorted([1, 2]), sorted([1, 2, 3, 4]), sorted([1, 2, 3, 4]))),
([1, 2, 3], [5, 3, 4], ([3], sorted([1, 2]), sorted([1, 2, 5, 4]), sorted([1, 2, 3, 4, 5]))),
(['a', 'b', 'c'], ['d', 'c', 'e'], (['c'], sorted(['a', 'b']), sorted(['a', 'b', 'd', 'e']), sorted(['a', 'b', 'c', 'e', 'd']))),
)
env = Environment()
@pytest.mark.parametrize('data, expected', UNIQUE_DATA)
class TestUnique:
def test_unhashable(self, data, expected):
assert sorted(ms.unique(env, list(data))) == expected
def test_hashable(self, data, expected):
assert sorted(ms.unique(env, tuple(data))) == expected
@pytest.mark.parametrize('dataset1, dataset2, expected', TWO_SETS_DATA)
class TestIntersect:
def test_unhashable(self, dataset1, dataset2, expected):
assert sorted(ms.intersect(env, list(dataset1), list(dataset2))) == expected[0]
def test_hashable(self, dataset1, dataset2, expected):
assert sorted(ms.intersect(env, tuple(dataset1), tuple(dataset2))) == expected[0]
@pytest.mark.parametrize('dataset1, dataset2, expected', TWO_SETS_DATA)
class TestDifference:
def test_unhashable(self, dataset1, dataset2, expected):
assert sorted(ms.difference(env, list(dataset1), list(dataset2))) == expected[1]
def test_hashable(self, dataset1, dataset2, expected):
assert sorted(ms.difference(env, tuple(dataset1), tuple(dataset2))) == expected[1]
@pytest.mark.parametrize('dataset1, dataset2, expected', TWO_SETS_DATA)
class TestSymmetricDifference:
def test_unhashable(self, dataset1, dataset2, expected):
assert sorted(ms.symmetric_difference(env, list(dataset1), list(dataset2))) == expected[2]
def test_hashable(self, dataset1, dataset2, expected):
assert sorted(ms.symmetric_difference(env, tuple(dataset1), tuple(dataset2))) == expected[2]
class TestMin:
def test_min(self):
assert ms.min(env, (1, 2)) == 1
assert ms.min(env, (2, 1)) == 1
assert ms.min(env, ('p', 'a', 'w', 'b', 'p')) == 'a'
assert ms.min(env, ({'key': 'a'}, {'key': 'b'}, {'key': 'c'}), attribute='key') == {'key': 'a'}
assert ms.min(env, ({'key': 1}, {'key': 2}, {'key': 3}), attribute='key') == {'key': 1}
assert ms.min(env, ('a', 'A', 'b', 'B'), case_sensitive=True) == 'A'
class TestMax:
def test_max(self):
assert ms.max(env, (1, 2)) == 2
assert ms.max(env, (2, 1)) == 2
assert ms.max(env, ('p', 'a', 'w', 'b', 'p')) == 'w'
assert ms.max(env, ({'key': 'a'}, {'key': 'b'}, {'key': 'c'}), attribute='key') == {'key': 'c'}
assert ms.max(env, ({'key': 1}, {'key': 2}, {'key': 3}), attribute='key') == {'key': 3}
assert ms.max(env, ('a', 'A', 'b', 'B'), case_sensitive=True) == 'b'
class TestLogarithm:
def test_log_non_number(self):
# Message changed in python3.6
with pytest.raises(AnsibleFilterTypeError, match='log\\(\\) can only be used on numbers: (a float is required|must be real number, not str)'):
ms.logarithm('a')
with pytest.raises(AnsibleFilterTypeError, match='log\\(\\) can only be used on numbers: (a float is required|must be real number, not str)'):
ms.logarithm(10, base='a')
def test_log_ten(self):
assert ms.logarithm(10, 10) == 1.0
assert ms.logarithm(69, 10) * 1000 // 1 == 1838
def test_log_natural(self):
assert ms.logarithm(69) * 1000 // 1 == 4234
def test_log_two(self):
assert ms.logarithm(69, 2) * 1000 // 1 == 6108
class TestPower:
def test_power_non_number(self):
# Message changed in python3.6
with pytest.raises(AnsibleFilterTypeError, match='pow\\(\\) can only be used on numbers: (a float is required|must be real number, not str)'):
ms.power('a', 10)
with pytest.raises(AnsibleFilterTypeError, match='pow\\(\\) can only be used on numbers: (a float is required|must be real number, not str)'):
ms.power(10, 'a')
def test_power_squared(self):
assert ms.power(10, 2) == 100
def test_power_cubed(self):
assert ms.power(10, 3) == 1000
class TestInversePower:
def test_root_non_number(self):
# Messages differed in python-2.6, python-2.7-3.5, and python-3.6+
with pytest.raises(AnsibleFilterTypeError, match="root\\(\\) can only be used on numbers:"
" (invalid literal for float\\(\\): a"
"|could not convert string to float: a"
"|could not convert string to float: 'a')"):
ms.inversepower(10, 'a')
with pytest.raises(AnsibleFilterTypeError, match="root\\(\\) can only be used on numbers: (a float is required|must be real number, not str)"):
ms.inversepower('a', 10)
def test_square_root(self):
assert ms.inversepower(100) == 10
assert ms.inversepower(100, 2) == 10
def test_cube_root(self):
assert ms.inversepower(27, 3) == 3
class TestRekeyOnMember():
# (Input data structure, member to rekey on, expected return)
VALID_ENTRIES = (
([{"proto": "eigrp", "state": "enabled"}, {"proto": "ospf", "state": "enabled"}],
'proto',
{'eigrp': {'state': 'enabled', 'proto': 'eigrp'}, 'ospf': {'state': 'enabled', 'proto': 'ospf'}}),
({'eigrp': {"proto": "eigrp", "state": "enabled"}, 'ospf': {"proto": "ospf", "state": "enabled"}},
'proto',
{'eigrp': {'state': 'enabled', 'proto': 'eigrp'}, 'ospf': {'state': 'enabled', 'proto': 'ospf'}}),
)
# (Input data structure, member to rekey on, expected error message)
INVALID_ENTRIES = (
# Fail when key is not found
(AnsibleFilterError, [{"proto": "eigrp", "state": "enabled"}], 'invalid_key', "Key invalid_key was not found"),
(AnsibleFilterError, {"eigrp": {"proto": "eigrp", "state": "enabled"}}, 'invalid_key', "Key invalid_key was not found"),
# Fail when key is duplicated
(AnsibleFilterError, [{"proto": "eigrp"}, {"proto": "ospf"}, {"proto": "ospf"}],
'proto', 'Key ospf is not unique, cannot correctly turn into dict'),
# Fail when value is not a dict
(AnsibleFilterTypeError, ["string"], 'proto', "List item is not a valid dict"),
(AnsibleFilterTypeError, [123], 'proto', "List item is not a valid dict"),
(AnsibleFilterTypeError, [[{'proto': 1}]], 'proto', "List item is not a valid dict"),
# Fail when we do not send a dict or list
(AnsibleFilterTypeError, "string", 'proto', "Type is not a valid list, set, or dict"),
(AnsibleFilterTypeError, 123, 'proto', "Type is not a valid list, set, or dict"),
)
@pytest.mark.parametrize("list_original, key, expected", VALID_ENTRIES)
def test_rekey_on_member_success(self, list_original, key, expected):
assert ms.rekey_on_member(list_original, key) == expected
@pytest.mark.parametrize("expected_exception_type, list_original, key, expected", INVALID_ENTRIES)
def test_fail_rekey_on_member(self, expected_exception_type, list_original, key, expected):
with pytest.raises(expected_exception_type) as err:
ms.rekey_on_member(list_original, key)
assert err.value.message == expected
def test_duplicate_strategy_overwrite(self):
list_original = ({'proto': 'eigrp', 'id': 1}, {'proto': 'ospf', 'id': 2}, {'proto': 'eigrp', 'id': 3})
expected = {'eigrp': {'proto': 'eigrp', 'id': 3}, 'ospf': {'proto': 'ospf', 'id': 2}}
assert ms.rekey_on_member(list_original, 'proto', duplicates='overwrite') == expected
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,701 |
Example output missing on Getting Started doc
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below, add suggestions to wording or structure -->
<!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? -->
The `echo` example in Getting Started does not include the output.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
<!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure -->
intro_getting_started.rst
##### ADDITIONAL INFORMATION
<!--- Describe how this improves the documentation, e.g. before/after situation or screenshots -->
<!--- HINT: You can paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/72701
|
https://github.com/ansible/ansible/pull/72718
|
0fa1cd88ce57167292803e9257fe2ed50cb3e1e0
|
74196577a3295a98dc610306ef43879f4f36e6e7
| 2020-11-20T21:59:32Z |
python
| 2020-11-30T14:59:27Z |
docs/docsite/rst/user_guide/intro_getting_started.rst
|
.. _intro_getting_started:
***************
Getting Started
***************
Now that you have read the :ref:`installation guide<installation_guide>` and installed Ansible on a control node, you are ready to learn how Ansible works. A basic Ansible command or playbook:
* selects machines to execute against from inventory
* connects to those machines (or network devices, or other managed nodes), usually over SSH
* copies one or more modules to the remote machines and starts execution there
Ansible can do much more, but you should understand the most common use case before exploring all the powerful configuration, deployment, and orchestration features of Ansible. This page illustrates the basic process with a simple inventory and an ad-hoc command. Once you understand how Ansible works, you can read more details about :ref:`ad-hoc commands<intro_adhoc>`, organize your infrastructure with :ref:`inventory<intro_inventory>`, and harness the full power of Ansible with :ref:`playbooks<playbooks_intro>`.
.. contents::
:local:
Selecting machines from inventory
=================================
Ansible reads information about which machines you want to manage from your inventory. Although you can pass an IP address to an ad-hoc command, you need inventory to take advantage of the full flexibility and repeatability of Ansible.
Action: create a basic inventory
--------------------------------
For this basic inventory, edit (or create) ``/etc/ansible/hosts`` and add a few remote systems to it. For this example, use either IP addresses or FQDNs:
.. code-block:: text
192.0.2.50
aserver.example.org
bserver.example.org
Beyond the basics
-----------------
Your inventory can store much more than IPs and FQDNs. You can create :ref:`aliases<inventory_aliases>`, set variable values for a single host with :ref:`host vars<host_variables>`, or set variable values for multiple hosts with :ref:`group vars<group_variables>`.
.. _remote_connection_information:
Connecting to remote nodes
==========================
Ansible communicates with remote machines over the `SSH protocol <https://www.ssh.com/ssh/protocol/>`_. By default, Ansible uses native OpenSSH and connects to remote machines using your current user name, just as SSH does.
Action: check your SSH connections
----------------------------------
Confirm that you can connect using SSH to all the nodes in your inventory using the same username. If necessary, add your public SSH key to the ``authorized_keys`` file on those systems.
Beyond the basics
-----------------
You can override the default remote user name in several ways, including:
- passing the ``-u`` parameter at the command line
- setting user information in your inventory file
- setting user information in your configuration file
- setting environment variables
See :ref:`general_precedence_rules` for details on the (sometimes unintuitive) precedence of each method of passing user information. You can read more about connections in :ref:`connections`.
Copying and executing modules
=============================
Once it has connected, Ansible transfers the modules required by your command or playbook to the remote machine(s) for execution.
Action: run your first Ansible commands
---------------------------------------
Use the ping module to ping all the nodes in your inventory:
.. code-block:: bash
$ ansible all -m ping
You should see output for each host in your inventory, similar to this:
.. code-block:: ansible-output
aserver.example.org | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
Now run a live command on all of your nodes:
.. code-block:: bash
$ ansible all -a "/bin/echo hello"
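You should see output for each host similar to this (a sketch; the exact host names depend on your inventory):
.. code-block:: ansible-output
    aserver.example.org | CHANGED | rc=0 >>
    hello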
Beyond the basics
-----------------
By default Ansible uses SFTP to transfer files. If the machine or device you want to manage does not support SFTP, you can switch to SCP mode in :ref:`intro_configuration`. The files are placed in a temporary directory and executed from there.
If you need privilege escalation (sudo and similar) to run a command, pass the ``become`` flags:
.. code-block:: bash
# as bruce
$ ansible all -m ping -u bruce
# as bruce, sudoing to root (sudo is default method)
$ ansible all -m ping -u bruce --become
# as bruce, sudoing to batman
$ ansible all -m ping -u bruce --become --become-user batman
You can read more about privilege escalation in :ref:`become`.
Congratulations! You have contacted your nodes using Ansible. You used a basic inventory file and an ad-hoc command to direct Ansible to connect to specific remote nodes, copy a module file there and execute it, and return output. You have a fully working infrastructure.
Resources
=================================
- `Product Demos <https://github.com/ansible/product-demos>`_
- `Katacoda <https://katacoda.com/rhel-labs>`_
- `Workshops <https://github.com/ansible/workshops>`_
- `Ansible Examples <https://github.com/ansible/ansible-examples>`_
- `Ansible Baseline <https://github.com/ansible/ansible-baseline>`_
Next steps
==========
Next you can read about more real-world cases in :ref:`intro_adhoc`,
explore what you can do with different modules, or read about the Ansible
:ref:`working_with_playbooks` language. Ansible is not just about running commands, it
also has powerful configuration management and deployment features.
.. seealso::
:ref:`intro_inventory`
More information about inventory
:ref:`intro_adhoc`
Examples of basic commands
:ref:`working_with_playbooks`
Learning Ansible's configuration management language
`Ansible Demos <https://github.com/ansible/product-demos>`_
Demonstrations of different Ansible usecases
`RHEL Labs <https://katacoda.com/rhel-labs>`_
Labs to provide further knowledge on different topics
`Mailing List <https://groups.google.com/group/ansible-project>`_
Questions? Help? Ideas? Stop by the list on Google Groups
`irc.freenode.net <http://irc.freenode.net>`_
#ansible IRC chat channel
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,398 |
ansible-test run sanity test failed with UnicodeDecodeError
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
I followed this tutorial https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_general.html#environment-setup and ran `ansible-test sanity -v --docker --python 2.7 my_test` against the `my_test` module described there, and it failed with the following error message.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ansible-test sanity
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or
trying out features under development. This is a rapidly changing source of code and can become unstable at any point.
ansible 2.10.0.dev0
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /root/ansible/lib/ansible
executable location = /root/ansible/bin/ansible
python version = 3.6.9 (default, Nov 7 2019, 10:44:02) [GCC 8.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ansible-config dump --only-changed
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or
trying out features under development. This is a rapidly changing source of code and can become unstable at any point.
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.3 LTS
Release: 18.04
Codename: bionic
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Follow this tutorial https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_general.html#environment-setup and run `ansible-test sanity -v --docker --python 2.7 my_test` against the `my_test` module described there.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
the sanity test will just pass
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Running sanity test 'bin-symlinks'
Read 8765 sanity test ignore line(s) for Ansible from: test/sanity/ignore.txt
pre:changelogs/fragments/
path:test/integration/targets/ansible/ansible-testé.cfg
Traceback (most recent call last):
File "/root/ansible/bin/ansible-test", line 28, in <module>
main()
File "/root/ansible/bin/ansible-test", line 24, in main
cli_main()
File "/root/ansible/test/lib/ansible_test/_internal/cli.py", line 168, in main
args.func(config)
File "/root/ansible/test/lib/ansible_test/_internal/sanity/__init__.py", line 165, in command_sanity
settings = test.load_processor(args)
File "/root/ansible/test/lib/ansible_test/_internal/sanity/__init__.py", line 876, in load_processor
return SanityIgnoreProcessor(args, self, None)
File "/root/ansible/test/lib/ansible_test/_internal/sanity/__init__.py", line 450, in __init__
self.parser = SanityIgnoreParser.load(args)
File "/root/ansible/test/lib/ansible_test/_internal/sanity/__init__.py", line 428, in load
SanityIgnoreParser.instance = SanityIgnoreParser(args)
File "/root/ansible/test/lib/ansible_test/_internal/sanity/__init__.py", line 280, in __init__
paths_by_test[test.name] = set(target.path for target in test.filter_targets(test_targets))
File "/root/ansible/test/lib/ansible_test/_internal/sanity/__init__.py", line 760, in filter_targets
target.path.startswith(pre)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 45: ordinal not in range(128)
```
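For context, a minimal sketch of the failure mode (an assumption based on the traceback, since the sanity test ran under /usr/bin/python2.7): in Python 2, calling `startswith` with a unicode pattern on a byte string containing non-ASCII bytes forces an implicit ASCII decode of the byte string.
```python
# -*- coding: utf-8 -*-
# Hypothetical reproduction (Python 2 only; Python 3 raises TypeError instead).
path = b'test/integration/targets/ansible/ansible-test\xc3\xa9.cfg'  # str (bytes)
prefix = u'changelogs/fragments/'  # unicode
try:
    path.startswith(prefix)  # Python 2 decodes `path` with the ascii codec first
except UnicodeDecodeError as exc:
    print(exc)  # 'ascii' codec can't decode byte 0xc3 ...
```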
|
https://github.com/ansible/ansible/issues/68398
|
https://github.com/ansible/ansible/pull/72623
|
221c50b57c347d6f8382523c48e869cb21b8c010
|
f94ba68d8f287918456c5de8115dafb0c69e8e7c
| 2020-03-23T02:37:06Z |
python
| 2020-12-04T17:12:14Z |
changelogs/fragments/72623-ansible-test-unicode-paths.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,398 |
ansible-test run sanity test failed with UnicodeDecodeError
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
I followed this tutorial https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_general.html#environment-setup to run `ansible-test sanity -v --docker --python 2.7 my_test` against the `my_test` module mentioned in the above documentation, and it failed with the following error message.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ansible-test sanity
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or
trying out features under development. This is a rapidly changing source of code and can become unstable at any point.
ansible 2.10.0.dev0
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /root/ansible/lib/ansible
executable location = /root/ansible/bin/ansible
python version = 3.6.9 (default, Nov 7 2019, 10:44:02) [GCC 8.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ansible-config dump --only-changed
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or
trying out features under development. This is a rapidly changing source of code and can become unstable at any point.
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.3 LTS
Release: 18.04
Codename: bionic
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Follow this tutorial https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_general.html#environment-setup and run `ansible-test sanity -v --docker --python 2.7 my_test` against the `my_test` module mentioned in the above documentation.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
the sanity test will just pass
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Running sanity test 'bin-symlinks'
Read 8765 sanity test ignore line(s) for Ansible from: test/sanity/ignore.txt
pre:changelogs/fragments/
path:test/integration/targets/ansible/ansible-testé.cfg
Traceback (most recent call last):
File "/root/ansible/bin/ansible-test", line 28, in <module>
main()
File "/root/ansible/bin/ansible-test", line 24, in main
cli_main()
File "/root/ansible/test/lib/ansible_test/_internal/cli.py", line 168, in main
args.func(config)
File "/root/ansible/test/lib/ansible_test/_internal/sanity/__init__.py", line 165, in command_sanity
settings = test.load_processor(args)
File "/root/ansible/test/lib/ansible_test/_internal/sanity/__init__.py", line 876, in load_processor
return SanityIgnoreProcessor(args, self, None)
File "/root/ansible/test/lib/ansible_test/_internal/sanity/__init__.py", line 450, in __init__
self.parser = SanityIgnoreParser.load(args)
File "/root/ansible/test/lib/ansible_test/_internal/sanity/__init__.py", line 428, in load
SanityIgnoreParser.instance = SanityIgnoreParser(args)
File "/root/ansible/test/lib/ansible_test/_internal/sanity/__init__.py", line 280, in __init__
paths_by_test[test.name] = set(target.path for target in test.filter_targets(test_targets))
File "/root/ansible/test/lib/ansible_test/_internal/sanity/__init__.py", line 760, in filter_targets
target.path.startswith(pre)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 45: ordinal not in range(128)
```
|
https://github.com/ansible/ansible/issues/68398
|
https://github.com/ansible/ansible/pull/72623
|
221c50b57c347d6f8382523c48e869cb21b8c010
|
f94ba68d8f287918456c5de8115dafb0c69e8e7c
| 2020-03-23T02:37:06Z |
python
| 2020-12-04T17:12:14Z |
test/lib/ansible_test/_internal/target.py
|
"""Test target identification, iteration and inclusion/exclusion."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import collections
import os
import re
import itertools
import abc
from . import types as t
from .encoding import (
to_bytes,
)
from .io import (
read_text_file,
)
from .util import (
ApplicationError,
display,
read_lines_without_comments,
is_subdir,
)
from .data import (
data_context,
)
MODULE_EXTENSIONS = '.py', '.ps1'
try:
TCompletionTarget = t.TypeVar('TCompletionTarget', bound='CompletionTarget')
except AttributeError:
TCompletionTarget = None # pylint: disable=invalid-name
try:
TIntegrationTarget = t.TypeVar('TIntegrationTarget', bound='IntegrationTarget')
except AttributeError:
TIntegrationTarget = None # pylint: disable=invalid-name
def find_target_completion(target_func, prefix):
"""
:type target_func: () -> collections.Iterable[CompletionTarget]
:type prefix: unicode
:rtype: list[str]
"""
try:
targets = target_func()
short = os.environ.get('COMP_TYPE') == '63' # double tab completion from bash
matches = walk_completion_targets(targets, prefix, short)
return matches
except Exception as ex: # pylint: disable=locally-disabled, broad-except
return [u'%s' % ex]
def walk_completion_targets(targets, prefix, short=False):
"""
:type targets: collections.Iterable[CompletionTarget]
:type prefix: str
:type short: bool
:rtype: tuple[str]
"""
aliases = set(alias for target in targets for alias in target.aliases)
if prefix.endswith('/') and prefix in aliases:
aliases.remove(prefix)
matches = [alias for alias in aliases if alias.startswith(prefix) and '/' not in alias[len(prefix):-1]]
if short:
offset = len(os.path.dirname(prefix))
if offset:
offset += 1
relative_matches = [match[offset:] for match in matches if len(match) > offset]
if len(relative_matches) > 1:
matches = relative_matches
return tuple(sorted(matches))
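# Illustrative sketch (not part of the original file) of the short,
# double-tab trimming above: with prefix 'shippable/po', matches such as
# 'shippable/posix/' and 'shippable/power/' would be offered to bash as
# 'posix/' and 'power/'; the relative forms are used only when more than
# one match survives the trim.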
def walk_internal_targets(targets, includes=None, excludes=None, requires=None):
"""
:type targets: collections.Iterable[T <= CompletionTarget]
:type includes: list[str]
:type excludes: list[str]
:type requires: list[str]
:rtype: tuple[T <= CompletionTarget]
"""
targets = tuple(targets)
include_targets = sorted(filter_targets(targets, includes, errors=True, directories=False), key=lambda include_target: include_target.name)
if requires:
require_targets = set(filter_targets(targets, requires, errors=True, directories=False))
include_targets = [require_target for require_target in include_targets if require_target in require_targets]
if excludes:
list(filter_targets(targets, excludes, errors=True, include=False, directories=False))
internal_targets = set(filter_targets(include_targets, excludes, errors=False, include=False, directories=False))
return tuple(sorted(internal_targets, key=lambda sort_target: sort_target.name))
def filter_targets(targets, # type: t.Iterable[TCompletionTarget]
patterns, # type: t.List[str]
include=True, # type: bool
directories=True, # type: bool
errors=True, # type: bool
): # type: (...) -> t.Iterable[TCompletionTarget]
"""Iterate over the given targets and filter them based on the supplied arguments."""
unmatched = set(patterns or ())
compiled_patterns = dict((p, re.compile('^%s$' % p)) for p in patterns) if patterns else None
for target in targets:
matched_directories = set()
match = False
if patterns:
for alias in target.aliases:
for pattern in patterns:
if compiled_patterns[pattern].match(alias):
match = True
try:
unmatched.remove(pattern)
except KeyError:
pass
if alias.endswith('/'):
if target.base_path and len(target.base_path) > len(alias):
matched_directories.add(target.base_path)
else:
matched_directories.add(alias)
elif include:
match = True
if not target.base_path:
matched_directories.add('.')
for alias in target.aliases:
if alias.endswith('/'):
if target.base_path and len(target.base_path) > len(alias):
matched_directories.add(target.base_path)
else:
matched_directories.add(alias)
if match != include:
continue
if directories and matched_directories:
yield DirectoryTarget(sorted(matched_directories, key=len)[0], target.modules)
else:
yield target
if errors:
if unmatched:
raise TargetPatternsNotMatched(unmatched)
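# Illustrative usage (sketch, not part of the original file): exclude all
# targets under changelogs/fragments/. Each pattern is compiled as an
# anchored regex ('^pattern$') and tested against every alias of every
# target; directory aliases end with '/', so a trailing-slash pattern
# matches whole directories.
#
#   kept = list(filter_targets(targets, ['changelogs/fragments/'],
#                              include=False, errors=False))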
def walk_module_targets():
"""
:rtype: collections.Iterable[TestTarget]
"""
for target in walk_test_targets(path=data_context().content.module_path, module_path=data_context().content.module_path, extensions=MODULE_EXTENSIONS):
if not target.module:
continue
yield target
def walk_units_targets():
"""
:rtype: collections.Iterable[TestTarget]
"""
return walk_test_targets(path=data_context().content.unit_path, module_path=data_context().content.unit_module_path, extensions=('.py',), prefix='test_')
def walk_compile_targets(include_symlinks=True):
"""
:type include_symlinks: bool
:rtype: collections.Iterable[TestTarget]
"""
return walk_test_targets(module_path=data_context().content.module_path, extensions=('.py',), extra_dirs=('bin',), include_symlinks=include_symlinks)
def walk_powershell_targets(include_symlinks=True):
"""
:rtype: collections.Iterable[TestTarget]
"""
return walk_test_targets(module_path=data_context().content.module_path, extensions=('.ps1', '.psm1'), include_symlinks=include_symlinks)
def walk_sanity_targets():
"""
:rtype: collections.Iterable[TestTarget]
"""
return walk_test_targets(module_path=data_context().content.module_path, include_symlinks=True, include_symlinked_directories=True)
def walk_posix_integration_targets(include_hidden=False):
"""
:type include_hidden: bool
:rtype: collections.Iterable[IntegrationTarget]
"""
for target in walk_integration_targets():
if 'posix/' in target.aliases or (include_hidden and 'hidden/posix/' in target.aliases):
yield target
def walk_network_integration_targets(include_hidden=False):
"""
:type include_hidden: bool
:rtype: collections.Iterable[IntegrationTarget]
"""
for target in walk_integration_targets():
if 'network/' in target.aliases or (include_hidden and 'hidden/network/' in target.aliases):
yield target
def walk_windows_integration_targets(include_hidden=False):
"""
:type include_hidden: bool
:rtype: collections.Iterable[IntegrationTarget]
"""
for target in walk_integration_targets():
if 'windows/' in target.aliases or (include_hidden and 'hidden/windows/' in target.aliases):
yield target
def walk_integration_targets():
"""
:rtype: collections.Iterable[IntegrationTarget]
"""
path = data_context().content.integration_targets_path
modules = frozenset(target.module for target in walk_module_targets())
paths = data_context().content.walk_files(path)
prefixes = load_integration_prefixes()
targets_path_tuple = tuple(path.split(os.path.sep))
entry_dirs = (
'defaults',
'files',
'handlers',
'meta',
'tasks',
'templates',
'vars',
)
entry_files = (
'main.yml',
'main.yaml',
)
entry_points = []
for entry_dir in entry_dirs:
for entry_file in entry_files:
entry_points.append(os.path.join(os.path.sep, entry_dir, entry_file))
# any directory with at least one file is a target
path_tuples = set(tuple(os.path.dirname(p).split(os.path.sep))
for p in paths)
# also detect targets which are ansible roles, looking for standard entry points
path_tuples.update(tuple(os.path.dirname(os.path.dirname(p)).split(os.path.sep))
for p in paths if any(p.endswith(entry_point) for entry_point in entry_points))
# remove the top-level directory if it was included
if targets_path_tuple in path_tuples:
path_tuples.remove(targets_path_tuple)
previous_path_tuple = None
paths = []
for path_tuple in sorted(path_tuples):
if previous_path_tuple and previous_path_tuple == path_tuple[:len(previous_path_tuple)]:
# ignore nested directories
continue
previous_path_tuple = path_tuple
paths.append(os.path.sep.join(path_tuple))
for path in paths:
yield IntegrationTarget(path, modules, prefixes)
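# Example (sketch): a file 'targets/setup_foo/tasks/main.yml' makes
# 'targets/setup_foo' a target via the role entry-point scan above, and the
# nested-directory pruning keeps 'targets/setup_foo/tasks' from also being
# reported as a separate target.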
def load_integration_prefixes():
"""
:rtype: dict[str, str]
"""
path = data_context().content.integration_path
file_paths = sorted(f for f in data_context().content.get_files(path) if os.path.splitext(os.path.basename(f))[0] == 'target-prefixes')
prefixes = {}
for file_path in file_paths:
prefix = os.path.splitext(file_path)[1][1:]
prefixes.update(dict((k, prefix) for k in read_text_file(file_path).splitlines()))
return prefixes
def walk_test_targets(path=None, module_path=None, extensions=None, prefix=None, extra_dirs=None, include_symlinks=False, include_symlinked_directories=False):
"""
:type path: str | None
:type module_path: str | None
:type extensions: tuple[str] | None
:type prefix: str | None
:type extra_dirs: tuple[str] | None
:type include_symlinks: bool
:type include_symlinked_directories: bool
:rtype: collections.Iterable[TestTarget]
"""
if path:
file_paths = data_context().content.walk_files(path, include_symlinked_directories=include_symlinked_directories)
else:
file_paths = data_context().content.all_files(include_symlinked_directories=include_symlinked_directories)
for file_path in file_paths:
name, ext = os.path.splitext(os.path.basename(file_path))
if extensions and ext not in extensions:
continue
if prefix and not name.startswith(prefix):
continue
symlink = os.path.islink(to_bytes(file_path.rstrip(os.path.sep)))
if symlink and not include_symlinks:
continue
yield TestTarget(file_path, module_path, prefix, path, symlink)
file_paths = []
if extra_dirs:
for extra_dir in extra_dirs:
for file_path in data_context().content.get_files(extra_dir):
file_paths.append(file_path)
for file_path in file_paths:
symlink = os.path.islink(to_bytes(file_path.rstrip(os.path.sep)))
if symlink and not include_symlinks:
continue
yield TestTarget(file_path, module_path, prefix, path, symlink)
def analyze_integration_target_dependencies(integration_targets):
"""
:type integration_targets: list[IntegrationTarget]
:rtype: dict[str,set[str]]
"""
real_target_root = os.path.realpath(data_context().content.integration_targets_path) + '/'
role_targets = [target for target in integration_targets if target.type == 'role']
hidden_role_target_names = set(target.name for target in role_targets if 'hidden/' in target.aliases)
dependencies = collections.defaultdict(set)
# handle setup dependencies
for target in integration_targets:
for setup_target_name in target.setup_always + target.setup_once:
dependencies[setup_target_name].add(target.name)
# handle target dependencies
for target in integration_targets:
for need_target in target.needs_target:
dependencies[need_target].add(target.name)
# handle symlink dependencies between targets
# this use case is supported, but discouraged
for target in integration_targets:
for path in data_context().content.walk_files(target.path):
if not os.path.islink(to_bytes(path.rstrip(os.path.sep))):
continue
real_link_path = os.path.realpath(path)
if not real_link_path.startswith(real_target_root):
continue
link_target = real_link_path[len(real_target_root):].split('/')[0]
if link_target == target.name:
continue
dependencies[link_target].add(target.name)
# intentionally primitive analysis of role meta to avoid a dependency on pyyaml
# script based targets are scanned as they may execute a playbook with role dependencies
for target in integration_targets:
meta_dir = os.path.join(target.path, 'meta')
if not os.path.isdir(meta_dir):
continue
meta_paths = data_context().content.get_files(meta_dir)
for meta_path in meta_paths:
if os.path.exists(meta_path):
# try and decode the file as a utf-8 string, skip if it contains invalid chars (binary file)
try:
meta_lines = read_text_file(meta_path).splitlines()
except UnicodeDecodeError:
continue
for meta_line in meta_lines:
if re.search(r'^ *#.*$', meta_line):
continue
if not meta_line.strip():
continue
for hidden_target_name in hidden_role_target_names:
if hidden_target_name in meta_line:
dependencies[hidden_target_name].add(target.name)
while True:
changes = 0
for dummy, dependent_target_names in dependencies.items():
for dependent_target_name in list(dependent_target_names):
new_target_names = dependencies.get(dependent_target_name)
if new_target_names:
for new_target_name in new_target_names:
if new_target_name not in dependent_target_names:
dependent_target_names.add(new_target_name)
changes += 1
if not changes:
break
for target_name in sorted(dependencies):
consumers = dependencies[target_name]
if not consumers:
continue
display.info('%s:' % target_name, verbosity=4)
for consumer in sorted(consumers):
display.info(' %s' % consumer, verbosity=4)
return dependencies
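# The fixed-point loop above expands indirect consumers; e.g. (sketch) if
# dependencies['setup_a'] == {'target_b'} and dependencies['target_b'] ==
# {'target_c'}, the loop adds 'target_c' to dependencies['setup_a'] so a
# change to setup_a re-tests both dependents.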
class CompletionTarget:
"""Command-line argument completion target base class."""
__metaclass__ = abc.ABCMeta
def __init__(self):
self.name = None
self.path = None
self.base_path = None
self.modules = tuple()
self.aliases = tuple()
def __eq__(self, other):
if isinstance(other, CompletionTarget):
return self.__repr__() == other.__repr__()
return False
def __ne__(self, other):
return not self.__eq__(other)
def __lt__(self, other):
return self.name.__lt__(other.name)
def __gt__(self, other):
return self.name.__gt__(other.name)
def __hash__(self):
return hash(self.__repr__())
def __repr__(self):
if self.modules:
return '%s (%s)' % (self.name, ', '.join(self.modules))
return self.name
class DirectoryTarget(CompletionTarget):
"""Directory target."""
def __init__(self, path, modules):
"""
:type path: str
:type modules: tuple[str]
"""
super(DirectoryTarget, self).__init__()
self.name = path
self.path = path
self.modules = modules
class TestTarget(CompletionTarget):
"""Generic test target."""
def __init__(self, path, module_path, module_prefix, base_path, symlink=None):
"""
:type path: str
:type module_path: str | None
:type module_prefix: str | None
:type base_path: str
:type symlink: bool | None
"""
super(TestTarget, self).__init__()
if symlink is None:
symlink = os.path.islink(to_bytes(path.rstrip(os.path.sep)))
self.name = path
self.path = path
self.base_path = base_path + '/' if base_path else None
self.symlink = symlink
name, ext = os.path.splitext(os.path.basename(self.path))
if module_path and is_subdir(path, module_path) and name != '__init__' and ext in MODULE_EXTENSIONS:
self.module = name[len(module_prefix or ''):].lstrip('_')
self.modules = (self.module,)
else:
self.module = None
self.modules = tuple()
aliases = [self.path, self.module]
parts = self.path.split('/')
for i in range(1, len(parts)):
alias = '%s/' % '/'.join(parts[:i])
aliases.append(alias)
aliases = [a for a in aliases if a]
self.aliases = tuple(sorted(aliases))
class IntegrationTarget(CompletionTarget):
"""Integration test target."""
non_posix = frozenset((
'network',
'windows',
))
categories = frozenset(non_posix | frozenset((
'posix',
'module',
'needs',
'skip',
)))
def __init__(self, path, modules, prefixes):
"""
:type path: str
:type modules: frozenset[str]
:type prefixes: dict[str, str]
"""
super(IntegrationTarget, self).__init__()
self.relative_path = os.path.relpath(path, data_context().content.integration_targets_path)
self.name = self.relative_path.replace(os.path.sep, '.')
self.path = path
# script_path and type
file_paths = data_context().content.get_files(path)
runme_path = os.path.join(path, 'runme.sh')
if runme_path in file_paths:
self.type = 'script'
self.script_path = runme_path
else:
self.type = 'role' # ansible will consider these empty roles, so ansible-test should as well
self.script_path = None
# static_aliases
aliases_path = os.path.join(path, 'aliases')
if aliases_path in file_paths:
static_aliases = tuple(read_lines_without_comments(aliases_path, remove_blank_lines=True))
else:
static_aliases = tuple()
# modules
if self.name in modules:
module_name = self.name
elif self.name.startswith('win_') and self.name[4:] in modules:
module_name = self.name[4:]
else:
module_name = None
self.modules = tuple(sorted(a for a in static_aliases + tuple([module_name]) if a in modules))
# groups
groups = [self.type]
groups += [a for a in static_aliases if a not in modules]
groups += ['module/%s' % m for m in self.modules]
if not self.modules:
groups.append('non_module')
if 'destructive' not in groups:
groups.append('non_destructive')
if '_' in self.name:
prefix = self.name[:self.name.find('_')]
else:
prefix = None
if prefix in prefixes:
group = prefixes[prefix]
if group != prefix:
group = '%s/%s' % (group, prefix)
groups.append(group)
if self.name.startswith('win_'):
groups.append('windows')
if self.name.startswith('connection_'):
groups.append('connection')
if self.name.startswith('setup_') or self.name.startswith('prepare_'):
groups.append('hidden')
if self.type not in ('script', 'role'):
groups.append('hidden')
targets_relative_path = data_context().content.integration_targets_path
# Collect skip entries before group expansion to avoid registering more specific skip entries as less specific versions.
self.skips = tuple(g for g in groups if g.startswith('skip/'))
# Collect file paths before group expansion to avoid including the directories.
# Ignore references to test targets, as those must be defined using `needs/target/*` or other target references.
self.needs_file = tuple(sorted(set('/'.join(g.split('/')[2:]) for g in groups if
g.startswith('needs/file/') and not g.startswith('needs/file/%s/' % targets_relative_path))))
# network platform
networks = [g.split('/')[1] for g in groups if g.startswith('network/')]
self.network_platform = networks[0] if networks else None
for group in itertools.islice(groups, 0, len(groups)):
if '/' in group:
parts = group.split('/')
for i in range(1, len(parts)):
groups.append('/'.join(parts[:i]))
if not any(g in self.non_posix for g in groups):
groups.append('posix')
# aliases
aliases = [self.name] + \
['%s/' % g for g in groups] + \
['%s/%s' % (g, self.name) for g in groups if g not in self.categories]
if 'hidden/' in aliases:
aliases = ['hidden/'] + ['hidden/%s' % a for a in aliases if not a.startswith('hidden/')]
self.aliases = tuple(sorted(set(aliases)))
# configuration
self.setup_once = tuple(sorted(set(g.split('/')[2] for g in groups if g.startswith('setup/once/'))))
self.setup_always = tuple(sorted(set(g.split('/')[2] for g in groups if g.startswith('setup/always/'))))
self.needs_target = tuple(sorted(set(g.split('/')[2] for g in groups if g.startswith('needs/target/'))))
class TargetPatternsNotMatched(ApplicationError):
"""One or more targets were not matched when a match was required."""
def __init__(self, patterns):
"""
:type patterns: set[str]
"""
self.patterns = sorted(patterns)
if len(patterns) > 1:
message = 'Target patterns not matched:\n%s' % '\n'.join(self.patterns)
else:
message = 'Target pattern not matched: %s' % self.patterns[0]
super(TargetPatternsNotMatched, self).__init__(message)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,561 |
ansible-test validate-modules: missing `default` in docs not detected if `default=False` in argspec
|
##### SUMMARY
Happened here: https://github.com/ansible-collections/community.general/pull/341
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
```paste below
devel
```
|
https://github.com/ansible/ansible/issues/69561
|
https://github.com/ansible/ansible/pull/72699
|
f94ba68d8f287918456c5de8115dafb0c69e8e7c
|
5226ac5778d3b57296b925de5d4ad0b485bb11cd
| 2020-05-16T12:58:26Z |
python
| 2020-12-04T17:13:14Z |
changelogs/fragments/72699-validate-modules-default-for-bools.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,561 |
ansible-test validate-modules: missing `default` in docs not detected if `default=False` in argspec
|
##### SUMMARY
Happened here: https://github.com/ansible-collections/community.general/pull/341
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
```paste below
devel
```
|
https://github.com/ansible/ansible/issues/69561
|
https://github.com/ansible/ansible/pull/72699
|
f94ba68d8f287918456c5de8115dafb0c69e8e7c
|
5226ac5778d3b57296b925de5d4ad0b485bb11cd
| 2020-05-16T12:58:26Z |
python
| 2020-12-04T17:13:14Z |
lib/ansible/modules/apt.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Flowroute LLC
# Written by Matthew Williams <[email protected]>
# Based on yum module written by Seth Vidal <skvidal at fedoraproject.org>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: apt
short_description: Manages apt packages
description:
- Manages I(apt) packages (such as for Debian/Ubuntu).
version_added: "0.0.2"
options:
name:
description:
- A list of package names, like C(foo), or package specifier with version, like C(foo=1.0).
Name wildcards (fnmatch) like C(apt*) and version wildcards like C(foo=1.0*) are also supported.
aliases: [ package, pkg ]
type: list
elements: str
state:
description:
- Indicates the desired package state. C(latest) ensures that the latest version is installed. C(build-dep) ensures the package build dependencies
are installed. C(fixed) attempts to correct a system with broken dependencies in place.
type: str
default: present
choices: [ absent, build-dep, latest, present, fixed ]
update_cache:
description:
- Run the equivalent of C(apt-get update) before the operation. Can be run as part of the package installation or as a separate step.
aliases: [ update-cache ]
type: bool
default: 'no'
update_cache_retries:
description:
- Number of retries if the cache update fails. Also see I(update_cache_retry_max_delay).
type: int
default: 5
version_added: '2.10'
update_cache_retry_max_delay:
description:
- Use an exponential backoff delay for each retry (see I(update_cache_retries)) up to this max delay in seconds.
type: int
default: 12
version_added: '2.10'
cache_valid_time:
description:
- Update the apt cache if it is older than the I(cache_valid_time). This option is set in seconds.
- As of Ansible 2.4, if explicitly set, this sets I(update_cache=yes).
type: int
default: 0
purge:
description:
- Will force purging of configuration files if the module state is set to I(absent).
type: bool
default: 'no'
default_release:
description:
- Corresponds to the C(-t) option for I(apt) and sets pin priorities.
aliases: [ default-release ]
type: str
install_recommends:
description:
- Corresponds to the C(--no-install-recommends) option for I(apt). C(yes) installs recommended packages. C(no) does not install
recommended packages. By default, Ansible will use the same defaults as the operating system. Suggested packages are never installed.
aliases: [ install-recommends ]
type: bool
force:
description:
- 'Corresponds to the C(--force-yes) to I(apt-get) and implies C(allow_unauthenticated: yes)'
- "This option will disable checking both the packages' signatures and the certificates of the
web servers they are downloaded from."
- 'This option *is not* the equivalent of passing the C(-f) flag to I(apt-get) on the command line'
- '**This is a destructive operation with the potential to destroy your system, and it should almost never be used.**
Please also see C(man apt-get) for more information.'
type: bool
default: 'no'
allow_unauthenticated:
description:
- Ignore if packages cannot be authenticated. This is useful for bootstrapping environments that manage their own apt-key setup.
- 'C(allow_unauthenticated) is only supported with state: I(install)/I(present)'
aliases: [ allow-unauthenticated ]
type: bool
default: 'no'
version_added: "2.1"
upgrade:
description:
- If yes or safe, performs an aptitude safe-upgrade.
- If full, performs an aptitude full-upgrade.
- If dist, performs an apt-get dist-upgrade.
- 'Note: This does not upgrade a specific package; use state=latest for that.'
- 'Note: Since 2.4, apt-get is used as a fall-back if aptitude is not present.'
version_added: "1.1"
choices: [ dist, full, 'no', safe, 'yes' ]
default: 'no'
type: str
dpkg_options:
description:
- Add dpkg options to apt command. Defaults to '-o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold"'
- Options should be supplied as a comma-separated list
default: force-confdef,force-confold
type: str
deb:
description:
- Path to a .deb package on the remote machine.
- If C(://) is in the path, ansible will attempt to download the deb before installing. (Version added 2.1)
- Requires the C(xz-utils) package to extract the control file of the deb package to install.
type: path
required: false
version_added: "1.6"
autoremove:
description:
- If C(yes), remove unused dependency packages for all module states except I(build-dep). It can also be used as the only option.
- Prior to version 2.4, autoclean was also an alias for autoremove; now it is its own separate command. See documentation for further information.
type: bool
default: 'no'
version_added: "2.1"
autoclean:
description:
- If C(yes), cleans the local repository of retrieved package files that can no longer be downloaded.
type: bool
default: 'no'
version_added: "2.4"
policy_rc_d:
description:
- Force the exit code of /usr/sbin/policy-rc.d.
- For example, if I(policy_rc_d=101) the installed package will not trigger a service start.
- If /usr/sbin/policy-rc.d already exists, it is backed up and restored after the package installation.
- If C(null), the /usr/sbin/policy-rc.d isn't created/changed.
type: int
default: null
version_added: "2.8"
only_upgrade:
description:
- Only upgrade a package if it is already installed.
type: bool
default: 'no'
version_added: "2.1"
fail_on_autoremove:
description:
- 'Corresponds to the C(--no-remove) option for C(apt).'
- 'If C(yes), it is ensured that no packages will be removed or the task will fail.'
- 'C(fail_on_autoremove) is only supported with states other than C(absent)'
type: bool
default: 'no'
version_added: "2.11"
force_apt_get:
description:
- Force usage of apt-get instead of aptitude
type: bool
default: 'no'
version_added: "2.4"
requirements:
- python-apt (python 2)
- python3-apt (python 3)
- aptitude (before 2.4)
author: "Matthew Williams (@mgwilliams)"
notes:
- Three of the upgrade modes (C(full), C(safe) and its alias C(yes)) required C(aptitude) up to 2.3; since 2.4, C(apt-get) is used as a fall-back.
- In most cases, packages installed with apt will start newly installed services by default. Most distributions have mechanisms to avoid this.
For example when installing Postgresql-9.5 in Debian 9, creating an executable shell script (/usr/sbin/policy-rc.d) that returns
a code of 101 will stop Postgresql 9.5 from starting up after install. Remove the file or remove its execute permission afterwards.
- The apt-get commandline supports implicit regex matches here but we do not because it can let typos through more easily
(If you typo C(foo) as C(fo) apt-get would install packages that have "fo" in their name with a warning and a prompt for the user.
Since we don't have warnings and prompts before installing we disallow this. Use an explicit fnmatch pattern if you want wildcarding)
- When used with a `loop:` each package will be processed individually; it is much more efficient to pass the list directly to the `name` option.
'''
EXAMPLES = '''
- name: Install apache httpd (state=present is optional)
apt:
name: apache2
state: present
- name: Update repositories cache and install "foo" package
apt:
name: foo
update_cache: yes
- name: Remove "foo" package
apt:
name: foo
state: absent
- name: Install the package "foo"
apt:
name: foo
- name: Install a list of packages
apt:
pkg:
- foo
- foo-tools
- name: Install the version '1.00' of package "foo"
apt:
name: foo=1.00
- name: Update the repository cache and update package "nginx" to latest version using default release squeeze-backport
apt:
name: nginx
state: latest
default_release: squeeze-backports
update_cache: yes
- name: Install zfsutils-linux with ensuring conflicted packages (e.g. zfs-fuse) will not be removed.
apt:
name: zfsutils-linux
state: latest
fail_on_autoremove: yes
- name: Install latest version of "openjdk-6-jdk" ignoring "install-recommends"
apt:
name: openjdk-6-jdk
state: latest
install_recommends: no
- name: Update all packages to their latest version
apt:
name: "*"
state: latest
- name: Upgrade the OS (apt-get dist-upgrade)
apt:
upgrade: dist
- name: Run the equivalent of "apt-get update" as a separate step
apt:
update_cache: yes
- name: Only run "update_cache=yes" if the last one is more than 3600 seconds ago
apt:
update_cache: yes
cache_valid_time: 3600
- name: Pass options to dpkg on run
apt:
upgrade: dist
update_cache: yes
dpkg_options: 'force-confold,force-confdef'
- name: Install a .deb package
apt:
deb: /tmp/mypackage.deb
- name: Install the build dependencies for package "foo"
apt:
pkg: foo
state: build-dep
- name: Install a .deb package from the internet
apt:
deb: https://example.com/python-ppq_0.1-1_all.deb
- name: Remove useless packages from the cache
apt:
autoclean: yes
- name: Remove dependencies that are no longer required
apt:
autoremove: yes
'''
RETURN = '''
cache_updated:
description: if the cache was updated or not
returned: success, in some cases
type: bool
sample: True
cache_update_time:
description: time of the last cache update (0 if unknown)
returned: success, in some cases
type: int
sample: 1425828348000
stdout:
description: output from apt
returned: success, when needed
type: str
sample: "Reading package lists...\nBuilding dependency tree...\nReading state information...\nThe following extra packages will be installed:\n apache2-bin ..."
stderr:
description: error output from apt
returned: success, when needed
type: str
sample: "AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to ..."
''' # NOQA
# added to stave off future warnings about apt api
import warnings
warnings.filterwarnings('ignore', "apt API not stable yet", FutureWarning)
import datetime
import fnmatch
import itertools
import os
import shutil
import re
import sys
import tempfile
import time
import random
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_bytes, to_native
from ansible.module_utils.urls import fetch_file
# APT related constants
APT_ENV_VARS = dict(
DEBIAN_FRONTEND='noninteractive',
DEBIAN_PRIORITY='critical',
# We screenscrape apt-get and aptitude output for information so we need
# to make sure we use the C locale when running commands
LANG='C',
LC_ALL='C',
LC_MESSAGES='C',
LC_CTYPE='C',
)
DPKG_OPTIONS = 'force-confdef,force-confold'
APT_GET_ZERO = "\n0 upgraded, 0 newly installed"
APTITUDE_ZERO = "\n0 packages upgraded, 0 newly installed"
APT_LISTS_PATH = "/var/lib/apt/lists"
APT_UPDATE_SUCCESS_STAMP_PATH = "/var/lib/apt/periodic/update-success-stamp"
APT_MARK_INVALID_OP = 'Invalid operation'
APT_MARK_INVALID_OP_DEB6 = 'Usage: apt-mark [options] {markauto|unmarkauto} packages'
CLEAN_OP_CHANGED_STR = dict(
autoremove='The following packages will be REMOVED',
# "Del python3-q 2.4-1 [24 kB]"
autoclean='Del ',
)
HAS_PYTHON_APT = True
try:
import apt
import apt.debfile
import apt_pkg
except ImportError:
HAS_PYTHON_APT = False
if sys.version_info[0] < 3:
PYTHON_APT = 'python-apt'
else:
PYTHON_APT = 'python3-apt'
class PolicyRcD(object):
"""
This class is a context manager for the /usr/sbin/policy-rc.d file.
It allows the user to prevent dpkg from starting the corresponding service when installing
a package.
https://people.debian.org/~hmh/invokerc.d-policyrc.d-specification.txt
"""
def __init__(self, module):
# we need the module for later use (eg. fail_json)
self.m = module
# if policy_rc_d is null then we don't need to modify policy-rc.d
if self.m.params['policy_rc_d'] is None:
return
# if the /usr/sbin/policy-rc.d already exists
# we will back it up during package installation
# then restore it
if os.path.exists('/usr/sbin/policy-rc.d'):
self.backup_dir = tempfile.mkdtemp(prefix="ansible")
else:
self.backup_dir = None
def __enter__(self):
"""
This method will be called when we enter the context, before we call `apt-get …`
"""
# if policy_rc_d is null then we don't need to modify policy-rc.d
if self.m.params['policy_rc_d'] is None:
return
# if the /usr/sbin/policy-rc.d already exists we back it up
if self.backup_dir:
try:
shutil.move('/usr/sbin/policy-rc.d', self.backup_dir)
except Exception:
self.m.fail_json(msg="Fail to move /usr/sbin/policy-rc.d to %s" % self.backup_dir)
# we write /usr/sbin/policy-rc.d so it always exits with code policy_rc_d
try:
with open('/usr/sbin/policy-rc.d', 'w') as policy_rc_d:
policy_rc_d.write('#!/bin/sh\nexit %d\n' % self.m.params['policy_rc_d'])
os.chmod('/usr/sbin/policy-rc.d', 0o0755)
except Exception:
self.m.fail_json(msg="Failed to create or chmod /usr/sbin/policy-rc.d")
def __exit__(self, type, value, traceback):
"""
This method will be called when we exit the context, after `apt-get …` has been called
"""
# if policy_rc_d is null then we don't need to modify policy-rc.d
if self.m.params['policy_rc_d'] is None:
return
if self.backup_dir:
# if /usr/sbin/policy-rc.d already exists before the call to __enter__
# we restore it (from the backup done in __enter__)
try:
shutil.move(os.path.join(self.backup_dir, 'policy-rc.d'),
'/usr/sbin/policy-rc.d')
os.rmdir(self.backup_dir)
except Exception:
self.m.fail_json(msg="Fail to move back %s to /usr/sbin/policy-rc.d"
% os.path.join(self.backup_dir, 'policy-rc.d'))
else:
# if there wasn't a /usr/sbin/policy-rc.d file before the call to __enter__
# we just remove the file
try:
os.remove('/usr/sbin/policy-rc.d')
except Exception:
self.m.fail_json(msg="Fail to remove /usr/sbin/policy-rc.d (after package manipulation)")
def package_split(pkgspec):
parts = pkgspec.split('=', 1)
version = None
if len(parts) > 1:
version = parts[1]
return parts[0], version
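# Example (illustrative): split a package spec on the first '='.
#
#   package_split('foo')       # -> ('foo', None)
#   package_split('foo=1.0*')  # -> ('foo', '1.0*')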
def package_versions(pkgname, pkg, pkg_cache):
try:
versions = set(p.version for p in pkg.versions)
except AttributeError:
# assume older version of python-apt is installed
# apt.package.Package#versions require python-apt >= 0.7.9.
pkg_cache_list = (p for p in pkg_cache.Packages if p.Name == pkgname)
pkg_versions = (p.VersionList for p in pkg_cache_list)
versions = set(p.VerStr for p in itertools.chain(*pkg_versions))
return versions
def package_version_compare(version, other_version):
try:
return apt_pkg.version_compare(version, other_version)
except AttributeError:
return apt_pkg.VersionCompare(version, other_version)
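# Returns a cmp-style result (sketch): negative if `version` is older than
# `other_version`, zero if equal, positive if newer, e.g.
#
#   package_version_compare('1.0', '1.1')  # -> negative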
def package_status(m, pkgname, version, cache, state):
try:
# get the package from the cache, as well as the
# low-level apt_pkg.Package object which contains
# state fields not directly accessible from the
# higher-level apt.package.Package object.
pkg = cache[pkgname]
ll_pkg = cache._cache[pkgname] # the low-level package object
except KeyError:
if state == 'install':
try:
provided_packages = cache.get_providing_packages(pkgname)
if provided_packages:
is_installed = False
upgradable = False
version_ok = False
# when virtual package providing only one package, look up status of target package
if cache.is_virtual_package(pkgname) and len(provided_packages) == 1:
package = provided_packages[0]
installed, version_ok, upgradable, has_files = package_status(m, package.name, version, cache, state='install')
if installed:
is_installed = True
return is_installed, version_ok, upgradable, False
m.fail_json(msg="No package matching '%s' is available" % pkgname)
except AttributeError:
# python-apt version too old to detect virtual packages
# mark as upgradable and let apt-get install deal with it
return False, False, True, False
else:
return False, False, False, False
try:
has_files = len(pkg.installed_files) > 0
except UnicodeDecodeError:
has_files = True
except AttributeError:
has_files = False # older python-apt cannot be used to determine non-purged
try:
package_is_installed = ll_pkg.current_state == apt_pkg.CURSTATE_INSTALLED
except AttributeError: # python-apt 0.7.X has very weak low-level object
try:
# might not be necessary as python-apt post-0.7.X should have current_state property
package_is_installed = pkg.is_installed
except AttributeError:
# assume older version of python-apt is installed
package_is_installed = pkg.isInstalled
version_is_installed = package_is_installed
if version:
versions = package_versions(pkgname, pkg, cache._cache)
avail_upgrades = fnmatch.filter(versions, version)
if package_is_installed:
try:
installed_version = pkg.installed.version
except AttributeError:
installed_version = pkg.installedVersion
# check if the version is matched as well
version_is_installed = fnmatch.fnmatch(installed_version, version)
# Only claim the package is upgradable if a candidate matches the version
package_is_upgradable = False
for candidate in avail_upgrades:
if package_version_compare(candidate, installed_version) > 0:
package_is_upgradable = True
break
else:
package_is_upgradable = bool(avail_upgrades)
else:
try:
package_is_upgradable = pkg.is_upgradable
except AttributeError:
# assume older version of python-apt is installed
package_is_upgradable = pkg.isUpgradable
return package_is_installed, version_is_installed, package_is_upgradable, has_files
def expand_dpkg_options(dpkg_options_compressed):
options_list = dpkg_options_compressed.split(',')
dpkg_options = ""
for dpkg_option in options_list:
dpkg_options = '%s -o "Dpkg::Options::=--%s"' \
% (dpkg_options, dpkg_option)
return dpkg_options.strip()
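# Example (illustrative) of the expansion performed above:
#
#   expand_dpkg_options('force-confdef,force-confold')
#   # -> '-o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold"'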
def expand_pkgspec_from_fnmatches(m, pkgspec, cache):
# Note: apt-get does implicit regex matching when an exact package name
# match is not found. Something like this:
# matches = [pkg.name for pkg in cache if re.match(pkgspec, pkg.name)]
# (Should also deal with the ':' for multiarch like the fnmatch code below)
#
# We have decided not to do similar implicit regex matching but might take
# a PR to add some sort of explicit regex matching:
# https://github.com/ansible/ansible-modules-core/issues/1258
new_pkgspec = []
if pkgspec:
for pkgspec_pattern in pkgspec:
pkgname_pattern, version = package_split(pkgspec_pattern)
# note that none of these chars is allowed in a (debian) pkgname
if frozenset('*?[]!').intersection(pkgname_pattern):
# handle multiarch pkgnames, the idea is that "apt*" should
# only select native packages. But "apt*:i386" should still work
if ":" not in pkgname_pattern:
# Filter the multiarch packages from the cache only once
try:
pkg_name_cache = _non_multiarch
except NameError:
pkg_name_cache = _non_multiarch = [pkg.name for pkg in cache if ':' not in pkg.name] # noqa: F841
else:
# Create a cache of pkg_names including multiarch only once
try:
pkg_name_cache = _all_pkg_names
except NameError:
pkg_name_cache = _all_pkg_names = [pkg.name for pkg in cache] # noqa: F841
matches = fnmatch.filter(pkg_name_cache, pkgname_pattern)
if not matches:
m.fail_json(msg="No package(s) matching '%s' available" % str(pkgname_pattern))
else:
new_pkgspec.extend(matches)
else:
# No wildcards in name
new_pkgspec.append(pkgspec_pattern)
return new_pkgspec
def parse_diff(output):
diff = to_native(output).splitlines()
try:
# check for start marker from aptitude
diff_start = diff.index('Resolving dependencies...')
except ValueError:
try:
# check for start marker from apt-get
diff_start = diff.index('Reading state information...')
except ValueError:
# show everything
diff_start = -1
try:
# check for end marker line from both apt-get and aptitude
diff_end = next(i for i, item in enumerate(diff) if re.match('[0-9]+ (packages )?upgraded', item))
except StopIteration:
diff_end = len(diff)
diff_start += 1
diff_end += 1
return {'prepared': '\n'.join(diff[diff_start:diff_end])}
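# Sketch of what parse_diff returns: the slice of output lines starting just
# after the tool's start marker ('Resolving dependencies...' for aptitude,
# 'Reading state information...' for apt-get) and running through the
# '... upgraded, ... newly installed' summary line, as {'prepared': <text>}.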
def mark_installed_manually(m, packages):
if not packages:
return
apt_mark_cmd_path = m.get_bin_path("apt-mark")
# https://github.com/ansible/ansible/issues/40531
if apt_mark_cmd_path is None:
m.warn("Could not find apt-mark binary, not marking package(s) as manually installed.")
return
cmd = "%s manual %s" % (apt_mark_cmd_path, ' '.join(packages))
rc, out, err = m.run_command(cmd)
if APT_MARK_INVALID_OP in err or APT_MARK_INVALID_OP_DEB6 in err:
cmd = "%s unmarkauto %s" % (apt_mark_cmd_path, ' '.join(packages))
rc, out, err = m.run_command(cmd)
if rc != 0:
m.fail_json(msg="'%s' failed: %s" % (cmd, err), stdout=out, stderr=err, rc=rc)
def install(m, pkgspec, cache, upgrade=False, default_release=None,
install_recommends=None, force=False,
dpkg_options=expand_dpkg_options(DPKG_OPTIONS),
build_dep=False, fixed=False, autoremove=False, fail_on_autoremove=False, only_upgrade=False,
allow_unauthenticated=False):
pkg_list = []
packages = ""
pkgspec = expand_pkgspec_from_fnmatches(m, pkgspec, cache)
package_names = []
for package in pkgspec:
if build_dep:
# Let apt decide what to install
pkg_list.append("'%s'" % package)
continue
name, version = package_split(package)
package_names.append(name)
installed, installed_version, upgradable, has_files = package_status(m, name, version, cache, state='install')
if (not installed and not only_upgrade) or (installed and not installed_version) or (upgrade and upgradable):
pkg_list.append("'%s'" % package)
if installed_version and upgradable and version:
# This happens when the package is installed, a newer version is
# available, and the version is a wildcard that matches both
#
# We do not apply the upgrade flag because we cannot specify both
# a version and state=latest. (This behaviour mirrors how apt
# treats a version with wildcard in the package)
pkg_list.append("'%s'" % package)
packages = ' '.join(pkg_list)
if packages:
if force:
force_yes = '--force-yes'
else:
force_yes = ''
if m.check_mode:
check_arg = '--simulate'
else:
check_arg = ''
if autoremove:
autoremove = '--auto-remove'
else:
autoremove = ''
if fail_on_autoremove:
fail_on_autoremove = '--no-remove'
else:
fail_on_autoremove = ''
if only_upgrade:
only_upgrade = '--only-upgrade'
else:
only_upgrade = ''
if fixed:
fixed = '--fix-broken'
else:
fixed = ''
if build_dep:
cmd = "%s -y %s %s %s %s %s %s build-dep %s" % (APT_GET_CMD, dpkg_options, only_upgrade, fixed, force_yes, fail_on_autoremove, check_arg, packages)
else:
cmd = "%s -y %s %s %s %s %s %s %s install %s" % \
(APT_GET_CMD, dpkg_options, only_upgrade, fixed, force_yes, autoremove, fail_on_autoremove, check_arg, packages)
if default_release:
cmd += " -t '%s'" % (default_release,)
if install_recommends is False:
cmd += " -o APT::Install-Recommends=no"
elif install_recommends is True:
cmd += " -o APT::Install-Recommends=yes"
# install_recommends is None uses the OS default
if allow_unauthenticated:
cmd += " --allow-unauthenticated"
with PolicyRcD(m):
rc, out, err = m.run_command(cmd)
if m._diff:
diff = parse_diff(out)
else:
diff = {}
status = True
changed = True
if build_dep:
changed = APT_GET_ZERO not in out
data = dict(changed=changed, stdout=out, stderr=err, diff=diff)
if rc:
status = False
data = dict(msg="'%s' failed: %s" % (cmd, err), stdout=out, stderr=err, rc=rc)
else:
status = True
data = dict(changed=False)
if not build_dep:
mark_installed_manually(m, package_names)
return (status, data)
def get_field_of_deb(m, deb_file, field="Version"):
cmd_dpkg = m.get_bin_path("dpkg", True)
cmd = cmd_dpkg + " --field %s %s" % (deb_file, field)
rc, stdout, stderr = m.run_command(cmd)
if rc != 0:
m.fail_json(msg="%s failed" % cmd, stdout=stdout, stderr=stderr)
return to_native(stdout).strip('\n')
def install_deb(m, debs, cache, force, fail_on_autoremove, install_recommends, allow_unauthenticated, dpkg_options):
changed = False
deps_to_install = []
pkgs_to_install = []
for deb_file in debs.split(','):
try:
pkg = apt.debfile.DebPackage(deb_file)
pkg_name = get_field_of_deb(m, deb_file, "Package")
pkg_version = get_field_of_deb(m, deb_file, "Version")
if len(apt_pkg.get_architectures()) > 1:
pkg_arch = get_field_of_deb(m, deb_file, "Architecture")
pkg_key = "%s:%s" % (pkg_name, pkg_arch)
else:
pkg_key = pkg_name
try:
installed_pkg = apt.Cache()[pkg_key]
installed_version = installed_pkg.installed.version
if package_version_compare(pkg_version, installed_version) == 0:
# Does not need to down-/upgrade, move on to next package
continue
except Exception:
# Must not be installed, continue with installation
pass
# Check if package is installable
if not pkg.check() and not force:
m.fail_json(msg=pkg._failure_string)
# add any missing deps to the list of deps we need
# to install so they're all done in one shot
deps_to_install.extend(pkg.missing_deps)
except Exception as e:
m.fail_json(msg="Unable to install package: %s" % to_native(e))
# and add this deb to the list of packages to install
pkgs_to_install.append(deb_file)
# install the deps through apt
retvals = {}
if deps_to_install:
(success, retvals) = install(m=m, pkgspec=deps_to_install, cache=cache,
install_recommends=install_recommends,
fail_on_autoremove=fail_on_autoremove,
allow_unauthenticated=allow_unauthenticated,
dpkg_options=expand_dpkg_options(dpkg_options))
if not success:
m.fail_json(**retvals)
changed = retvals.get('changed', False)
if pkgs_to_install:
options = ' '.join(["--%s" % x for x in dpkg_options.split(",")])
if m.check_mode:
options += " --simulate"
if force:
options += " --force-all"
cmd = "dpkg %s -i %s" % (options, " ".join(pkgs_to_install))
with PolicyRcD(m):
rc, out, err = m.run_command(cmd)
if "stdout" in retvals:
stdout = retvals["stdout"] + out
else:
stdout = out
if "diff" in retvals:
diff = retvals["diff"]
if 'prepared' in diff:
diff['prepared'] += '\n\n' + out
else:
diff = parse_diff(out)
if "stderr" in retvals:
stderr = retvals["stderr"] + err
else:
stderr = err
if rc == 0:
m.exit_json(changed=True, stdout=stdout, stderr=stderr, diff=diff)
else:
m.fail_json(msg="%s failed" % cmd, stdout=stdout, stderr=stderr)
else:
m.exit_json(changed=changed, stdout=retvals.get('stdout', ''), stderr=retvals.get('stderr', ''), diff=retvals.get('diff', ''))
def remove(m, pkgspec, cache, purge=False, force=False,
dpkg_options=expand_dpkg_options(DPKG_OPTIONS), autoremove=False):
pkg_list = []
pkgspec = expand_pkgspec_from_fnmatches(m, pkgspec, cache)
for package in pkgspec:
name, version = package_split(package)
installed, installed_version, upgradable, has_files = package_status(m, name, version, cache, state='remove')
if installed_version or (has_files and purge):
pkg_list.append("'%s'" % package)
packages = ' '.join(pkg_list)
if not packages:
m.exit_json(changed=False)
else:
if force:
force_yes = '--force-yes'
else:
force_yes = ''
if purge:
purge = '--purge'
else:
purge = ''
if autoremove:
autoremove = '--auto-remove'
else:
autoremove = ''
if m.check_mode:
check_arg = '--simulate'
else:
check_arg = ''
cmd = "%s -q -y %s %s %s %s %s remove %s" % (APT_GET_CMD, dpkg_options, purge, force_yes, autoremove, check_arg, packages)
with PolicyRcD(m):
rc, out, err = m.run_command(cmd)
if m._diff:
diff = parse_diff(out)
else:
diff = {}
if rc:
m.fail_json(msg="'apt-get remove %s' failed: %s" % (packages, err), stdout=out, stderr=err, rc=rc)
m.exit_json(changed=True, stdout=out, stderr=err, diff=diff)
def cleanup(m, purge=False, force=False, operation=None,
dpkg_options=expand_dpkg_options(DPKG_OPTIONS)):
if operation not in frozenset(['autoremove', 'autoclean']):
raise AssertionError('Expected "autoremove" or "autoclean" cleanup operation, got %s' % operation)
if force:
force_yes = '--force-yes'
else:
force_yes = ''
if purge:
purge = '--purge'
else:
purge = ''
if m.check_mode:
check_arg = '--simulate'
else:
check_arg = ''
cmd = "%s -y %s %s %s %s %s" % (APT_GET_CMD, dpkg_options, purge, force_yes, operation, check_arg)
with PolicyRcD(m):
rc, out, err = m.run_command(cmd)
if m._diff:
diff = parse_diff(out)
else:
diff = {}
if rc:
m.fail_json(msg="'apt-get %s' failed: %s" % (operation, err), stdout=out, stderr=err, rc=rc)
changed = CLEAN_OP_CHANGED_STR[operation] in out
m.exit_json(changed=changed, stdout=out, stderr=err, diff=diff)
def upgrade(m, mode="yes", force=False, default_release=None,
use_apt_get=False,
dpkg_options=expand_dpkg_options(DPKG_OPTIONS), autoremove=False, fail_on_autoremove=False,
allow_unauthenticated=False,
):
if autoremove:
autoremove = '--auto-remove'
else:
autoremove = ''
if m.check_mode:
check_arg = '--simulate'
else:
check_arg = ''
apt_cmd = None
prompt_regex = None
if mode == "dist" or (mode == "full" and use_apt_get):
# apt-get dist-upgrade
apt_cmd = APT_GET_CMD
upgrade_command = "dist-upgrade %s" % (autoremove)
elif mode == "full" and not use_apt_get:
# aptitude full-upgrade
apt_cmd = APTITUDE_CMD
upgrade_command = "full-upgrade"
else:
if use_apt_get:
apt_cmd = APT_GET_CMD
upgrade_command = "upgrade --with-new-pkgs %s" % (autoremove)
else:
# aptitude safe-upgrade # mode=yes # default
apt_cmd = APTITUDE_CMD
upgrade_command = "safe-upgrade"
prompt_regex = r"(^Do you want to ignore this warning and proceed anyway\?|^\*\*\*.*\[default=.*\])"
if force:
if apt_cmd == APT_GET_CMD:
force_yes = '--force-yes'
else:
force_yes = '--assume-yes --allow-untrusted'
else:
force_yes = ''
if fail_on_autoremove:
fail_on_autoremove = '--no-remove'
else:
fail_on_autoremove = ''
allow_unauthenticated = '--allow-unauthenticated' if allow_unauthenticated else ''
if apt_cmd is None:
if use_apt_get:
apt_cmd = APT_GET_CMD
else:
m.fail_json(msg="Unable to find APTITUDE in path. Please make sure "
"to have APTITUDE in path or use 'force_apt_get=True'")
apt_cmd_path = m.get_bin_path(apt_cmd, required=True)
cmd = '%s -y %s %s %s %s %s %s' % (apt_cmd_path, dpkg_options, force_yes, fail_on_autoremove, allow_unauthenticated, check_arg, upgrade_command)
if default_release:
cmd += " -t '%s'" % (default_release,)
with PolicyRcD(m):
rc, out, err = m.run_command(cmd, prompt_regex=prompt_regex)
if m._diff:
diff = parse_diff(out)
else:
diff = {}
if rc:
m.fail_json(msg="'%s %s' failed: %s" % (apt_cmd, upgrade_command, err), stdout=out, rc=rc)
if (apt_cmd == APT_GET_CMD and APT_GET_ZERO in out) or (apt_cmd == APTITUDE_CMD and APTITUDE_ZERO in out):
m.exit_json(changed=False, msg=out, stdout=out, stderr=err)
m.exit_json(changed=True, msg=out, stdout=out, stderr=err, diff=diff)
def get_cache_mtime():
"""Return mtime of a valid apt cache file.
Stat the apt cache file and if no cache file is found return 0
:returns: ``int``
"""
cache_time = 0
if os.path.exists(APT_UPDATE_SUCCESS_STAMP_PATH):
cache_time = os.stat(APT_UPDATE_SUCCESS_STAMP_PATH).st_mtime
elif os.path.exists(APT_LISTS_PATH):
cache_time = os.stat(APT_LISTS_PATH).st_mtime
return cache_time
def get_updated_cache_time():
"""Return the mtime time stamp and the updated cache time.
Always retrieve the mtime of the apt cache or set the `cache_mtime`
variable to 0
:returns: ``tuple``
"""
cache_mtime = get_cache_mtime()
mtimestamp = datetime.datetime.fromtimestamp(cache_mtime)
updated_cache_time = int(time.mktime(mtimestamp.timetuple()))
return mtimestamp, updated_cache_time
# https://github.com/ansible/ansible-modules-core/issues/2951
def get_cache(module):
'''Attempt to get the cache object and update till it works'''
cache = None
try:
cache = apt.Cache()
except SystemError as e:
if '/var/lib/apt/lists/' in to_native(e).lower():
# update cache until files are fixed or retries exceeded
retries = 0
while retries < 2:
(rc, so, se) = module.run_command(['apt-get', 'update', '-q'])
retries += 1
if rc == 0:
break
if rc != 0:
module.fail_json(msg='Updating the cache to correct corrupt package lists failed:\n%s\n%s' % (to_native(e), so + se), rc=rc)
# try again
cache = apt.Cache()
else:
module.fail_json(msg=to_native(e))
return cache
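# The cache-update retry loop in main() below uses capped exponential
# backoff. A standalone sketch of the delay schedule it produces, where
# randomize is a fixed jitter in [0, 1):
#
#   randomize = random.randint(0, 1000) / 1000.0
#   for retry in range(update_cache_retries):
#       delay = min(2 ** retry + randomize,
#                   update_cache_retry_max_delay + randomize)
#       # retry 0 -> ~1s, retry 1 -> ~2s, retry 2 -> ~4s, ...
#       # capped at update_cache_retry_max_delay (default 12s)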
def main():
module = AnsibleModule(
argument_spec=dict(
state=dict(type='str', default='present', choices=['absent', 'build-dep', 'fixed', 'latest', 'present']),
update_cache=dict(type='bool', aliases=['update-cache']),
update_cache_retries=dict(type='int', default=5),
update_cache_retry_max_delay=dict(type='int', default=12),
cache_valid_time=dict(type='int', default=0),
purge=dict(type='bool', default=False),
package=dict(type='list', elements='str', aliases=['pkg', 'name']),
deb=dict(type='path'),
default_release=dict(type='str', aliases=['default-release']),
install_recommends=dict(type='bool', aliases=['install-recommends']),
force=dict(type='bool', default=False),
upgrade=dict(type='str', choices=['dist', 'full', 'no', 'safe', 'yes'], default='no'),
dpkg_options=dict(type='str', default=DPKG_OPTIONS),
autoremove=dict(type='bool', default=False),
autoclean=dict(type='bool', default=False),
fail_on_autoremove=dict(type='bool', default=False),
policy_rc_d=dict(type='int', default=None),
only_upgrade=dict(type='bool', default=False),
force_apt_get=dict(type='bool', default=False),
allow_unauthenticated=dict(type='bool', default=False, aliases=['allow-unauthenticated']),
),
mutually_exclusive=[['deb', 'package', 'upgrade']],
required_one_of=[['autoremove', 'deb', 'package', 'update_cache', 'upgrade']],
supports_check_mode=True,
)
module.run_command_environ_update = APT_ENV_VARS
if not HAS_PYTHON_APT:
if module.check_mode:
module.fail_json(msg="%s must be installed to use check mode. "
"If run normally this module can auto-install it." % PYTHON_APT)
try:
# We skip the cache update when auto-installing the missing
# dependency if the user explicitly declared it with update_cache=no.
if module.params.get('update_cache') is False:
module.warn("Auto-installing missing dependency without updating cache: %s" % PYTHON_APT)
else:
module.warn("Updating cache and auto-installing missing dependency: %s" % PYTHON_APT)
module.run_command(['apt-get', 'update'], check_rc=True)
module.run_command(['apt-get', 'install', '--no-install-recommends', PYTHON_APT, '-y', '-q'], check_rc=True)
global apt, apt_pkg
import apt
import apt.debfile
import apt_pkg
except ImportError:
module.fail_json(msg="Could not import python modules: apt, apt_pkg. "
"Please install %s package." % PYTHON_APT)
global APTITUDE_CMD
APTITUDE_CMD = module.get_bin_path("aptitude", False)
global APT_GET_CMD
APT_GET_CMD = module.get_bin_path("apt-get")
p = module.params
if p['upgrade'] == 'no':
p['upgrade'] = None
use_apt_get = p['force_apt_get']
if not use_apt_get and not APTITUDE_CMD:
use_apt_get = True
updated_cache = False
updated_cache_time = 0
install_recommends = p['install_recommends']
allow_unauthenticated = p['allow_unauthenticated']
dpkg_options = expand_dpkg_options(p['dpkg_options'])
autoremove = p['autoremove']
fail_on_autoremove = p['fail_on_autoremove']
autoclean = p['autoclean']
# Get the cache object
cache = get_cache(module)
try:
if p['default_release']:
try:
apt_pkg.config['APT::Default-Release'] = p['default_release']
except AttributeError:
apt_pkg.Config['APT::Default-Release'] = p['default_release']
# reopen cache w/ modified config
cache.open(progress=None)
mtimestamp, updated_cache_time = get_updated_cache_time()
# cache_valid_time defaults to 0, so the cache is updated whenever
# `update_cache` is true or the cache is older than cache_valid_time
updated_cache = False
if p['update_cache'] or p['cache_valid_time']:
now = datetime.datetime.now()
tdelta = datetime.timedelta(seconds=p['cache_valid_time'])
if not mtimestamp + tdelta >= now:
# Retry to update the cache with exponential backoff
err = ''
update_cache_retries = module.params.get('update_cache_retries')
update_cache_retry_max_delay = module.params.get('update_cache_retry_max_delay')
randomize = random.randint(0, 1000) / 1000.0
for retry in range(update_cache_retries):
try:
cache.update()
break
except apt.cache.FetchFailedException as e:
err = to_native(e)
# Use exponential backoff plus a little bit of randomness
delay = 2 ** retry + randomize
if delay > update_cache_retry_max_delay:
delay = update_cache_retry_max_delay + randomize
time.sleep(delay)
else:
module.fail_json(msg='Failed to update apt cache: %s' % (err if err else 'unknown reason'))
cache.open(progress=None)
mtimestamp, post_cache_update_time = get_updated_cache_time()
if updated_cache_time != post_cache_update_time:
updated_cache = True
updated_cache_time = post_cache_update_time
# If there is nothing else to do, exit. This reports the task as
# changed based on whether the cache was updated.
if not p['package'] and not p['upgrade'] and not p['deb']:
module.exit_json(
changed=updated_cache,
cache_updated=updated_cache,
cache_update_time=updated_cache_time
)
force_yes = p['force']
if p['upgrade']:
upgrade(module, p['upgrade'], force_yes, p['default_release'], use_apt_get, dpkg_options, autoremove, fail_on_autoremove, allow_unauthenticated)
if p['deb']:
if p['state'] != 'present':
module.fail_json(msg="deb only supports state=present")
if '://' in p['deb']:
p['deb'] = fetch_file(module, p['deb'])
install_deb(module, p['deb'], cache,
install_recommends=install_recommends,
allow_unauthenticated=allow_unauthenticated,
force=force_yes, fail_on_autoremove=fail_on_autoremove, dpkg_options=p['dpkg_options'])
unfiltered_packages = p['package'] or ()
packages = [package.strip() for package in unfiltered_packages if package != '*']
all_installed = '*' in unfiltered_packages
latest = p['state'] == 'latest'
if latest and all_installed:
if packages:
module.fail_json(msg='unable to install additional packages when upgrading all installed packages')
upgrade(module, 'yes', force_yes, p['default_release'], use_apt_get, dpkg_options, autoremove, fail_on_autoremove, allow_unauthenticated)
if packages:
for package in packages:
if package.count('=') > 1:
module.fail_json(msg="invalid package spec: %s" % package)
if latest and '=' in package:
module.fail_json(msg='version number inconsistent with state=latest: %s' % package)
if not packages:
if autoclean:
cleanup(module, p['purge'], force=force_yes, operation='autoclean', dpkg_options=dpkg_options)
if autoremove:
cleanup(module, p['purge'], force=force_yes, operation='autoremove', dpkg_options=dpkg_options)
if p['state'] in ('latest', 'present', 'build-dep', 'fixed'):
state_upgrade = False
state_builddep = False
state_fixed = False
if p['state'] == 'latest':
state_upgrade = True
if p['state'] == 'build-dep':
state_builddep = True
if p['state'] == 'fixed':
state_fixed = True
success, retvals = install(
module,
packages,
cache,
upgrade=state_upgrade,
default_release=p['default_release'],
install_recommends=install_recommends,
force=force_yes,
dpkg_options=dpkg_options,
build_dep=state_builddep,
fixed=state_fixed,
autoremove=autoremove,
fail_on_autoremove=fail_on_autoremove,
only_upgrade=p['only_upgrade'],
allow_unauthenticated=allow_unauthenticated
)
# Store if the cache has been updated
retvals['cache_updated'] = updated_cache
# Store when the cache was last updated
retvals['cache_update_time'] = updated_cache_time
if success:
module.exit_json(**retvals)
else:
module.fail_json(**retvals)
elif p['state'] == 'absent':
remove(module, packages, cache, p['purge'], force=force_yes, dpkg_options=dpkg_options, autoremove=autoremove)
except apt.cache.LockFailedException as lockFailedException:
module.fail_json(msg="Failed to lock apt for exclusive operation: %s" % lockFailedException)
except apt.cache.FetchFailedException as fetchFailedException:
module.fail_json(msg="Could not fetch updated apt files: %s" % fetchFailedException)
if __name__ == "__main__":
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,561 |
ansible-test validate-modules: missing `default` in docs not detected if `default=False` in argspec
|
##### SUMMARY
Happened here: https://github.com/ansible-collections/community.general/pull/341
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
```paste below
devel
```
|
https://github.com/ansible/ansible/issues/69561
|
https://github.com/ansible/ansible/pull/72699
|
f94ba68d8f287918456c5de8115dafb0c69e8e7c
|
5226ac5778d3b57296b925de5d4ad0b485bb11cd
| 2020-05-16T12:58:26Z |
python
| 2020-12-04T17:13:14Z |
lib/ansible/modules/cron.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Dane Summers <[email protected]>
# Copyright: (c) 2013, Mike Grozak <[email protected]>
# Copyright: (c) 2013, Patrick Callahan <[email protected]>
# Copyright: (c) 2015, Evan Kaufman <[email protected]>
# Copyright: (c) 2015, Luca Berruti <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: cron
short_description: Manage cron.d and crontab entries
description:
- Use this module to manage crontab and environment variable entries. This module allows
you to create, update, or delete environment variables and named crontab entries.
- 'When crontab jobs are managed: the module includes one line with the description of the
crontab entry C("#Ansible: <name>") corresponding to the "name" passed to the module,
which is used by future ansible/module calls to find/check the state. The "name"
parameter should be unique, and changing the "name" value will result in a new cron
task being created (or a different one being removed).'
- When environment variables are managed, no comment line is added, but, when the module
needs to find/check the state, it uses the "name" parameter to find the environment
variable definition line.
- When using symbols such as %, they must be properly escaped.
version_added: "0.9"
options:
name:
description:
- Description of a crontab entry or, if env is set, the name of environment variable.
- Required if I(state=absent).
- Note that if name is not set and I(state=present), then a
new crontab entry will always be created, regardless of existing ones.
- This parameter will always be required in future releases.
type: str
user:
description:
- The specific user whose crontab should be modified.
- When unset, this parameter defaults to the current user.
type: str
job:
description:
- The command to execute or, if env is set, the value of environment variable.
- The command should not contain line breaks.
- Required if I(state=present).
type: str
aliases: [ value ]
state:
description:
- Whether to ensure the job or environment variable is present or absent.
type: str
choices: [ absent, present ]
default: present
cron_file:
description:
- If specified, uses this file instead of an individual user's crontab.
- If this is a relative path, it is interpreted with respect to I(/etc/cron.d).
- If it is absolute, it will typically be C(/etc/crontab).
- Many Linux distros expect (and some require) the filename portion to consist solely
of upper- and lower-case letters, digits, underscores, and hyphens.
- To use the I(cron_file) parameter you must specify the I(user) as well.
type: str
backup:
description:
- If set, create a backup of the crontab before it is modified.
The location of the backup is returned in the C(backup_file) variable by this module.
type: bool
default: no
minute:
description:
- Minute when the job should run (C(0-59), C(*), C(*/2), and so on).
type: str
default: "*"
hour:
description:
- Hour when the job should run (C(0-23), C(*), C(*/2), and so on).
type: str
default: "*"
day:
description:
- Day of the month the job should run (C(1-31), C(*), C(*/2), and so on).
type: str
default: "*"
aliases: [ dom ]
month:
description:
- Month of the year the job should run (C(1-12), C(*), C(*/2), and so on).
type: str
default: "*"
weekday:
description:
- Day of the week that the job should run (C(0-6) for Sunday-Saturday, C(*), and so on).
type: str
default: "*"
aliases: [ dow ]
reboot:
description:
- If the job should be run at reboot. This option is deprecated. Users should use I(special_time).
version_added: "1.0"
type: bool
default: no
special_time:
description:
- Special time specification nickname.
type: str
choices: [ annually, daily, hourly, monthly, reboot, weekly, yearly ]
version_added: "1.3"
disabled:
description:
- If the job should be disabled (commented out) in the crontab.
- Only has effect if I(state=present).
type: bool
default: no
version_added: "2.0"
env:
description:
- If set, manages a crontab's environment variable.
- New variables are added on top of crontab.
- I(name) and I(value) parameters are the name and the value of environment variable.
type: bool
default: no
version_added: "2.1"
insertafter:
description:
- Used with I(state=present) and I(env).
- If specified, the environment variable will be inserted after the declaration of specified environment variable.
type: str
version_added: "2.1"
insertbefore:
description:
- Used with I(state=present) and I(env).
- If specified, the environment variable will be inserted before the declaration of specified environment variable.
type: str
version_added: "2.1"
requirements:
- cron (or cronie on CentOS)
author:
- Dane Summers (@dsummersl)
- Mike Grozak (@rhaido)
- Patrick Callahan (@dirtyharrycallahan)
- Evan Kaufman (@EvanK)
- Luca Berruti (@lberruti)
notes:
- Supports C(check_mode).
'''
EXAMPLES = r'''
- name: Ensure a job that runs at 2 and 5 exists. Creates an entry like "0 5,2 * * ls -alh > /dev/null"
ansible.builtin.cron:
name: "check dirs"
minute: "0"
hour: "5,2"
job: "ls -alh > /dev/null"
- name: 'Ensure an old job is no longer present. Removes any job that is prefixed by "#Ansible: an old job" from the crontab'
ansible.builtin.cron:
name: "an old job"
state: absent
- name: Creates an entry like "@reboot /some/job.sh"
ansible.builtin.cron:
name: "a job for reboot"
special_time: reboot
job: "/some/job.sh"
- name: Creates an entry like "PATH=/opt/bin" on top of crontab
ansible.builtin.cron:
name: PATH
env: yes
job: /opt/bin
- name: Creates an entry like "APP_HOME=/srv/app" and insert it after PATH declaration
ansible.builtin.cron:
name: APP_HOME
env: yes
job: /srv/app
insertafter: PATH
- name: Creates a cron file under /etc/cron.d
ansible.builtin.cron:
name: yum autoupdate
weekday: "2"
minute: "0"
hour: "12"
user: root
job: "YUMINTERACTIVE=0 /usr/sbin/yum-autoupdate"
cron_file: ansible_yum-autoupdate
- name: Removes a cron file from under /etc/cron.d
ansible.builtin.cron:
name: "yum autoupdate"
cron_file: ansible_yum-autoupdate
state: absent
- name: Removes "APP_HOME" environment variable from crontab
ansible.builtin.cron:
name: APP_HOME
env: yes
state: absent
'''
RETURN = r'''#'''
import os
import platform
import pwd
import re
import sys
import tempfile
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.text.converters import to_bytes, to_native
from ansible.module_utils.six.moves import shlex_quote
class CronTabError(Exception):
pass
class CronTab(object):
"""
CronTab object to write a time-based crontab file
user - the user of the crontab (defaults to current user)
cron_file - a cron file under /etc/cron.d, or an absolute path
"""
def __init__(self, module, user=None, cron_file=None):
self.module = module
self.user = user
self.root = (os.getuid() == 0)
self.lines = None
self.ansible = "#Ansible: "
self.n_existing = ''
self.cron_cmd = self.module.get_bin_path('crontab', required=True)
if cron_file:
if os.path.isabs(cron_file):
self.cron_file = cron_file
self.b_cron_file = to_bytes(cron_file, errors='surrogate_or_strict')
else:
self.cron_file = os.path.join('/etc/cron.d', cron_file)
self.b_cron_file = os.path.join(b'/etc/cron.d', to_bytes(cron_file, errors='surrogate_or_strict'))
else:
self.cron_file = None
self.read()
def read(self):
# Read in the crontab from the system
self.lines = []
if self.cron_file:
# read the cronfile
try:
f = open(self.b_cron_file, 'rb')
self.n_existing = to_native(f.read(), errors='surrogate_or_strict')
self.lines = self.n_existing.splitlines()
f.close()
except IOError:
# cron file does not exist
return
except Exception:
raise CronTabError("Unexpected error:", sys.exc_info()[0])
else:
# using safely quoted shell for now, but this really should be two non-shell calls instead. FIXME
(rc, out, err) = self.module.run_command(self._read_user_execute(), use_unsafe_shell=True)
if rc != 0 and rc != 1: # 1 can mean that there are no jobs.
raise CronTabError("Unable to read crontab")
self.n_existing = out
lines = out.splitlines()
count = 0
for l in lines:
if count > 2 or (not re.match(r'# DO NOT EDIT THIS FILE - edit the master and reinstall.', l) and
not re.match(r'# \(/tmp/.*installed on.*\)', l) and
not re.match(r'# \(.*version.*\)', l)):
self.lines.append(l)
else:
pattern = re.escape(l) + '[\r\n]?'
self.n_existing = re.sub(pattern, '', self.n_existing, 1)
count += 1
def is_empty(self):
if len(self.lines) == 0:
return True
else:
return False
def write(self, backup_file=None):
"""
Write the crontab to the system. Saves all information.
"""
if backup_file:
fileh = open(backup_file, 'wb')
elif self.cron_file:
fileh = open(self.b_cron_file, 'wb')
else:
filed, path = tempfile.mkstemp(prefix='crontab')
os.chmod(path, int('0644', 8))
fileh = os.fdopen(filed, 'wb')
fileh.write(to_bytes(self.render()))
fileh.close()
# return if making a backup
if backup_file:
return
# Add the entire crontab back to the user crontab
if not self.cron_file:
# quoting shell args for now but really this should be two non-shell calls. FIXME
(rc, out, err) = self.module.run_command(self._write_execute(path), use_unsafe_shell=True)
os.unlink(path)
if rc != 0:
self.module.fail_json(msg=err)
# set SELinux permissions
if self.module.selinux_enabled() and self.cron_file:
self.module.set_default_selinux_context(self.cron_file, False)
def do_comment(self, name):
return "%s%s" % (self.ansible, name)
def add_job(self, name, job):
# Add the comment
self.lines.append(self.do_comment(name))
# Add the job
self.lines.append("%s" % (job))
def update_job(self, name, job):
return self._update_job(name, job, self.do_add_job)
def do_add_job(self, lines, comment, job):
lines.append(comment)
lines.append("%s" % (job))
def remove_job(self, name):
return self._update_job(name, "", self.do_remove_job)
def do_remove_job(self, lines, comment, job):
return None
def add_env(self, decl, insertafter=None, insertbefore=None):
if not (insertafter or insertbefore):
self.lines.insert(0, decl)
return
if insertafter:
other_name = insertafter
elif insertbefore:
other_name = insertbefore
other_decl = self.find_env(other_name)
if len(other_decl) > 0:
if insertafter:
index = other_decl[0] + 1
elif insertbefore:
index = other_decl[0]
self.lines.insert(index, decl)
return
self.module.fail_json(msg="Variable named '%s' not found." % other_name)
def update_env(self, name, decl):
return self._update_env(name, decl, self.do_add_env)
def do_add_env(self, lines, decl):
lines.append(decl)
def remove_env(self, name):
return self._update_env(name, '', self.do_remove_env)
def do_remove_env(self, lines, decl):
return None
def remove_job_file(self):
try:
os.unlink(self.cron_file)
return True
except OSError:
# cron file does not exist
return False
except Exception:
raise CronTabError("Unexpected error:", sys.exc_info()[0])
def find_job(self, name, job=None):
# attempt to find job by 'Ansible:' header comment
comment = None
for l in self.lines:
if comment is not None:
if comment == name:
return [comment, l]
else:
comment = None
elif re.match(r'%s' % self.ansible, l):
comment = re.sub(r'%s' % self.ansible, '', l)
# failing that, attempt to find job by exact match
if job:
for i, l in enumerate(self.lines):
if l == job:
# if no leading ansible header, insert one
if not re.match(r'%s' % self.ansible, self.lines[i - 1]):
self.lines.insert(i, self.do_comment(name))
return [self.lines[i], l, True]
# if a leading blank ansible header AND job has a name, update header
elif name and self.lines[i - 1] == self.do_comment(None):
self.lines[i - 1] = self.do_comment(name)
return [self.lines[i - 1], l, True]
return []
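# A sketch of the lines find_job() matches (hypothetical crontab content):
#
#   #Ansible: check dirs            <- header comment written by this module
#   0 5,2 * * * ls -alh > /dev/null
#
# find_job('check dirs') would return the matching [name, job line] pair.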
def find_env(self, name):
for index, l in enumerate(self.lines):
if re.match(r'^%s=' % name, l):
return [index, l]
return []
def get_cron_job(self, minute, hour, day, month, weekday, job, special, disabled):
# normalize any leading/trailing newlines (ansible/ansible-modules-core#3791)
job = job.strip('\r\n')
if disabled:
disable_prefix = '#'
else:
disable_prefix = ''
if special:
if self.cron_file:
return "%s@%s %s %s" % (disable_prefix, special, self.user, job)
else:
return "%s@%s %s" % (disable_prefix, special, job)
else:
if self.cron_file:
return "%s%s %s %s %s %s %s %s" % (disable_prefix, minute, hour, day, month, weekday, self.user, job)
else:
return "%s%s %s %s %s %s %s" % (disable_prefix, minute, hour, day, month, weekday, job)
def get_jobnames(self):
jobnames = []
for l in self.lines:
if re.match(r'%s' % self.ansible, l):
jobnames.append(re.sub(r'%s' % self.ansible, '', l))
return jobnames
def get_envnames(self):
envnames = []
for l in self.lines:
if re.match(r'^\S+=', l):
envnames.append(l.split('=')[0])
return envnames
def _update_job(self, name, job, addlinesfunction):
ansiblename = self.do_comment(name)
newlines = []
comment = None
for l in self.lines:
if comment is not None:
addlinesfunction(newlines, comment, job)
comment = None
elif l == ansiblename:
comment = l
else:
newlines.append(l)
self.lines = newlines
if len(newlines) == 0:
return True
else:
return False # TODO add some more error testing
def _update_env(self, name, decl, addenvfunction):
newlines = []
for l in self.lines:
if re.match(r'^%s=' % name, l):
addenvfunction(newlines, decl)
else:
newlines.append(l)
self.lines = newlines
def render(self):
"""
Render this crontab as it would be in the crontab.
"""
crons = []
for cron in self.lines:
crons.append(cron)
result = '\n'.join(crons)
if result:
result = result.rstrip('\r\n') + '\n'
return result
def _read_user_execute(self):
"""
Returns the command line for reading a crontab
"""
user = ''
if self.user:
if platform.system() == 'SunOS':
return "su %s -c '%s -l'" % (shlex_quote(self.user), shlex_quote(self.cron_cmd))
elif platform.system() == 'AIX':
return "%s -l %s" % (shlex_quote(self.cron_cmd), shlex_quote(self.user))
elif platform.system() == 'HP-UX':
return "%s %s %s" % (self.cron_cmd, '-l', shlex_quote(self.user))
elif pwd.getpwuid(os.getuid())[0] != self.user:
user = '-u %s' % shlex_quote(self.user)
return "%s %s %s" % (self.cron_cmd, user, '-l')
def _write_execute(self, path):
"""
Return the command line for writing a crontab
"""
user = ''
if self.user:
if platform.system() in ['SunOS', 'HP-UX', 'AIX']:
return "chown %s %s ; su '%s' -c '%s %s'" % (
shlex_quote(self.user), shlex_quote(path), shlex_quote(self.user), self.cron_cmd, shlex_quote(path))
elif pwd.getpwuid(os.getuid())[0] != self.user:
user = '-u %s' % shlex_quote(self.user)
return "%s %s %s" % (self.cron_cmd, user, shlex_quote(path))
def main():
# The following example playbooks:
#
# - cron: name="check dirs" hour="5,2" job="ls -alh > /dev/null"
#
# - name: do the job
# cron: name="do the job" hour="5,2" job="/some/dir/job.sh"
#
# - name: no job
# cron: name="an old job" state=absent
#
# - name: sets env
# cron: name="PATH" env=yes value="/bin:/usr/bin"
#
# Would produce:
# PATH=/bin:/usr/bin
# # Ansible: check dirs
# * * 5,2 * * ls -alh > /dev/null
# # Ansible: do the job
# * * 5,2 * * /some/dir/job.sh
module = AnsibleModule(
argument_spec=dict(
name=dict(type='str'),
user=dict(type='str'),
job=dict(type='str', aliases=['value']),
cron_file=dict(type='str'),
state=dict(type='str', default='present', choices=['present', 'absent']),
backup=dict(type='bool', default=False),
minute=dict(type='str', default='*'),
hour=dict(type='str', default='*'),
day=dict(type='str', default='*', aliases=['dom']),
month=dict(type='str', default='*'),
weekday=dict(type='str', default='*', aliases=['dow']),
reboot=dict(type='bool', default=False),
special_time=dict(type='str', choices=["reboot", "yearly", "annually", "monthly", "weekly", "daily", "hourly"]),
disabled=dict(type='bool', default=False),
env=dict(type='bool'),
insertafter=dict(type='str'),
insertbefore=dict(type='str'),
),
supports_check_mode=True,
mutually_exclusive=[
['reboot', 'special_time'],
['insertafter', 'insertbefore'],
],
)
name = module.params['name']
user = module.params['user']
job = module.params['job']
cron_file = module.params['cron_file']
state = module.params['state']
backup = module.params['backup']
minute = module.params['minute']
hour = module.params['hour']
day = module.params['day']
month = module.params['month']
weekday = module.params['weekday']
reboot = module.params['reboot']
special_time = module.params['special_time']
disabled = module.params['disabled']
env = module.params['env']
insertafter = module.params['insertafter']
insertbefore = module.params['insertbefore']
do_install = state == 'present'
changed = False
res_args = dict()
warnings = list()
if cron_file:
cron_file_basename = os.path.basename(cron_file)
if not re.search(r'^[A-Z0-9_-]+$', cron_file_basename, re.I):
warnings.append('Filename portion of cron_file ("%s") should consist' % cron_file_basename +
' solely of upper- and lower-case letters, digits, underscores, and hyphens')
# Ensure all files generated are only writable by the owning user. Primarily relevant for the cron_file option.
os.umask(int('022', 8))
crontab = CronTab(module, user, cron_file)
module.debug('cron instantiated - name: "%s"' % name)
if not name:
module.deprecate(
msg="The 'name' parameter will be required in future releases.",
version='2.12', collection_name='ansible.builtin'
)
if reboot:
module.deprecate(
msg="The 'reboot' parameter will be removed in future releases. Use 'special_time' option instead.",
version='2.12', collection_name='ansible.builtin'
)
if module._diff:
diff = dict()
diff['before'] = crontab.n_existing
if crontab.cron_file:
diff['before_header'] = crontab.cron_file
else:
if crontab.user:
diff['before_header'] = 'crontab for user "%s"' % crontab.user
else:
diff['before_header'] = 'crontab'
# --- user input validation ---
if env and not name:
module.fail_json(msg="You must specify 'name' while working with environment variables (env=yes)")
if (special_time or reboot) and \
(True in [(x != '*') for x in [minute, hour, day, month, weekday]]):
module.fail_json(msg="You must specify time and date fields or special time.")
# cannot support special_time on solaris
if (special_time or reboot) and platform.system() == 'SunOS':
module.fail_json(msg="Solaris does not support special_time=... or @reboot")
if cron_file and do_install:
if not user:
module.fail_json(msg="To use cron_file=... parameter you must specify user=... as well")
if job is None and do_install:
module.fail_json(msg="You must specify 'job' to install a new cron job or variable")
if (insertafter or insertbefore) and not env and do_install:
module.fail_json(msg="Insertafter and insertbefore parameters are valid only with env=yes")
if reboot:
special_time = "reboot"
# if requested make a backup before making a change
if backup and not module.check_mode:
(backuph, backup_file) = tempfile.mkstemp(prefix='crontab')
crontab.write(backup_file)
if crontab.cron_file and not do_install:
if module._diff:
diff['after'] = ''
diff['after_header'] = '/dev/null'
else:
diff = dict()
if module.check_mode:
changed = os.path.isfile(crontab.cron_file)
else:
changed = crontab.remove_job_file()
module.exit_json(changed=changed, cron_file=cron_file, state=state, diff=diff)
if env:
if ' ' in name:
module.fail_json(msg="Invalid name for environment variable")
decl = '%s="%s"' % (name, job)
old_decl = crontab.find_env(name)
if do_install:
if len(old_decl) == 0:
crontab.add_env(decl, insertafter, insertbefore)
changed = True
if len(old_decl) > 0 and old_decl[1] != decl:
crontab.update_env(name, decl)
changed = True
else:
if len(old_decl) > 0:
crontab.remove_env(name)
changed = True
else:
if do_install:
for char in ['\r', '\n']:
if char in job.strip('\r\n'):
warnings.append('Job should not contain line breaks')
break
job = crontab.get_cron_job(minute, hour, day, month, weekday, job, special_time, disabled)
old_job = crontab.find_job(name, job)
if len(old_job) == 0:
crontab.add_job(name, job)
changed = True
if len(old_job) > 0 and old_job[1] != job:
crontab.update_job(name, job)
changed = True
if len(old_job) > 2:
crontab.update_job(name, job)
changed = True
else:
old_job = crontab.find_job(name)
if len(old_job) > 0:
crontab.remove_job(name)
changed = True
# no changes to env/job, but existing crontab needs a terminating newline
if not changed and crontab.n_existing != '':
if not (crontab.n_existing.endswith('\r') or crontab.n_existing.endswith('\n')):
changed = True
res_args = dict(
jobs=crontab.get_jobnames(),
envs=crontab.get_envnames(),
warnings=warnings,
changed=changed
)
if changed:
if not module.check_mode:
crontab.write()
if module._diff:
diff['after'] = crontab.render()
if crontab.cron_file:
diff['after_header'] = crontab.cron_file
else:
if crontab.user:
diff['after_header'] = 'crontab for user "%s"' % crontab.user
else:
diff['after_header'] = 'crontab'
res_args['diff'] = diff
# retain the backup only if crontab or cron file have changed
if backup and not module.check_mode:
if changed:
res_args['backup_file'] = backup_file
else:
os.unlink(backup_file)
if cron_file:
res_args['cron_file'] = cron_file
module.exit_json(**res_args)
# --- should never get here
module.exit_json(msg="Unable to execute cron task.")
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,561 |
ansible-test validate-modules: missing `default` in docs not detected if `default=False` in argspec
|
##### SUMMARY
Happened here: https://github.com/ansible-collections/community.general/pull/341
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
```paste below
devel
```
|
https://github.com/ansible/ansible/issues/69561
|
https://github.com/ansible/ansible/pull/72699
|
f94ba68d8f287918456c5de8115dafb0c69e8e7c
|
5226ac5778d3b57296b925de5d4ad0b485bb11cd
| 2020-05-16T12:58:26Z |
python
| 2020-12-04T17:13:14Z |
lib/ansible/modules/debconf.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2014, Brian Coca <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: debconf
short_description: Configure a .deb package
description:
- Configure a .deb package using debconf-set-selections.
- Or just query existing selections.
version_added: "1.6"
notes:
- This module requires the command line debconf tools.
- A number of questions have to be answered (depending on the package).
Use 'debconf-show <package>' on any Debian or derivative with the package
installed to see questions/settings available.
- Some distros will always record tasks involving the setting of passwords as changed. This is due to debconf-get-selections masking passwords.
- It is highly recommended to add I(no_log=True) to the task when handling sensitive information using this module.
- Supports C(check_mode).
requirements:
- debconf
- debconf-utils
options:
name:
description:
- Name of package to configure.
type: str
required: true
aliases: [ pkg ]
question:
description:
- A debconf configuration setting.
type: str
aliases: [ selection, setting ]
vtype:
description:
- The type of the value supplied.
- It is highly recommended to add I(no_log=True) to the task when specifying I(vtype=password).
- C(seen) was added in Ansible 2.2.
type: str
choices: [ boolean, error, multiselect, note, password, seen, select, string, text, title ]
value:
description:
- Value to set the configuration to.
type: str
aliases: [ answer ]
unseen:
description:
- Do not set 'seen' flag when pre-seeding.
type: bool
default: no
author:
- Brian Coca (@bcoca)
'''
EXAMPLES = r'''
- name: Set default locale to fr_FR.UTF-8
ansible.builtin.debconf:
name: locales
question: locales/default_environment_locale
value: fr_FR.UTF-8
vtype: select
- name: Set to generate locales
ansible.builtin.debconf:
name: locales
question: locales/locales_to_be_generated
value: en_US.UTF-8 UTF-8, fr_FR.UTF-8 UTF-8
vtype: multiselect
- name: Accept oracle license
ansible.builtin.debconf:
name: oracle-java7-installer
question: shared/accepted-oracle-license-v1-1
value: 'true'
vtype: select
- name: Specifying package you can register/return the list of questions and current values
ansible.builtin.debconf:
name: tzdata
- name: Pre-configure tripwire site passphrase
ansible.builtin.debconf:
name: tripwire
question: tripwire/site-passphrase
value: "{{ site_passphrase }}"
vtype: password
no_log: True
'''
RETURN = r'''#'''
from ansible.module_utils._text import to_text
from ansible.module_utils.basic import AnsibleModule
def get_selections(module, pkg):
cmd = [module.get_bin_path('debconf-show', True), pkg]
rc, out, err = module.run_command(' '.join(cmd))
if rc != 0:
module.fail_json(msg=err)
selections = {}
for line in out.splitlines():
(key, value) = line.split(':', 1)
selections[key.strip('*').strip()] = value.strip()
return selections
def set_selection(module, pkg, question, vtype, value, unseen):
setsel = module.get_bin_path('debconf-set-selections', True)
cmd = [setsel]
if unseen:
cmd.append('-u')
if vtype == 'boolean':
if value == 'True':
value = 'true'
elif value == 'False':
value = 'false'
data = ' '.join([pkg, question, vtype, value])
return module.run_command(cmd, data=data)
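# A sketch of the data line set_selection() feeds to debconf-set-selections
# on stdin (hypothetical package and question):
#
#   locales locales/default_environment_locale select fr_FR.UTF-8
#
# i.e. '<pkg> <question> <vtype> <value>', with boolean values normalized
# to the lowercase strings 'true'/'false' first.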
def main():
module = AnsibleModule(
argument_spec=dict(
name=dict(type='str', required=True, aliases=['pkg']),
question=dict(type='str', aliases=['selection', 'setting']),
vtype=dict(type='str', choices=['boolean', 'error', 'multiselect', 'note', 'password', 'seen', 'select', 'string', 'text', 'title']),
value=dict(type='str', aliases=['answer']),
unseen=dict(type='bool'),
),
required_together=(['question', 'vtype', 'value'],),
supports_check_mode=True,
)
# TODO: enable passing array of options and/or debconf file from get-selections dump
pkg = module.params["name"]
question = module.params["question"]
vtype = module.params["vtype"]
value = module.params["value"]
unseen = module.params["unseen"]
prev = get_selections(module, pkg)
changed = False
msg = ""
if question is not None:
if vtype is None or value is None:
module.fail_json(msg="when supplying a question you must supply a valid vtype and value")
# if question doesn't exist, value cannot match
if question not in prev:
changed = True
else:
existing = prev[question]
# normalize the supplied boolean to the way debconf renders it (the strings 'true'/'false') before comparing
if vtype == 'boolean':
value = to_text(value).lower()
existing = to_text(prev[question]).lower()
if value != existing:
changed = True
if changed:
if not module.check_mode:
rc, msg, e = set_selection(module, pkg, question, vtype, value, unseen)
if rc:
module.fail_json(msg=e)
curr = {question: value}
if question in prev:
prev = {question: prev[question]}
else:
prev[question] = ''
if module._diff:
after = prev.copy()
after.update(curr)
diff_dict = {'before': prev, 'after': after}
else:
diff_dict = {}
module.exit_json(changed=changed, msg=msg, current=curr, previous=prev, diff=diff_dict)
module.exit_json(changed=changed, msg=msg, current=prev)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,561 |
ansible-test validate-modules: missing `default` in docs not detected if `default=False` in argspec
|
##### SUMMARY
Happened here: https://github.com/ansible-collections/community.general/pull/341
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
```paste below
devel
```
|
https://github.com/ansible/ansible/issues/69561
|
https://github.com/ansible/ansible/pull/72699
|
f94ba68d8f287918456c5de8115dafb0c69e8e7c
|
5226ac5778d3b57296b925de5d4ad0b485bb11cd
| 2020-05-16T12:58:26Z |
python
| 2020-12-04T17:13:14Z |
lib/ansible/modules/get_url.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Jan-Piet Mens <jpmens () gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: get_url
short_description: Downloads files from HTTP, HTTPS, or FTP to node
description:
- Downloads files from HTTP, HTTPS, or FTP to the remote server. The remote
server I(must) have direct access to the remote resource.
- By default, if an environment variable C(<protocol>_proxy) is set on
the target host, requests will be sent through that proxy. This
behaviour can be overridden by setting a variable for this task
(see `setting the environment
<https://docs.ansible.com/playbooks_environment.html>`_),
or by using the use_proxy option.
- HTTP redirects can redirect from HTTP to HTTPS so you should be sure that
your proxy environment for both protocols is correct.
- From Ansible 2.4 when run with C(--check), it will do a HEAD request to validate the URL but
will not download the entire file or verify it against hashes.
- For Windows targets, use the M(ansible.windows.win_get_url) module instead.
version_added: '0.6'
options:
url:
description:
- HTTP, HTTPS, or FTP URL in the form (http|https|ftp)://[user[:pass]]@host.domain[:port]/path
type: str
required: true
dest:
description:
- Absolute path of where to download the file to.
- If C(dest) is a directory, either the server provided filename or, if
none provided, the base name of the URL on the remote server will be
used. If a directory, C(force) has no effect.
- If C(dest) is a directory, the file will always be downloaded
(regardless of the C(force) option), but replaced only if the contents changed.
type: path
required: true
tmp_dest:
description:
- Absolute path of where temporary file is downloaded to.
- When run on Ansible 2.5 or greater, the path defaults to Ansible's remote_tmp setting.
- When run on Ansible prior to 2.5, it defaults to the C(TMPDIR), C(TEMP) or C(TMP) env variables or a platform-specific value.
- U(https://docs.python.org/2/library/tempfile.html#tempfile.tempdir)
type: path
version_added: '2.1'
force:
description:
- If C(yes) and C(dest) is not a directory, will download the file every
time and replace the file if the contents change. If C(no), the file
will only be downloaded if the destination does not exist. Generally
should be C(yes) only for small local files.
- Prior to 0.6, this module behaved as if C(yes) was the default.
- Alias C(thirsty) has been deprecated and will be removed in 2.13.
type: bool
default: no
aliases: [ thirsty ]
version_added: '0.7'
backup:
description:
- Create a backup file including the timestamp information so you can get
the original file back if you somehow clobbered it incorrectly.
type: bool
default: no
version_added: '2.1'
sha256sum:
description:
- If a SHA-256 checksum is passed to this parameter, the digest of the
destination file will be calculated after it is downloaded to ensure
its integrity and verify that the transfer completed successfully.
This option is deprecated and will be removed in version 2.14. Use
option C(checksum) instead.
default: ''
type: str
version_added: "1.3"
checksum:
description:
- 'If a checksum is passed to this parameter, the digest of the
destination file will be calculated after it is downloaded to ensure
its integrity and verify that the transfer completed successfully.
Format: <algorithm>:<checksum|url>, e.g. checksum="sha256:D98291AC[...]B6DC7B97",
checksum="sha256:http://example.com/path/sha256sum.txt"'
- If you worry about portability, only the sha1 algorithm is available
on all platforms and python versions.
- The third party hashlib library can be installed for access to additional algorithms.
- Additionally, if a checksum is passed to this parameter, and the file exists under
the C(dest) location, the I(destination_checksum) would be calculated, and if
checksum equals I(destination_checksum), the file download would be skipped
(unless C(force) is true). If the checksum does not equal I(destination_checksum),
the destination file is deleted.
type: str
default: ''
version_added: "2.0"
use_proxy:
description:
- If C(no), it will not use a proxy, even if one is defined in
an environment variable on the target hosts.
type: bool
default: yes
validate_certs:
description:
- If C(no), SSL certificates will not be validated.
- This should only be used on personally controlled sites using self-signed certificates.
type: bool
default: yes
timeout:
description:
- Timeout in seconds for URL request.
type: int
default: 10
version_added: '1.8'
headers:
description:
- Add custom HTTP headers to a request in hash/dict format.
- The hash/dict format was added in Ansible 2.6.
- Previous versions used a C("key:value,key:value") string format.
- The C("key:value,key:value") string format is deprecated and has been removed in version 2.10.
type: dict
version_added: '2.0'
url_username:
description:
- The username for use in HTTP basic authentication.
- This parameter can be used without C(url_password) for sites that allow empty passwords.
- Since version 2.8 you can also use the C(username) alias for this option.
type: str
aliases: ['username']
version_added: '1.6'
url_password:
description:
- The password for use in HTTP basic authentication.
- If the C(url_username) parameter is not specified, the C(url_password) parameter will not be used.
- Since version 2.8 you can also use the 'password' alias for this option.
type: str
aliases: ['password']
version_added: '1.6'
force_basic_auth:
description:
- Force the sending of the Basic authentication header upon initial request.
- httplib2, the library used by the uri module, only sends authentication information when a webservice
responds to an initial request with a 401 status. Since some basic auth services do not properly
send a 401, logins will fail.
type: bool
default: no
version_added: '2.0'
client_cert:
description:
- PEM formatted certificate chain file to be used for SSL client authentication.
- This file can also include the key as well, and if the key is included, C(client_key) is not required.
type: path
version_added: '2.4'
client_key:
description:
- PEM formatted file that contains your private key to be used for SSL client authentication.
- If C(client_cert) contains both the certificate and key, this option is not required.
type: path
version_added: '2.4'
http_agent:
description:
- Header to identify as, generally appears in web server logs.
type: str
default: ansible-httpget
use_gssapi:
description:
- Use GSSAPI to perform the authentication, typically this is for Kerberos or Kerberos through Negotiate
authentication.
- Requires the Python library L(gssapi,https://github.com/pythongssapi/python-gssapi) to be installed.
- Credentials for GSSAPI can be specified with I(url_username)/I(url_password) or with the GSSAPI env var
C(KRB5CCNAME) that specifies a custom Kerberos credential cache.
- NTLM authentication is C(not) supported even if the GSSAPI mech for NTLM has been installed.
type: bool
default: no
version_added: '2.11'
# informational: requirements for nodes
extends_documentation_fragment:
- files
notes:
- For Windows targets, use the M(ansible.windows.win_get_url) module instead.
seealso:
- module: ansible.builtin.uri
- module: ansible.windows.win_get_url
author:
- Jan-Piet Mens (@jpmens)
'''
EXAMPLES = r'''
- name: Download foo.conf
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
mode: '0440'
- name: Download file and force basic auth
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
force_basic_auth: yes
- name: Download file with custom HTTP headers
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
headers:
key1: one
key2: two
- name: Download file with check (sha256)
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
checksum: sha256:b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c
- name: Download file with check (md5)
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
checksum: md5:66dffb5228a211e61d6d7ef4a86f5758
- name: Download file with checksum url (sha256)
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
checksum: sha256:http://example.com/path/sha256sum.txt
- name: Download file from a file path
get_url:
url: file:///tmp/afile.txt
dest: /tmp/afilecopy.txt
- name: Fetch file that requires authentication (username/password only available since 2.8, in older versions use url_username/url_password)
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
username: bar
password: '{{ mysecret }}'
'''
RETURN = r'''
backup_file:
description: name of backup file created after download
returned: changed and if backup=yes
type: str
sample: /path/to/file.txt.2015-02-12@22:09~
checksum_dest:
description: sha1 checksum of the file after copy
returned: success
type: str
sample: 6e642bb8dd5c2e027bf21dd923337cbb4214f827
checksum_src:
description: sha1 checksum of the file
returned: success
type: str
sample: 6e642bb8dd5c2e027bf21dd923337cbb4214f827
dest:
description: destination file/path
returned: success
type: str
sample: /path/to/file.txt
elapsed:
description: The number of seconds that elapsed while performing the download
returned: always
type: int
sample: 23
gid:
description: group id of the file
returned: success
type: int
sample: 100
group:
description: group of the file
returned: success
type: str
sample: "httpd"
md5sum:
description: md5 checksum of the file after download
returned: when supported
type: str
sample: "2a5aeecc61dc98c4d780b14b330e3282"
mode:
description: permissions of the target
returned: success
type: str
sample: "0644"
msg:
description: the HTTP message from the request
returned: always
type: str
sample: OK (unknown bytes)
owner:
description: owner of the file
returned: success
type: str
sample: httpd
secontext:
description: the SELinux security context of the file
returned: success
type: str
sample: unconfined_u:object_r:user_tmp_t:s0
size:
description: size of the target
returned: success
type: int
sample: 1220
src:
description: source file used after download
returned: always
type: str
sample: /tmp/tmpAdFLdV
state:
description: state of the target
returned: success
type: str
sample: file
status_code:
description: the HTTP status code from the request
returned: always
type: int
sample: 200
uid:
description: owner id of the file, after execution
returned: success
type: int
sample: 100
url:
description: the actual URL used for the request
returned: always
type: str
sample: https://www.ansible.com/
'''
import datetime
import os
import re
import shutil
import tempfile
import traceback
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.six.moves.urllib.parse import urlsplit
from ansible.module_utils._text import to_native
from ansible.module_utils.urls import fetch_url, url_argument_spec
# ==============================================================
# url handling
def url_filename(url):
fn = os.path.basename(urlsplit(url)[2])
if fn == '':
return 'index.html'
return fn
def url_get(module, url, dest, use_proxy, last_mod_time, force, timeout=10, headers=None, tmp_dest=''):
"""
Download data from the url and store in a temporary file.
Return (tempfile, info about the request)
"""
if module.check_mode:
method = 'HEAD'
else:
method = 'GET'
start = datetime.datetime.utcnow()
rsp, info = fetch_url(module, url, use_proxy=use_proxy, force=force, last_mod_time=last_mod_time, timeout=timeout, headers=headers, method=method)
elapsed = (datetime.datetime.utcnow() - start).seconds
if info['status'] == 304:
module.exit_json(url=url, dest=dest, changed=False, msg=info.get('msg', ''), status_code=info['status'], elapsed=elapsed)
# Exceptions in fetch_url may result in a status of -1; this ensures a proper error reaches the user in all cases
if info['status'] == -1:
module.fail_json(msg=info['msg'], url=url, dest=dest, elapsed=elapsed)
if info['status'] != 200 and not url.startswith('file:/') and not (url.startswith('ftp:/') and info.get('msg', '').startswith('OK')):
module.fail_json(msg="Request failed", status_code=info['status'], response=info['msg'], url=url, dest=dest, elapsed=elapsed)
# create a temporary file and copy content to do checksum-based replacement
if tmp_dest:
# tmp_dest should be an existing dir
tmp_dest_is_dir = os.path.isdir(tmp_dest)
if not tmp_dest_is_dir:
if os.path.exists(tmp_dest):
module.fail_json(msg="%s is a file but should be a directory." % tmp_dest, elapsed=elapsed)
else:
module.fail_json(msg="%s directory does not exist." % tmp_dest, elapsed=elapsed)
else:
tmp_dest = module.tmpdir
fd, tempname = tempfile.mkstemp(dir=tmp_dest)
f = os.fdopen(fd, 'wb')
try:
shutil.copyfileobj(rsp, f)
except Exception as e:
os.remove(tempname)
module.fail_json(msg="failed to create temporary content file: %s" % to_native(e), elapsed=elapsed, exception=traceback.format_exc())
f.close()
rsp.close()
return tempname, info
def extract_filename_from_headers(headers):
"""
Extracts a filename from the given dict of HTTP headers.
Looks for the content-disposition header and applies a regex.
Returns the filename if successful, else None."""
cont_disp_regex = 'attachment; ?filename="?([^"]+)'
res = None
if 'content-disposition' in headers:
cont_disp = headers['content-disposition']
match = re.match(cont_disp_regex, cont_disp)
if match:
res = match.group(1)
# Try preventing any funny business.
res = os.path.basename(res)
return res
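# A sketch of the header value the regex above is meant to match
# (hypothetical response headers):
#
#   {'content-disposition': 'attachment; filename="report.csv"'}
#     -> extract_filename_from_headers(...) == 'report.csv'
#
# os.path.basename() then strips any path components a server might smuggle in.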
def is_url(checksum):
"""
Returns True if checksum value has supported URL scheme, else False."""
supported_schemes = ('http', 'https', 'ftp', 'file')
return urlsplit(checksum).scheme in supported_schemes
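# The checksum handling in main() below accepts both a literal digest and
# a URL to a checksum file. A sketch of the accepted forms (hypothetical
# values):
#
#   checksum='sha256:b5bb9d80...ae4944c'          -> verified directly
#   checksum='sha256:http://example.com/sums.txt' -> the file is fetched and
#       searched for a '<digest> <filename>' line matching the download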
# ==============================================================
# main
def main():
argument_spec = url_argument_spec()
# setup aliases
argument_spec['url_username']['aliases'] = ['username']
argument_spec['url_password']['aliases'] = ['password']
argument_spec.update(
url=dict(type='str', required=True),
dest=dict(type='path', required=True),
backup=dict(type='bool'),
sha256sum=dict(type='str', default=''),
checksum=dict(type='str', default=''),
timeout=dict(type='int', default=10),
headers=dict(type='dict'),
tmp_dest=dict(type='path'),
)
module = AnsibleModule(
# not checking because of daisy chain to file module
argument_spec=argument_spec,
add_file_common_args=True,
supports_check_mode=True,
mutually_exclusive=[['checksum', 'sha256sum']],
)
if module.params.get('thirsty'):
module.deprecate('The alias "thirsty" has been deprecated and will be removed, use "force" instead',
version='2.13', collection_name='ansible.builtin')
if module.params.get('sha256sum'):
module.deprecate('The parameter "sha256sum" has been deprecated and will be removed, use "checksum" instead',
version='2.14', collection_name='ansible.builtin')
url = module.params['url']
dest = module.params['dest']
backup = module.params['backup']
force = module.params['force']
sha256sum = module.params['sha256sum']
checksum = module.params['checksum']
use_proxy = module.params['use_proxy']
timeout = module.params['timeout']
headers = module.params['headers']
tmp_dest = module.params['tmp_dest']
result = dict(
changed=False,
checksum_dest=None,
checksum_src=None,
dest=dest,
elapsed=0,
url=url,
)
dest_is_dir = os.path.isdir(dest)
last_mod_time = None
# workaround for usage of deprecated sha256sum parameter
if sha256sum:
checksum = 'sha256:%s' % (sha256sum)
# checksum specified, parse for algorithm and checksum
if checksum:
try:
algorithm, checksum = checksum.split(':', 1)
except ValueError:
module.fail_json(msg="The checksum parameter has to be in format <algorithm>:<checksum>", **result)
if is_url(checksum):
checksum_url = checksum
# download checksum file to checksum_tmpsrc
checksum_tmpsrc, checksum_info = url_get(module, checksum_url, dest, use_proxy, last_mod_time, force, timeout, headers, tmp_dest)
with open(checksum_tmpsrc) as f:
lines = [line.rstrip('\n') for line in f]
os.remove(checksum_tmpsrc)
checksum_map = []
for line in lines:
parts = line.split(None, 1)
if len(parts) == 2:
checksum_map.append((parts[0], parts[1]))
filename = url_filename(url)
# Look through each line in the checksum file for a hash corresponding to
# the filename in the url, returning the first hash that is found.
for cksum in (s for (s, f) in checksum_map if f.strip('./') == filename):
checksum = cksum
break
else:
checksum = None
if checksum is None:
module.fail_json(msg="Unable to find a checksum for file '%s' in '%s'" % (filename, checksum_url))
# Remove any non-alphanumeric characters, including the infamous
# Unicode zero-width space
checksum = re.sub(r'\W+', '', checksum).lower()
# Ensure the checksum portion is a hexdigest
try:
int(checksum, 16)
except ValueError:
module.fail_json(msg='The checksum format is invalid', **result)
if not dest_is_dir and os.path.exists(dest):
checksum_mismatch = False
# If the download is not forced and there is a checksum, allow
# checksum match to skip the download.
if not force and checksum != '':
destination_checksum = module.digest_from_file(dest, algorithm)
if checksum != destination_checksum:
checksum_mismatch = True
# Not forcing redownload, unless checksum does not match
if not force and checksum and not checksum_mismatch:
# allow file attribute changes
file_args = module.load_file_common_arguments(module.params, path=dest)
result['changed'] = module.set_fs_attributes_if_different(file_args, False)
if result['changed']:
module.exit_json(msg="file already exists but file attributes changed", **result)
module.exit_json(msg="file already exists", **result)
# If the file already exists, prepare the last modified time for the
# request.
mtime = os.path.getmtime(dest)
last_mod_time = datetime.datetime.utcfromtimestamp(mtime)
# If the checksum does not match we have to force the download
# because last_mod_time may be newer than on remote
if checksum_mismatch:
force = True
# download to tmpsrc
start = datetime.datetime.utcnow()
tmpsrc, info = url_get(module, url, dest, use_proxy, last_mod_time, force, timeout, headers, tmp_dest)
result['elapsed'] = (datetime.datetime.utcnow() - start).seconds
result['src'] = tmpsrc
# Now the request has completed, we can finally generate the final
# destination file name from the info dict.
if dest_is_dir:
filename = extract_filename_from_headers(info)
if not filename:
# Fall back to extracting the filename from the URL.
# Pluck the URL from the info, since a redirect could have changed
# it.
filename = url_filename(info['url'])
dest = os.path.join(dest, filename)
result['dest'] = dest
# raise an error if there is no tmpsrc file
if not os.path.exists(tmpsrc):
os.remove(tmpsrc)
module.fail_json(msg="Request failed", status_code=info['status'], response=info['msg'], **result)
if not os.access(tmpsrc, os.R_OK):
os.remove(tmpsrc)
module.fail_json(msg="Source %s is not readable" % (tmpsrc), **result)
result['checksum_src'] = module.sha1(tmpsrc)
# check whether the destination file already exists
if os.path.exists(dest):
# raise an error if we have no write permission on dest
if not os.access(dest, os.W_OK):
os.remove(tmpsrc)
module.fail_json(msg="Destination %s is not writable" % (dest), **result)
if not os.access(dest, os.R_OK):
os.remove(tmpsrc)
module.fail_json(msg="Destination %s is not readable" % (dest), **result)
result['checksum_dest'] = module.sha1(dest)
else:
if not os.path.exists(os.path.dirname(dest)):
os.remove(tmpsrc)
module.fail_json(msg="Destination %s does not exist" % (os.path.dirname(dest)), **result)
if not os.access(os.path.dirname(dest), os.W_OK):
os.remove(tmpsrc)
module.fail_json(msg="Destination %s is not writable" % (os.path.dirname(dest)), **result)
if module.check_mode:
if os.path.exists(tmpsrc):
os.remove(tmpsrc)
result['changed'] = ('checksum_dest' not in result or
result['checksum_src'] != result['checksum_dest'])
module.exit_json(msg=info.get('msg', ''), **result)
backup_file = None
if result['checksum_src'] != result['checksum_dest']:
try:
if backup:
if os.path.exists(dest):
backup_file = module.backup_local(dest)
module.atomic_move(tmpsrc, dest)
except Exception as e:
if os.path.exists(tmpsrc):
os.remove(tmpsrc)
module.fail_json(msg="failed to copy %s to %s: %s" % (tmpsrc, dest, to_native(e)),
exception=traceback.format_exc(), **result)
result['changed'] = True
else:
result['changed'] = False
if os.path.exists(tmpsrc):
os.remove(tmpsrc)
if checksum != '':
destination_checksum = module.digest_from_file(dest, algorithm)
if checksum != destination_checksum:
os.remove(dest)
module.fail_json(msg="The checksum for %s did not match %s; it was %s." % (dest, checksum, destination_checksum), **result)
# allow file attribute changes
file_args = module.load_file_common_arguments(module.params, path=dest)
result['changed'] = module.set_fs_attributes_if_different(file_args, result['changed'])
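    # load_file_common_arguments()/set_fs_attributes_if_different() apply the
    # common file options (owner, group, mode, SELinux context) to dest and
    # fold any resulting change into the changed flag.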
# Backwards compat only. We'll return None on FIPS enabled systems
try:
result['md5sum'] = module.md5(dest)
except ValueError:
result['md5sum'] = None
if backup_file:
result['backup_file'] = backup_file
# Mission complete
module.exit_json(msg=info.get('msg', ''), status_code=info.get('status', ''), **result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,561 |
ansible-test validate-modules: missing `default` in docs not detected if `default=False` in argspec
|
##### SUMMARY
Happened here: https://github.com/ansible-collections/community.general/pull/341
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
```paste below
devel
```
|
https://github.com/ansible/ansible/issues/69561
|
https://github.com/ansible/ansible/pull/72699
|
f94ba68d8f287918456c5de8115dafb0c69e8e7c
|
5226ac5778d3b57296b925de5d4ad0b485bb11cd
| 2020-05-16T12:58:26Z |
python
| 2020-12-04T17:13:14Z |
lib/ansible/modules/iptables.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2015, Linus Unnebäck <[email protected]>
# Copyright: (c) 2017, Sébastien DA ROCHA <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: iptables
short_description: Modify iptables rules
version_added: "2.0"
author:
- Linus Unnebäck (@LinusU) <[email protected]>
- Sébastien DA ROCHA (@sebastiendarocha)
description:
- C(iptables) is used to set up, maintain, and inspect the tables of IP packet
filter rules in the Linux kernel.
- This module does not handle the saving and/or loading of rules, but rather
only manipulates the current rules that are present in memory. This is the
    same as the behaviour of the C(iptables) and C(ip6tables) commands which
this module uses internally.
notes:
  - This module just deals with individual rules. If you need advanced
    chaining of rules the recommended way is to template the iptables restore
    file.
options:
table:
description:
- This option specifies the packet matching table which the command should operate on.
- If the kernel is configured with automatic module loading, an attempt will be made
to load the appropriate module for that table if it is not already there.
type: str
choices: [ filter, nat, mangle, raw, security ]
default: filter
state:
description:
- Whether the rule should be absent or present.
type: str
choices: [ absent, present ]
default: present
action:
description:
- Whether the rule should be appended at the bottom or inserted at the top.
- If the rule already exists the chain will not be modified.
type: str
choices: [ append, insert ]
default: append
version_added: "2.2"
rule_num:
description:
- Insert the rule as the given rule number.
- This works only with C(action=insert).
type: str
version_added: "2.5"
ip_version:
description:
- Which version of the IP protocol this rule should apply to.
type: str
choices: [ ipv4, ipv6 ]
default: ipv4
chain:
description:
- Specify the iptables chain to modify.
- This could be a user-defined chain or one of the standard iptables chains, like
C(INPUT), C(FORWARD), C(OUTPUT), C(PREROUTING), C(POSTROUTING), C(SECMARK) or C(CONNSECMARK).
type: str
protocol:
description:
- The protocol of the rule or of the packet to check.
- The specified protocol can be one of C(tcp), C(udp), C(udplite), C(icmp), C(ipv6-icmp) or C(icmpv6),
C(esp), C(ah), C(sctp) or the special keyword C(all), or it can be a numeric value,
representing one of these protocols or a different one.
- A protocol name from I(/etc/protocols) is also allowed.
- A C(!) argument before the protocol inverts the test.
- The number zero is equivalent to all.
- C(all) will match with all protocols and is taken as default when this option is omitted.
type: str
source:
description:
- Source specification.
- Address can be either a network name, a hostname, a network IP address
(with /mask), or a plain IP address.
- Hostnames will be resolved once only, before the rule is submitted to
the kernel. Please note that specifying any name to be resolved with
a remote query such as DNS is a really bad idea.
- The mask can be either a network mask or a plain number, specifying
the number of 1's at the left side of the network mask. Thus, a mask
of 24 is equivalent to 255.255.255.0. A C(!) argument before the
address specification inverts the sense of the address.
type: str
destination:
description:
- Destination specification.
- Address can be either a network name, a hostname, a network IP address
(with /mask), or a plain IP address.
- Hostnames will be resolved once only, before the rule is submitted to
the kernel. Please note that specifying any name to be resolved with
a remote query such as DNS is a really bad idea.
- The mask can be either a network mask or a plain number, specifying
the number of 1's at the left side of the network mask. Thus, a mask
of 24 is equivalent to 255.255.255.0. A C(!) argument before the
address specification inverts the sense of the address.
type: str
tcp_flags:
description:
- TCP flags specification.
- C(tcp_flags) expects a dict with the two keys C(flags) and C(flags_set).
type: dict
default: {}
version_added: "2.4"
suboptions:
flags:
description:
- List of flags you want to examine.
type: list
elements: str
flags_set:
description:
- Flags to be set.
type: list
elements: str
match:
description:
- Specifies a match to use, that is, an extension module that tests for
a specific property.
- The set of matches make up the condition under which a target is invoked.
- Matches are evaluated first to last if specified as an array and work in short-circuit
fashion, i.e. if one extension yields false, evaluation will stop.
type: list
elements: str
default: []
jump:
description:
- This specifies the target of the rule; i.e., what to do if the packet matches it.
- The target can be a user-defined chain (other than the one
this rule is in), one of the special builtin targets which decide the
fate of the packet immediately, or an extension (see EXTENSIONS
below).
- If this option is omitted in a rule (and the goto parameter
is not used), then matching the rule will have no effect on the
packet's fate, but the counters on the rule will be incremented.
type: str
gateway:
description:
- This specifies the IP address of host to send the cloned packets.
- This option is only valid when C(jump) is set to C(TEE).
type: str
version_added: "2.8"
log_prefix:
description:
      - Specifies a log text for the rule. Only makes sense with a LOG jump.
type: str
version_added: "2.5"
log_level:
description:
- Logging level according to the syslogd-defined priorities.
- The value can be strings or numbers from 1-8.
- This parameter is only applicable if C(jump) is set to C(LOG).
type: str
version_added: "2.8"
choices: [ '0', '1', '2', '3', '4', '5', '6', '7', 'emerg', 'alert', 'crit', 'error', 'warning', 'notice', 'info', 'debug' ]
goto:
description:
- This specifies that the processing should continue in a user specified chain.
      - Unlike the jump argument, RETURN will not continue processing in
        this chain but instead in the chain that called us via jump.
type: str
in_interface:
description:
- Name of an interface via which a packet was received (only for packets
entering the C(INPUT), C(FORWARD) and C(PREROUTING) chains).
- When the C(!) argument is used before the interface name, the sense is inverted.
- If the interface name ends in a C(+), then any interface which begins with
this name will match.
- If this option is omitted, any interface name will match.
type: str
out_interface:
description:
- Name of an interface via which a packet is going to be sent (for
packets entering the C(FORWARD), C(OUTPUT) and C(POSTROUTING) chains).
- When the C(!) argument is used before the interface name, the sense is inverted.
- If the interface name ends in a C(+), then any interface which begins
with this name will match.
- If this option is omitted, any interface name will match.
type: str
fragment:
description:
- This means that the rule only refers to second and further fragments
of fragmented packets.
- Since there is no way to tell the source or destination ports of such
a packet (or ICMP type), such a packet will not match any rules which specify them.
      - When the C(!) argument precedes the fragment argument, the rule will only match head fragments,
        or unfragmented packets.
type: str
set_counters:
description:
- This enables the administrator to initialize the packet and byte
counters of a rule (during C(INSERT), C(APPEND), C(REPLACE) operations).
type: str
source_port:
description:
- Source port or port range specification.
- This can either be a service name or a port number.
- An inclusive range can also be specified, using the format C(first:last).
- If the first port is omitted, C(0) is assumed; if the last is omitted, C(65535) is assumed.
- If the first port is greater than the second one they will be swapped.
type: str
destination_port:
description:
- "Destination port or port range specification. This can either be
a service name or a port number. An inclusive range can also be
specified, using the format first:last. If the first port is omitted,
'0' is assumed; if the last is omitted, '65535' is assumed. If the
first port is greater than the second one they will be swapped.
This is only valid if the rule also specifies one of the following
protocols: tcp, udp, dccp or sctp."
type: str
to_ports:
description:
      - This specifies a destination port or range of ports to use; without
        this, the destination port is never altered.
      - This is only valid if the rule also specifies one of the protocols
        C(tcp), C(udp), C(dccp) or C(sctp).
type: str
to_destination:
description:
- This specifies a destination address to use with C(DNAT).
- Without this, the destination address is never altered.
type: str
version_added: "2.1"
to_source:
description:
- This specifies a source address to use with C(SNAT).
- Without this, the source address is never altered.
type: str
version_added: "2.2"
syn:
description:
- This allows matching packets that have the SYN bit set and the ACK
and RST bits unset.
- When negated, this matches all packets with the RST or the ACK bits set.
type: str
choices: [ ignore, match, negate ]
default: ignore
version_added: "2.5"
set_dscp_mark:
description:
- This allows specifying a DSCP mark to be added to packets.
It takes either an integer or hex value.
- Mutually exclusive with C(set_dscp_mark_class).
type: str
version_added: "2.1"
set_dscp_mark_class:
description:
- This allows specifying a predefined DiffServ class which will be
translated to the corresponding DSCP mark.
- Mutually exclusive with C(set_dscp_mark).
type: str
version_added: "2.1"
comment:
description:
- This specifies a comment that will be added to the rule.
type: str
ctstate:
description:
- A list of the connection states to match in the conntrack module.
- Possible values are C(INVALID), C(NEW), C(ESTABLISHED), C(RELATED), C(UNTRACKED), C(SNAT), C(DNAT).
type: list
elements: str
default: []
src_range:
description:
- Specifies the source IP range to match in the iprange module.
type: str
version_added: "2.8"
dst_range:
description:
- Specifies the destination IP range to match in the iprange module.
type: str
version_added: "2.8"
limit:
description:
- Specifies the maximum average number of matches to allow per second.
      - The number can specify units explicitly, using C(/second), C(/minute),
        C(/hour) or C(/day), or parts of them (so C(5/second) is the same as
        C(5/s)).
type: str
limit_burst:
description:
- Specifies the maximum burst before the above limit kicks in.
type: str
version_added: "2.1"
uid_owner:
description:
- Specifies the UID or username to use in match by owner rule.
      - From Ansible 2.6, when the C(!) argument is prepended it inverts
        the rule to apply instead to all users except the one specified.
type: str
version_added: "2.1"
gid_owner:
description:
- Specifies the GID or group to use in match by owner rule.
type: str
version_added: "2.9"
reject_with:
description:
- 'Specifies the error packet type to return while rejecting. It implies
"jump: REJECT".'
type: str
version_added: "2.1"
icmp_type:
description:
      - This allows specification of the ICMP type, which can be a numeric
        ICMP type, type/code pair, or one of the ICMP type names shown by the
        command C(iptables -p icmp -h).
type: str
version_added: "2.2"
flush:
description:
- Flushes the specified table and chain of all rules.
- If no chain is specified then the entire table is purged.
- Ignores all other parameters.
type: bool
version_added: "2.2"
policy:
description:
- Set the policy for the chain to the given target.
- Only built-in chains can have policies.
- This parameter requires the C(chain) parameter.
- Ignores all other parameters.
type: str
choices: [ ACCEPT, DROP, QUEUE, RETURN ]
version_added: "2.2"
wait:
description:
- Wait N seconds for the xtables lock to prevent multiple instances of
the program from running concurrently.
type: str
version_added: "2.10"
'''
EXAMPLES = r'''
- name: Block specific IP
ansible.builtin.iptables:
chain: INPUT
source: 8.8.8.8
jump: DROP
become: yes
- name: Forward port 80 to 8600
ansible.builtin.iptables:
table: nat
chain: PREROUTING
in_interface: eth0
protocol: tcp
match: tcp
destination_port: 80
jump: REDIRECT
to_ports: 8600
comment: Redirect web traffic to port 8600
become: yes
- name: Allow related and established connections
ansible.builtin.iptables:
chain: INPUT
ctstate: ESTABLISHED,RELATED
jump: ACCEPT
become: yes
- name: Allow new incoming SYN packets on TCP port 22 (SSH)
ansible.builtin.iptables:
chain: INPUT
protocol: tcp
destination_port: 22
ctstate: NEW
syn: match
jump: ACCEPT
comment: Accept new SSH connections.
- name: Match on IP ranges
ansible.builtin.iptables:
chain: FORWARD
src_range: 192.168.1.100-192.168.1.199
dst_range: 10.0.0.1-10.0.0.50
jump: ACCEPT
- name: Tag all outbound tcp packets with DSCP mark 8
ansible.builtin.iptables:
chain: OUTPUT
jump: DSCP
table: mangle
set_dscp_mark: 8
protocol: tcp
- name: Tag all outbound tcp packets with DSCP DiffServ class CS1
ansible.builtin.iptables:
chain: OUTPUT
jump: DSCP
table: mangle
set_dscp_mark_class: CS1
protocol: tcp
- name: Insert a rule on line 5
ansible.builtin.iptables:
chain: INPUT
protocol: tcp
destination_port: 8080
jump: ACCEPT
action: insert
rule_num: 5
- name: Set the policy for the INPUT chain to DROP
ansible.builtin.iptables:
chain: INPUT
policy: DROP
- name: Reject tcp with tcp-reset
ansible.builtin.iptables:
chain: INPUT
protocol: tcp
reject_with: tcp-reset
ip_version: ipv4
- name: Set tcp flags
ansible.builtin.iptables:
chain: OUTPUT
jump: DROP
protocol: tcp
tcp_flags:
flags: ALL
flags_set:
- ACK
- RST
- SYN
- FIN
- name: Iptables flush filter
ansible.builtin.iptables:
chain: "{{ item }}"
flush: yes
with_items: [ 'INPUT', 'FORWARD', 'OUTPUT' ]
- name: Iptables flush nat
ansible.builtin.iptables:
table: nat
chain: '{{ item }}'
flush: yes
with_items: [ 'INPUT', 'OUTPUT', 'PREROUTING', 'POSTROUTING' ]
- name: Log packets arriving into a user-defined chain
ansible.builtin.iptables:
chain: LOGGING
action: append
state: present
limit: 2/second
limit_burst: 20
log_prefix: "IPTABLES:INFO: "
log_level: info
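# Editorial addition, not part of the original examples; it assumes the
# documented wait option takes a plain number of seconds.
- name: Wait up to 10 seconds for the xtables lock while inserting a rule
  ansible.builtin.iptables:
    chain: INPUT
    protocol: tcp
    destination_port: 443
    jump: ACCEPT
    wait: 10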
'''
import re
from distutils.version import LooseVersion
from ansible.module_utils.basic import AnsibleModule
IPTABLES_WAIT_SUPPORT_ADDED = '1.4.20'
IPTABLES_WAIT_WITH_SECONDS_SUPPORT_ADDED = '1.6.0'
BINS = dict(
ipv4='iptables',
ipv6='ip6tables',
)
ICMP_TYPE_OPTIONS = dict(
ipv4='--icmp-type',
ipv6='--icmpv6-type',
)
def append_param(rule, param, flag, is_list):
if is_list:
for item in param:
append_param(rule, item, flag, False)
else:
if param is not None:
if param[0] == '!':
rule.extend(['!', flag, param[1:]])
else:
rule.extend([flag, param])
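# For illustration (hypothetical call): append_param(rule, '!192.0.2.1', '-s', False)
# extends rule with ['!', '-s', '192.0.2.1']; a leading '!' in the value becomes
# iptables' standalone negation argument placed before the flag.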
def append_tcp_flags(rule, param, flag):
if param:
if 'flags' in param and 'flags_set' in param:
rule.extend([flag, ','.join(param['flags']), ','.join(param['flags_set'])])
def append_match_flag(rule, param, flag, negatable):
if param == 'match':
rule.extend([flag])
elif negatable and param == 'negate':
rule.extend(['!', flag])
def append_csv(rule, param, flag):
if param:
rule.extend([flag, ','.join(param)])
def append_match(rule, param, match):
if param:
rule.extend(['-m', match])
def append_jump(rule, param, jump):
if param:
rule.extend(['-j', jump])
def append_wait(rule, param, flag):
if param:
rule.extend([flag, param])
def construct_rule(params):
rule = []
append_wait(rule, params['wait'], '-w')
append_param(rule, params['protocol'], '-p', False)
append_param(rule, params['source'], '-s', False)
append_param(rule, params['destination'], '-d', False)
append_param(rule, params['match'], '-m', True)
append_tcp_flags(rule, params['tcp_flags'], '--tcp-flags')
append_param(rule, params['jump'], '-j', False)
if params.get('jump') and params['jump'].lower() == 'tee':
append_param(rule, params['gateway'], '--gateway', False)
append_param(rule, params['log_prefix'], '--log-prefix', False)
append_param(rule, params['log_level'], '--log-level', False)
append_param(rule, params['to_destination'], '--to-destination', False)
append_param(rule, params['to_source'], '--to-source', False)
append_param(rule, params['goto'], '-g', False)
append_param(rule, params['in_interface'], '-i', False)
append_param(rule, params['out_interface'], '-o', False)
append_param(rule, params['fragment'], '-f', False)
append_param(rule, params['set_counters'], '-c', False)
append_param(rule, params['source_port'], '--source-port', False)
append_param(rule, params['destination_port'], '--destination-port', False)
append_param(rule, params['to_ports'], '--to-ports', False)
append_param(rule, params['set_dscp_mark'], '--set-dscp', False)
append_param(
rule,
params['set_dscp_mark_class'],
'--set-dscp-class',
False)
append_match_flag(rule, params['syn'], '--syn', True)
if 'conntrack' in params['match']:
append_csv(rule, params['ctstate'], '--ctstate')
elif 'state' in params['match']:
append_csv(rule, params['ctstate'], '--state')
elif params['ctstate']:
append_match(rule, params['ctstate'], 'conntrack')
append_csv(rule, params['ctstate'], '--ctstate')
if 'iprange' in params['match']:
append_param(rule, params['src_range'], '--src-range', False)
append_param(rule, params['dst_range'], '--dst-range', False)
elif params['src_range'] or params['dst_range']:
append_match(rule, params['src_range'] or params['dst_range'], 'iprange')
append_param(rule, params['src_range'], '--src-range', False)
append_param(rule, params['dst_range'], '--dst-range', False)
append_match(rule, params['limit'] or params['limit_burst'], 'limit')
append_param(rule, params['limit'], '--limit', False)
append_param(rule, params['limit_burst'], '--limit-burst', False)
append_match(rule, params['uid_owner'], 'owner')
append_match_flag(rule, params['uid_owner'], '--uid-owner', True)
append_param(rule, params['uid_owner'], '--uid-owner', False)
append_match(rule, params['gid_owner'], 'owner')
append_match_flag(rule, params['gid_owner'], '--gid-owner', True)
append_param(rule, params['gid_owner'], '--gid-owner', False)
if params['jump'] is None:
append_jump(rule, params['reject_with'], 'REJECT')
append_param(rule, params['reject_with'], '--reject-with', False)
append_param(
rule,
params['icmp_type'],
ICMP_TYPE_OPTIONS[params['ip_version']],
False)
append_match(rule, params['comment'], 'comment')
append_param(rule, params['comment'], '--comment', False)
return rule
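# For illustration (hypothetical params): with protocol='tcp',
# source='!10.0.0.0/8', jump='DROP' and all other options empty or None,
# construct_rule() returns ['-p', 'tcp', '!', '-s', '10.0.0.0/8', '-j', 'DROP'].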
def push_arguments(iptables_path, action, params, make_rule=True):
cmd = [iptables_path]
cmd.extend(['-t', params['table']])
cmd.extend([action, params['chain']])
if action == '-I' and params['rule_num']:
cmd.extend([params['rule_num']])
if make_rule:
cmd.extend(construct_rule(params))
return cmd
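# For illustration: push_arguments('/sbin/iptables', '-A', params) yields a
# command list such as ['/sbin/iptables', '-t', 'filter', '-A', 'INPUT', ...]
# with the construct_rule() output appended unless make_rule=False.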
def check_present(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-C', params)
rc, _, __ = module.run_command(cmd, check_rc=False)
return (rc == 0)
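# Note: '-C' asks iptables to check for an exact rule match; it exits 0 when
# the rule exists and non-zero otherwise, hence check_rc=False above.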
def append_rule(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-A', params)
module.run_command(cmd, check_rc=True)
def insert_rule(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-I', params)
module.run_command(cmd, check_rc=True)
def remove_rule(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-D', params)
module.run_command(cmd, check_rc=True)
def flush_table(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-F', params, make_rule=False)
module.run_command(cmd, check_rc=True)
def set_chain_policy(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-P', params, make_rule=False)
cmd.append(params['policy'])
module.run_command(cmd, check_rc=True)
def get_chain_policy(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-L', params)
rc, out, _ = module.run_command(cmd, check_rc=True)
chain_header = out.split("\n")[0]
result = re.search(r'\(policy ([A-Z]+)\)', chain_header)
if result:
return result.group(1)
return None
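# For illustration: 'iptables -t filter -L INPUT' starts with a header line
# such as 'Chain INPUT (policy ACCEPT)', from which the regex extracts 'ACCEPT'.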
def get_iptables_version(iptables_path, module):
cmd = [iptables_path, '--version']
rc, out, _ = module.run_command(cmd, check_rc=True)
return out.split('v')[1].rstrip('\n')
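# For illustration: '--version' prints a line like 'iptables v1.4.21'; splitting
# on 'v' leaves the bare '1.4.21' that the LooseVersion checks below rely on.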
def main():
module = AnsibleModule(
supports_check_mode=True,
argument_spec=dict(
table=dict(type='str', default='filter', choices=['filter', 'nat', 'mangle', 'raw', 'security']),
state=dict(type='str', default='present', choices=['absent', 'present']),
action=dict(type='str', default='append', choices=['append', 'insert']),
ip_version=dict(type='str', default='ipv4', choices=['ipv4', 'ipv6']),
chain=dict(type='str'),
rule_num=dict(type='str'),
protocol=dict(type='str'),
wait=dict(type='str'),
source=dict(type='str'),
to_source=dict(type='str'),
destination=dict(type='str'),
to_destination=dict(type='str'),
match=dict(type='list', elements='str', default=[]),
tcp_flags=dict(type='dict',
options=dict(
flags=dict(type='list', elements='str'),
flags_set=dict(type='list', elements='str'))
),
jump=dict(type='str'),
gateway=dict(type='str'),
log_prefix=dict(type='str'),
log_level=dict(type='str',
choices=['0', '1', '2', '3', '4', '5', '6', '7',
'emerg', 'alert', 'crit', 'error',
'warning', 'notice', 'info', 'debug'],
default=None,
),
goto=dict(type='str'),
in_interface=dict(type='str'),
out_interface=dict(type='str'),
fragment=dict(type='str'),
set_counters=dict(type='str'),
source_port=dict(type='str'),
destination_port=dict(type='str'),
to_ports=dict(type='str'),
set_dscp_mark=dict(type='str'),
set_dscp_mark_class=dict(type='str'),
comment=dict(type='str'),
ctstate=dict(type='list', elements='str', default=[]),
src_range=dict(type='str'),
dst_range=dict(type='str'),
limit=dict(type='str'),
limit_burst=dict(type='str'),
uid_owner=dict(type='str'),
gid_owner=dict(type='str'),
reject_with=dict(type='str'),
icmp_type=dict(type='str'),
syn=dict(type='str', default='ignore', choices=['ignore', 'match', 'negate']),
flush=dict(type='bool', default=False),
policy=dict(type='str', choices=['ACCEPT', 'DROP', 'QUEUE', 'RETURN']),
),
mutually_exclusive=(
['set_dscp_mark', 'set_dscp_mark_class'],
['flush', 'policy'],
),
required_if=[
['jump', 'TEE', ['gateway']],
['jump', 'tee', ['gateway']],
]
)
args = dict(
changed=False,
failed=False,
ip_version=module.params['ip_version'],
table=module.params['table'],
chain=module.params['chain'],
flush=module.params['flush'],
rule=' '.join(construct_rule(module.params)),
state=module.params['state'],
)
ip_version = module.params['ip_version']
iptables_path = module.get_bin_path(BINS[ip_version], True)
# Check if chain option is required
if args['flush'] is False and args['chain'] is None:
module.fail_json(msg="Either chain or flush parameter must be specified.")
if module.params.get('log_prefix', None) or module.params.get('log_level', None):
if module.params['jump'] is None:
module.params['jump'] = 'LOG'
elif module.params['jump'] != 'LOG':
module.fail_json(msg="Logging options can only be used with the LOG jump target.")
# Check if wait option is supported
iptables_version = LooseVersion(get_iptables_version(iptables_path, module))
if iptables_version >= LooseVersion(IPTABLES_WAIT_SUPPORT_ADDED):
if iptables_version < LooseVersion(IPTABLES_WAIT_WITH_SECONDS_SUPPORT_ADDED):
module.params['wait'] = ''
else:
module.params['wait'] = None
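    # Versions in [1.4.20, 1.6.0) support '-w' only as a bare flag, so wait is
    # coerced to ''; note that append_wait() skips falsy values, so the flag is
    # effectively dropped there too. Below 1.4.20 the option is unsupported and
    # cleared to None outright.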
# Flush the table
if args['flush'] is True:
args['changed'] = True
if not module.check_mode:
flush_table(iptables_path, module, module.params)
# Set the policy
elif module.params['policy']:
current_policy = get_chain_policy(iptables_path, module, module.params)
if not current_policy:
module.fail_json(msg='Can\'t detect current policy')
changed = current_policy != module.params['policy']
args['changed'] = changed
if changed and not module.check_mode:
set_chain_policy(iptables_path, module, module.params)
else:
insert = (module.params['action'] == 'insert')
rule_is_present = check_present(iptables_path, module, module.params)
should_be_present = (args['state'] == 'present')
# Check if target is up to date
args['changed'] = (rule_is_present != should_be_present)
if args['changed'] is False:
# Target is already up to date
module.exit_json(**args)
# Check only; don't modify
if not module.check_mode:
if should_be_present:
if insert:
insert_rule(iptables_path, module, module.params)
else:
append_rule(iptables_path, module, module.params)
else:
remove_rule(iptables_path, module, module.params)
module.exit_json(**args)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,561 |
ansible-test validate-modules: missing `default` in docs not detected if `default=False` in argspec
|
##### SUMMARY
Happened here: https://github.com/ansible-collections/community.general/pull/341
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
```paste below
devel
```
|
https://github.com/ansible/ansible/issues/69561
|
https://github.com/ansible/ansible/pull/72699
|
f94ba68d8f287918456c5de8115dafb0c69e8e7c
|
5226ac5778d3b57296b925de5d4ad0b485bb11cd
| 2020-05-16T12:58:26Z |
python
| 2020-12-04T17:13:14Z |
test/lib/ansible_test/_data/sanity/validate-modules/validate_modules/main.py
|
# -*- coding: utf-8 -*-
#
# Copyright (C) 2015 Matt Martz <[email protected]>
# Copyright (C) 2015 Rackspace US, Inc.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import abc
import argparse
import ast
import datetime
import json
import errno
import os
import re
import subprocess
import sys
import tempfile
import traceback
from collections import OrderedDict
from contextlib import contextmanager
from distutils.version import StrictVersion, LooseVersion
from fnmatch import fnmatch
import yaml
from ansible import __version__ as ansible_version
from ansible.executor.module_common import REPLACER_WINDOWS
from ansible.module_utils.common._collections_compat import Mapping
from ansible.plugins.loader import fragment_loader
from ansible.utils.collection_loader._collection_finder import _AnsibleCollectionFinder
from ansible.utils.plugin_docs import REJECTLIST, add_collection_to_versions_and_dates, add_fragments, get_docstring
from ansible.utils.version import SemanticVersion
from .module_args import AnsibleModuleImportError, AnsibleModuleNotInitialized, get_argument_spec
from .schema import ansible_module_kwargs_schema, doc_schema, return_schema
from .utils import CaptureStd, NoArgsAnsibleModule, compare_unordered_lists, is_empty, parse_yaml, parse_isodate
from voluptuous.humanize import humanize_error
from ansible.module_utils.six import PY3, with_metaclass, string_types
if PY3:
# Because there is no ast.TryExcept in Python 3 ast module
TRY_EXCEPT = ast.Try
# REPLACER_WINDOWS from ansible.executor.module_common is byte
# string but we need unicode for Python 3
REPLACER_WINDOWS = REPLACER_WINDOWS.decode('utf-8')
else:
TRY_EXCEPT = ast.TryExcept
REJECTLIST_DIRS = frozenset(('.git', 'test', '.github', '.idea'))
INDENT_REGEX = re.compile(r'([\t]*)')
TYPE_REGEX = re.compile(r'.*(if|or)(\s+[^"\']*|\s+)(?<!_)(?<!str\()type\([^)].*')
SYS_EXIT_REGEX = re.compile(r'[^#]*sys.exit\s*\(.*')
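# For illustration: SYS_EXIT_REGEX matches a line such as 'sys.exit(1)' but not
# '# sys.exit(1)', since the leading [^#]* cannot cross a comment marker.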
REJECTLIST_IMPORTS = {
'requests': {
'new_only': True,
'error': {
'code': 'use-module-utils-urls',
'msg': ('requests import found, should use '
'ansible.module_utils.urls instead')
}
},
r'boto(?:\.|$)': {
'new_only': True,
'error': {
'code': 'use-boto3',
'msg': 'boto import found, new modules should use boto3'
}
},
}
SUBPROCESS_REGEX = re.compile(r'subprocess\.Po.*')
OS_CALL_REGEX = re.compile(r'os\.call.*')
LOOSE_ANSIBLE_VERSION = LooseVersion('.'.join(ansible_version.split('.')[:3]))
def compare_dates(d1, d2):
try:
date1 = parse_isodate(d1, allow_date=True)
date2 = parse_isodate(d2, allow_date=True)
return date1 == date2
except ValueError:
# At least one of d1 and d2 cannot be parsed. Simply compare values.
return d1 == d2
class ReporterEncoder(json.JSONEncoder):
def default(self, o):
if isinstance(o, Exception):
return str(o)
return json.JSONEncoder.default(self, o)
class Reporter:
def __init__(self):
self.files = OrderedDict()
def _ensure_default_entry(self, path):
try:
self.files[path]
except KeyError:
self.files[path] = {
'errors': [],
'warnings': [],
'traces': [],
'warning_traces': []
}
def _log(self, path, code, msg, level='error', line=0, column=0):
self._ensure_default_entry(path)
lvl_dct = self.files[path]['%ss' % level]
lvl_dct.append({
'code': code,
'msg': msg,
'line': line,
'column': column
})
def error(self, *args, **kwargs):
self._log(*args, level='error', **kwargs)
def warning(self, *args, **kwargs):
self._log(*args, level='warning', **kwargs)
def trace(self, path, tracebk):
self._ensure_default_entry(path)
self.files[path]['traces'].append(tracebk)
def warning_trace(self, path, tracebk):
self._ensure_default_entry(path)
self.files[path]['warning_traces'].append(tracebk)
@staticmethod
@contextmanager
def _output_handle(output):
if output != '-':
handle = open(output, 'w+')
else:
handle = sys.stdout
yield handle
handle.flush()
handle.close()
@staticmethod
def _filter_out_ok(reports):
temp_reports = OrderedDict()
for path, report in reports.items():
if report['errors'] or report['warnings']:
temp_reports[path] = report
return temp_reports
def plain(self, warnings=False, output='-'):
"""Print out the test results in plain format
output is ignored here for now
"""
ret = []
for path, report in Reporter._filter_out_ok(self.files).items():
traces = report['traces'][:]
if warnings and report['warnings']:
traces.extend(report['warning_traces'])
for trace in traces:
print('TRACE:')
print('\n '.join((' %s' % trace).splitlines()))
for error in report['errors']:
error['path'] = path
print('%(path)s:%(line)d:%(column)d: E%(code)s %(msg)s' % error)
ret.append(1)
if warnings:
for warning in report['warnings']:
warning['path'] = path
print('%(path)s:%(line)d:%(column)d: W%(code)s %(msg)s' % warning)
return 3 if ret else 0
def json(self, warnings=False, output='-'):
"""Print out the test results in json format
warnings is not respected in this output
"""
ret = [len(r['errors']) for r in self.files.values()]
with Reporter._output_handle(output) as handle:
print(json.dumps(Reporter._filter_out_ok(self.files), indent=4, cls=ReporterEncoder), file=handle)
return 3 if sum(ret) else 0
class Validator(with_metaclass(abc.ABCMeta, object)):
"""Validator instances are intended to be run on a single object. if you
are scanning multiple objects for problems, you'll want to have a separate
Validator for each one."""
def __init__(self, reporter=None):
self.reporter = reporter
@abc.abstractproperty
def object_name(self):
"""Name of the object we validated"""
pass
@abc.abstractproperty
def object_path(self):
"""Path of the object we validated"""
pass
@abc.abstractmethod
def validate(self):
"""Run this method to generate the test results"""
pass
class ModuleValidator(Validator):
REJECTLIST_PATTERNS = ('.git*', '*.pyc', '*.pyo', '.*', '*.md', '*.rst', '*.txt')
REJECTLIST_FILES = frozenset(('.git', '.gitignore', '.travis.yml',
'shippable.yml',
'.gitattributes', '.gitmodules', 'COPYING',
'__init__.py', 'VERSION', 'test-docs.sh'))
REJECTLIST = REJECTLIST_FILES.union(REJECTLIST['MODULE'])
PS_DOC_REJECTLIST = frozenset((
'async_status.ps1',
'slurp.ps1',
'setup.ps1'
))
# win_dsc is a dynamic arg spec, the docs won't ever match
PS_ARG_VALIDATE_REJECTLIST = frozenset(('win_dsc.ps1', ))
ACCEPTLIST_FUTURE_IMPORTS = frozenset(('absolute_import', 'division', 'print_function'))
def __init__(self, path, analyze_arg_spec=False, collection=None, collection_version=None,
base_branch=None, git_cache=None, reporter=None, routing=None):
super(ModuleValidator, self).__init__(reporter=reporter or Reporter())
self.path = path
self.basename = os.path.basename(self.path)
self.name = os.path.splitext(self.basename)[0]
self.analyze_arg_spec = analyze_arg_spec
self._Version = LooseVersion
self._StrictVersion = StrictVersion
self.collection = collection
self.collection_name = 'ansible.builtin'
if self.collection:
self._Version = SemanticVersion
self._StrictVersion = SemanticVersion
collection_namespace_path, collection_name = os.path.split(self.collection)
self.collection_name = '%s.%s' % (os.path.basename(collection_namespace_path), collection_name)
self.routing = routing
self.collection_version = None
if collection_version is not None:
self.collection_version_str = collection_version
self.collection_version = SemanticVersion(collection_version)
self.base_branch = base_branch
self.git_cache = git_cache or GitCache()
self._python_module_override = False
with open(path) as f:
self.text = f.read()
self.length = len(self.text.splitlines())
try:
self.ast = ast.parse(self.text)
except Exception:
self.ast = None
if base_branch:
self.base_module = self._get_base_file()
else:
self.base_module = None
def _create_version(self, v, collection_name=None):
if not v:
raise ValueError('Empty string is not a valid version')
if collection_name == 'ansible.builtin':
return LooseVersion(v)
if collection_name is not None:
return SemanticVersion(v)
return self._Version(v)
def _create_strict_version(self, v, collection_name=None):
if not v:
raise ValueError('Empty string is not a valid version')
if collection_name == 'ansible.builtin':
return StrictVersion(v)
if collection_name is not None:
return SemanticVersion(v)
return self._StrictVersion(v)
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
if not self.base_module:
return
try:
os.remove(self.base_module)
except Exception:
pass
@property
def object_name(self):
return self.basename
@property
def object_path(self):
return self.path
def _get_collection_meta(self):
"""Implement if we need this for version_added comparisons
"""
pass
def _python_module(self):
if self.path.endswith('.py') or self._python_module_override:
return True
return False
def _powershell_module(self):
if self.path.endswith('.ps1'):
return True
return False
def _just_docs(self):
"""Module can contain just docs and from __future__ boilerplate
"""
try:
for child in self.ast.body:
if not isinstance(child, ast.Assign):
# allowed from __future__ imports
if isinstance(child, ast.ImportFrom) and child.module == '__future__':
for future_import in child.names:
if future_import.name not in self.ACCEPTLIST_FUTURE_IMPORTS:
break
else:
continue
return False
return True
except AttributeError:
return False
def _get_base_branch_module_path(self):
"""List all paths within lib/ansible/modules to try and match a moved module"""
return self.git_cache.base_module_paths.get(self.object_name)
def _has_alias(self):
"""Return true if the module has any aliases."""
return self.object_name in self.git_cache.head_aliased_modules
def _get_base_file(self):
# In case of module moves, look for the original location
base_path = self._get_base_branch_module_path()
command = ['git', 'show', '%s:%s' % (self.base_branch, base_path or self.path)]
p = subprocess.Popen(command, stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
if int(p.returncode) != 0:
return None
t = tempfile.NamedTemporaryFile(delete=False)
t.write(stdout)
t.close()
return t.name
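    # The 'git show <base_branch>:<path>' invocation above materializes the
    # base-branch copy of the module into a temp file; a non-zero exit (file
    # absent on the base branch) returns None, which _is_new_module() reads
    # as "this module is new".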
def _is_new_module(self):
if self._has_alias():
return False
return not self.object_name.startswith('_') and bool(self.base_branch) and not bool(self.base_module)
def _check_interpreter(self, powershell=False):
if powershell:
if not self.text.startswith('#!powershell\n'):
self.reporter.error(
path=self.object_path,
code='missing-powershell-interpreter',
msg='Interpreter line is not "#!powershell"'
)
return
if not self.text.startswith('#!/usr/bin/python'):
self.reporter.error(
path=self.object_path,
code='missing-python-interpreter',
msg='Interpreter line is not "#!/usr/bin/python"',
)
def _check_type_instead_of_isinstance(self, powershell=False):
if powershell:
return
for line_no, line in enumerate(self.text.splitlines()):
typekeyword = TYPE_REGEX.match(line)
if typekeyword:
# TODO: add column
self.reporter.error(
path=self.object_path,
code='unidiomatic-typecheck',
msg=('Type comparison using type() found. '
'Use isinstance() instead'),
line=line_no + 1
)
def _check_for_sys_exit(self):
# Optimize out the happy path
if 'sys.exit' not in self.text:
return
for line_no, line in enumerate(self.text.splitlines()):
sys_exit_usage = SYS_EXIT_REGEX.match(line)
if sys_exit_usage:
# TODO: add column
self.reporter.error(
path=self.object_path,
code='use-fail-json-not-sys-exit',
msg='sys.exit() call found. Should be exit_json/fail_json',
line=line_no + 1
)
def _check_gpl3_header(self):
header = '\n'.join(self.text.split('\n')[:20])
if ('GNU General Public License' not in header or
('version 3' not in header and 'v3.0' not in header)):
self.reporter.error(
path=self.object_path,
code='missing-gplv3-license',
msg='GPLv3 license header not found in the first 20 lines of the module'
)
elif self._is_new_module():
if len([line for line in header
if 'GNU General Public License' in line]) > 1:
self.reporter.error(
path=self.object_path,
code='use-short-gplv3-license',
msg='Found old style GPLv3 license header: '
'https://docs.ansible.com/ansible/devel/dev_guide/developing_modules_documenting.html#copyright'
)
def _check_for_subprocess(self):
for child in self.ast.body:
if isinstance(child, ast.Import):
if child.names[0].name == 'subprocess':
for line_no, line in enumerate(self.text.splitlines()):
sp_match = SUBPROCESS_REGEX.search(line)
if sp_match:
self.reporter.error(
path=self.object_path,
code='use-run-command-not-popen',
msg=('subprocess.Popen call found. Should be module.run_command'),
line=(line_no + 1),
column=(sp_match.span()[0] + 1)
)
def _check_for_os_call(self):
if 'os.call' in self.text:
for line_no, line in enumerate(self.text.splitlines()):
os_call_match = OS_CALL_REGEX.search(line)
if os_call_match:
self.reporter.error(
path=self.object_path,
code='use-run-command-not-os-call',
msg=('os.call() call found. Should be module.run_command'),
line=(line_no + 1),
column=(os_call_match.span()[0] + 1)
)
def _find_blacklist_imports(self):
for child in self.ast.body:
names = []
if isinstance(child, ast.Import):
names.extend(child.names)
elif isinstance(child, TRY_EXCEPT):
bodies = child.body
for handler in child.handlers:
bodies.extend(handler.body)
for grandchild in bodies:
if isinstance(grandchild, ast.Import):
names.extend(grandchild.names)
for name in names:
# TODO: Add line/col
for blacklist_import, options in REJECTLIST_IMPORTS.items():
if re.search(blacklist_import, name.name):
new_only = options['new_only']
if self._is_new_module() and new_only:
self.reporter.error(
path=self.object_path,
**options['error']
)
elif not new_only:
self.reporter.error(
path=self.object_path,
**options['error']
)
def _find_module_utils(self, main):
linenos = []
found_basic = False
for child in self.ast.body:
if isinstance(child, (ast.Import, ast.ImportFrom)):
names = []
try:
names.append(child.module)
if child.module.endswith('.basic'):
found_basic = True
except AttributeError:
pass
names.extend([n.name for n in child.names])
if [n for n in names if n.startswith('ansible.module_utils')]:
linenos.append(child.lineno)
for name in child.names:
if ('module_utils' in getattr(child, 'module', '') and
isinstance(name, ast.alias) and
name.name == '*'):
msg = (
'module-utils-specific-import',
('module_utils imports should import specific '
'components, not "*"')
)
if self._is_new_module():
self.reporter.error(
path=self.object_path,
code=msg[0],
msg=msg[1],
line=child.lineno
)
else:
self.reporter.warning(
path=self.object_path,
code=msg[0],
msg=msg[1],
line=child.lineno
)
if (isinstance(name, ast.alias) and
name.name == 'basic'):
found_basic = True
if not found_basic:
self.reporter.warning(
path=self.object_path,
code='missing-module-utils-basic-import',
msg='Did not find "ansible.module_utils.basic" import'
)
return linenos
def _get_first_callable(self):
linenos = []
for child in self.ast.body:
if isinstance(child, (ast.FunctionDef, ast.ClassDef)):
linenos.append(child.lineno)
return min(linenos)
def _find_main_call(self, look_for="main"):
""" Ensure that the module ends with:
if __name__ == '__main__':
main()
OR, in the case of modules that are in the docs-only deprecation phase
if __name__ == '__main__':
removed_module()
"""
lineno = False
if_bodies = []
for child in self.ast.body:
if isinstance(child, ast.If):
try:
if child.test.left.id == '__name__':
if_bodies.extend(child.body)
except AttributeError:
pass
bodies = self.ast.body
bodies.extend(if_bodies)
for child in bodies:
# validate that the next to last line is 'if __name__ == "__main__"'
if child.lineno == (self.length - 1):
mainchecked = False
try:
if isinstance(child, ast.If) and \
child.test.left.id == '__name__' and \
len(child.test.ops) == 1 and \
isinstance(child.test.ops[0], ast.Eq) and \
child.test.comparators[0].s == '__main__':
mainchecked = True
except Exception:
pass
if not mainchecked:
self.reporter.error(
path=self.object_path,
code='missing-if-name-main',
msg='Next to last line should be: if __name__ == "__main__":',
line=child.lineno
)
# validate that the final line is a call to main()
if isinstance(child, ast.Expr):
if isinstance(child.value, ast.Call):
if (isinstance(child.value.func, ast.Name) and
child.value.func.id == look_for):
lineno = child.lineno
if lineno < self.length - 1:
self.reporter.error(
path=self.object_path,
code='last-line-main-call',
msg=('Call to %s() not the last line' % look_for),
line=lineno
)
if not lineno:
self.reporter.error(
path=self.object_path,
code='missing-main-call',
msg=('Did not find a call to %s()' % look_for)
)
return lineno or 0
def _find_has_import(self):
for child in self.ast.body:
found_try_except_import = False
found_has = False
if isinstance(child, TRY_EXCEPT):
bodies = child.body
for handler in child.handlers:
bodies.extend(handler.body)
for grandchild in bodies:
if isinstance(grandchild, ast.Import):
found_try_except_import = True
if isinstance(grandchild, ast.Assign):
for target in grandchild.targets:
if target.id.lower().startswith('has_'):
found_has = True
if found_try_except_import and not found_has:
# TODO: Add line/col
self.reporter.warning(
path=self.object_path,
code='try-except-missing-has',
msg='Found Try/Except block without HAS_ assignment'
)
def _ensure_imports_below_docs(self, doc_info, first_callable):
try:
min_doc_line = min(
[doc_info[key]['lineno'] for key in doc_info if doc_info[key]['lineno']]
)
except ValueError:
# We can't perform this validation, as there are no DOCs provided at all
return
max_doc_line = max(
[doc_info[key]['end_lineno'] for key in doc_info if doc_info[key]['end_lineno']]
)
import_lines = []
for child in self.ast.body:
if isinstance(child, (ast.Import, ast.ImportFrom)):
if isinstance(child, ast.ImportFrom) and child.module == '__future__':
# allowed from __future__ imports
for future_import in child.names:
if future_import.name not in self.ACCEPTLIST_FUTURE_IMPORTS:
self.reporter.error(
path=self.object_path,
code='illegal-future-imports',
msg=('Only the following from __future__ imports are allowed: %s'
% ', '.join(self.ACCEPTLIST_FUTURE_IMPORTS)),
line=child.lineno
)
break
                    else:  # for-else. If we didn't find a problem and break out of the loop, then this is a legal import
continue
import_lines.append(child.lineno)
if child.lineno < min_doc_line:
self.reporter.error(
path=self.object_path,
code='import-before-documentation',
msg=('Import found before documentation variables. '
'All imports must appear below '
'DOCUMENTATION/EXAMPLES/RETURN.'),
line=child.lineno
)
break
elif isinstance(child, TRY_EXCEPT):
bodies = child.body
for handler in child.handlers:
bodies.extend(handler.body)
for grandchild in bodies:
if isinstance(grandchild, (ast.Import, ast.ImportFrom)):
import_lines.append(grandchild.lineno)
if grandchild.lineno < min_doc_line:
self.reporter.error(
path=self.object_path,
code='import-before-documentation',
msg=('Import found before documentation '
'variables. All imports must appear below '
'DOCUMENTATION/EXAMPLES/RETURN.'),
line=child.lineno
)
break
for import_line in import_lines:
if not (max_doc_line < import_line < first_callable):
msg = (
'import-placement',
('Imports should be directly below DOCUMENTATION/EXAMPLES/'
'RETURN.')
)
if self._is_new_module():
self.reporter.error(
path=self.object_path,
code=msg[0],
msg=msg[1],
line=import_line
)
else:
self.reporter.warning(
path=self.object_path,
code=msg[0],
msg=msg[1],
line=import_line
)
def _validate_ps_replacers(self):
# loop all (for/else + error)
# get module list for each
# check "shape" of each module name
module_requires = r'(?im)^#\s*requires\s+\-module(?:s?)\s*(Ansible\.ModuleUtils\..+)'
csharp_requires = r'(?im)^#\s*ansiblerequires\s+\-csharputil\s*(Ansible\..+)'
found_requires = False
for req_stmt in re.finditer(module_requires, self.text):
found_requires = True
# this will bomb on dictionary format - "don't do that"
module_list = [x.strip() for x in req_stmt.group(1).split(',')]
if len(module_list) > 1:
self.reporter.error(
path=self.object_path,
code='multiple-utils-per-requires',
msg='Ansible.ModuleUtils requirements do not support multiple modules per statement: "%s"' % req_stmt.group(0)
)
continue
module_name = module_list[0]
if module_name.lower().endswith('.psm1'):
self.reporter.error(
path=self.object_path,
code='invalid-requires-extension',
msg='Module #Requires should not end in .psm1: "%s"' % module_name
)
for req_stmt in re.finditer(csharp_requires, self.text):
found_requires = True
# this will bomb on dictionary format - "don't do that"
module_list = [x.strip() for x in req_stmt.group(1).split(',')]
if len(module_list) > 1:
self.reporter.error(
path=self.object_path,
code='multiple-csharp-utils-per-requires',
msg='Ansible C# util requirements do not support multiple utils per statement: "%s"' % req_stmt.group(0)
)
continue
module_name = module_list[0]
if module_name.lower().endswith('.cs'):
self.reporter.error(
path=self.object_path,
code='illegal-extension-cs',
msg='Module #AnsibleRequires -CSharpUtil should not end in .cs: "%s"' % module_name
)
# also accept the legacy #POWERSHELL_COMMON replacer signal
if not found_requires and REPLACER_WINDOWS not in self.text:
self.reporter.error(
path=self.object_path,
code='missing-module-utils-import-csharp-requirements',
msg='No Ansible.ModuleUtils or C# Ansible util requirements/imports found'
)
def _find_ps_docs_py_file(self):
if self.object_name in self.PS_DOC_REJECTLIST:
return
py_path = self.path.replace('.ps1', '.py')
if not os.path.isfile(py_path):
self.reporter.error(
path=self.object_path,
code='missing-python-doc',
msg='Missing python documentation file'
)
return py_path
def _get_docs(self):
docs = {
'DOCUMENTATION': {
'value': None,
'lineno': 0,
'end_lineno': 0,
},
'EXAMPLES': {
'value': None,
'lineno': 0,
'end_lineno': 0,
},
'RETURN': {
'value': None,
'lineno': 0,
'end_lineno': 0,
},
}
for child in self.ast.body:
if isinstance(child, ast.Assign):
for grandchild in child.targets:
if not isinstance(grandchild, ast.Name):
continue
if grandchild.id == 'DOCUMENTATION':
docs['DOCUMENTATION']['value'] = child.value.s
docs['DOCUMENTATION']['lineno'] = child.lineno
docs['DOCUMENTATION']['end_lineno'] = (
child.lineno + len(child.value.s.splitlines())
)
elif grandchild.id == 'EXAMPLES':
docs['EXAMPLES']['value'] = child.value.s
docs['EXAMPLES']['lineno'] = child.lineno
docs['EXAMPLES']['end_lineno'] = (
child.lineno + len(child.value.s.splitlines())
)
elif grandchild.id == 'RETURN':
docs['RETURN']['value'] = child.value.s
docs['RETURN']['lineno'] = child.lineno
docs['RETURN']['end_lineno'] = (
child.lineno + len(child.value.s.splitlines())
)
return docs
def _validate_docs_schema(self, doc, schema, name, error_code):
# TODO: Add line/col
errors = []
try:
schema(doc)
except Exception as e:
for error in e.errors:
error.data = doc
errors.extend(e.errors)
for error in errors:
path = [str(p) for p in error.path]
local_error_code = getattr(error, 'ansible_error_code', error_code)
if isinstance(error.data, dict):
error_message = humanize_error(error.data, error)
else:
error_message = error
if path:
combined_path = '%s.%s' % (name, '.'.join(path))
else:
combined_path = name
self.reporter.error(
path=self.object_path,
code=local_error_code,
msg='%s: %s' % (combined_path, error_message)
)
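    # For illustration: a schema failure at doc['options']['state']['description']
    # is reported with combined_path 'DOCUMENTATION.options.state.description'
    # when name is 'DOCUMENTATION'.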
def _validate_docs(self):
doc_info = self._get_docs()
doc = None
documentation_exists = False
examples_exist = False
returns_exist = False
# We have three ways of marking deprecated/removed files. Have to check each one
# individually and then make sure they all agree
filename_deprecated_or_removed = False
deprecated = False
removed = False
doc_deprecated = None # doc legally might not exist
routing_says_deprecated = False
if self.object_name.startswith('_') and not os.path.islink(self.object_path):
filename_deprecated_or_removed = True
# We are testing a collection
if self.routing:
routing_deprecation = self.routing.get('plugin_routing', {}).get('modules', {}).get(self.name, {}).get('deprecation', {})
if routing_deprecation:
# meta/runtime.yml says this is deprecated
routing_says_deprecated = True
deprecated = True
if not removed:
if not bool(doc_info['DOCUMENTATION']['value']):
self.reporter.error(
path=self.object_path,
code='missing-documentation',
msg='No DOCUMENTATION provided'
)
else:
documentation_exists = True
doc, errors, traces = parse_yaml(
doc_info['DOCUMENTATION']['value'],
doc_info['DOCUMENTATION']['lineno'],
self.name, 'DOCUMENTATION'
)
if doc:
add_collection_to_versions_and_dates(doc, self.collection_name, is_module=True)
for error in errors:
self.reporter.error(
path=self.object_path,
code='documentation-syntax-error',
**error
)
for trace in traces:
self.reporter.trace(
path=self.object_path,
tracebk=trace
)
if not errors and not traces:
missing_fragment = False
with CaptureStd():
try:
get_docstring(self.path, fragment_loader, verbose=True,
collection_name=self.collection_name, is_module=True)
except AssertionError:
fragment = doc['extends_documentation_fragment']
self.reporter.error(
path=self.object_path,
code='missing-doc-fragment',
msg='DOCUMENTATION fragment missing: %s' % fragment
)
missing_fragment = True
except Exception as e:
self.reporter.trace(
path=self.object_path,
tracebk=traceback.format_exc()
)
self.reporter.error(
path=self.object_path,
code='documentation-error',
msg='Unknown DOCUMENTATION error, see TRACE: %s' % e
)
if not missing_fragment:
add_fragments(doc, self.object_path, fragment_loader=fragment_loader, is_module=True)
if 'options' in doc and doc['options'] is None:
self.reporter.error(
path=self.object_path,
code='invalid-documentation-options',
msg='DOCUMENTATION.options must be a dictionary/hash when used',
)
if 'deprecated' in doc and doc.get('deprecated'):
doc_deprecated = True
doc_deprecation = doc['deprecated']
documentation_collection = doc_deprecation.get('removed_from_collection')
if documentation_collection != self.collection_name:
self.reporter.error(
path=self.object_path,
code='deprecation-wrong-collection',
msg='"DOCUMENTATION.deprecation.removed_from_collection must be the current collection name: %r vs. %r' % (
documentation_collection, self.collection_name)
)
else:
doc_deprecated = False
if os.path.islink(self.object_path):
# This module has an alias, which we can tell as it's a symlink
# Rather than checking for `module: $filename` we need to check against the true filename
self._validate_docs_schema(
doc,
doc_schema(
os.readlink(self.object_path).split('.')[0],
for_collection=bool(self.collection),
deprecated_module=deprecated,
),
'DOCUMENTATION',
'invalid-documentation',
)
else:
# This is the normal case
self._validate_docs_schema(
doc,
doc_schema(
self.object_name.split('.')[0],
for_collection=bool(self.collection),
deprecated_module=deprecated,
),
'DOCUMENTATION',
'invalid-documentation',
)
if not self.collection:
existing_doc = self._check_for_new_args(doc)
self._check_version_added(doc, existing_doc)
if not bool(doc_info['EXAMPLES']['value']):
self.reporter.error(
path=self.object_path,
code='missing-examples',
msg='No EXAMPLES provided'
)
else:
_doc, errors, traces = parse_yaml(doc_info['EXAMPLES']['value'],
doc_info['EXAMPLES']['lineno'],
self.name, 'EXAMPLES', load_all=True)
for error in errors:
self.reporter.error(
path=self.object_path,
code='invalid-examples',
**error
)
for trace in traces:
self.reporter.trace(
path=self.object_path,
tracebk=trace
)
if not bool(doc_info['RETURN']['value']):
if self._is_new_module():
self.reporter.error(
path=self.object_path,
code='missing-return',
msg='No RETURN provided'
)
else:
self.reporter.warning(
path=self.object_path,
code='missing-return-legacy',
msg='No RETURN provided'
)
else:
data, errors, traces = parse_yaml(doc_info['RETURN']['value'],
doc_info['RETURN']['lineno'],
self.name, 'RETURN')
if data:
add_collection_to_versions_and_dates(data, self.collection_name, is_module=True, return_docs=True)
self._validate_docs_schema(data, return_schema(for_collection=bool(self.collection)),
'RETURN', 'return-syntax-error')
for error in errors:
self.reporter.error(
path=self.object_path,
code='return-syntax-error',
**error
)
for trace in traces:
self.reporter.trace(
path=self.object_path,
tracebk=trace
)
# Check for mismatched deprecation
if not self.collection:
mismatched_deprecation = True
if not (filename_deprecated_or_removed or removed or deprecated or doc_deprecated):
mismatched_deprecation = False
else:
if (filename_deprecated_or_removed and deprecated and doc_deprecated):
mismatched_deprecation = False
if (filename_deprecated_or_removed and removed and not (documentation_exists or examples_exist or returns_exist)):
mismatched_deprecation = False
if mismatched_deprecation:
self.reporter.error(
path=self.object_path,
code='deprecation-mismatch',
                    msg='Module deprecation/removed must agree in documentation, by prepending filename with'
' "_", and setting DOCUMENTATION.deprecated for deprecation or by removing all'
' documentation for removed'
)
else:
# We are testing a collection
if self.object_name.startswith('_'):
self.reporter.error(
path=self.object_path,
code='collections-no-underscore-on-deprecation',
msg='Deprecated content in collections MUST NOT start with "_", update meta/runtime.yml instead',
)
if not (doc_deprecated == routing_says_deprecated):
# DOCUMENTATION.deprecated and meta/runtime.yml disagree
self.reporter.error(
path=self.object_path,
code='deprecation-mismatch',
msg='"meta/runtime.yml" and DOCUMENTATION.deprecation do not agree.'
)
elif routing_says_deprecated:
# Both DOCUMENTATION.deprecated and meta/runtime.yml agree that the module is deprecated.
# Make sure they give the same version or date.
routing_date = routing_deprecation.get('removal_date')
routing_version = routing_deprecation.get('removal_version')
# The versions and dates in the module documentation are auto-tagged, so remove the tag
# to make comparison possible and to avoid confusing the user.
documentation_date = doc_deprecation.get('removed_at_date')
documentation_version = doc_deprecation.get('removed_in')
if not compare_dates(routing_date, documentation_date):
self.reporter.error(
path=self.object_path,
code='deprecation-mismatch',
msg='"meta/runtime.yml" and DOCUMENTATION.deprecation do not agree on removal date: %r vs. %r' % (
routing_date, documentation_date)
)
if routing_version != documentation_version:
self.reporter.error(
path=self.object_path,
code='deprecation-mismatch',
msg='"meta/runtime.yml" and DOCUMENTATION.deprecation do not agree on removal version: %r vs. %r' % (
routing_version, documentation_version)
)
# In the future we should error if ANSIBLE_METADATA exists in a collection
return doc_info, doc
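# Cross-checks version_added: existing modules must keep the value recorded on the base branch; new modules must use the upcoming ansible-core release.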
def _check_version_added(self, doc, existing_doc):
version_added_raw = doc.get('version_added')
try:
collection_name = doc.get('version_added_collection')
version_added = self._create_strict_version(
str(version_added_raw or '0.0'),
collection_name=collection_name)
except ValueError as e:
version_added = version_added_raw or '0.0'
if self._is_new_module() or version_added != 'historical':
# already reported during schema validation, except:
if version_added == 'historical':
self.reporter.error(
path=self.object_path,
code='module-invalid-version-added',
msg='version_added is not a valid version number: %r. Error: %s' % (version_added, e)
)
return
if existing_doc and str(version_added_raw) != str(existing_doc.get('version_added')):
self.reporter.error(
path=self.object_path,
code='module-incorrect-version-added',
msg='version_added should be %r. Currently %r' % (existing_doc.get('version_added'), version_added_raw)
)
if not self._is_new_module():
return
should_be = '.'.join(ansible_version.split('.')[:2])
strict_ansible_version = self._create_strict_version(should_be, collection_name='ansible.builtin')
if (version_added < strict_ansible_version or
strict_ansible_version < version_added):
self.reporter.error(
path=self.object_path,
code='module-incorrect-version-added',
msg='version_added should be %r. Currently %r' % (should_be, version_added_raw)
)
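# Imports the module to capture its AnsibleModule(...) invocation, validates the kwargs against the schema, then hands off to the argument_spec/docs comparison.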
def _validate_ansible_module_call(self, docs):
try:
spec, args, kwargs = get_argument_spec(self.path, self.collection)
except AnsibleModuleNotInitialized:
self.reporter.error(
path=self.object_path,
code='ansible-module-not-initialized',
msg="Execution of the module did not result in initialization of AnsibleModule",
)
return
except AnsibleModuleImportError as e:
self.reporter.error(
path=self.object_path,
code='import-error',
msg="Exception attempting to import module for argument_spec introspection, '%s'" % e
)
self.reporter.trace(
path=self.object_path,
tracebk=traceback.format_exc()
)
return
self._validate_docs_schema(kwargs, ansible_module_kwargs_schema(for_collection=bool(self.collection)),
'AnsibleModule', 'invalid-ansiblemodule-schema')
self._validate_argument_spec(docs, spec, kwargs)
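# Generic checks for list-of-lists options (mutually_exclusive, required_together, required_one_of): each inner list must hold unique strings that all exist in the argument_spec.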
def _validate_list_of_module_args(self, name, terms, spec, context):
if terms is None:
return
if not isinstance(terms, (list, tuple)):
# This is already reported by schema checking
return
for check in terms:
if not isinstance(check, (list, tuple)):
# This is already reported by schema checking
continue
bad_term = False
for term in check:
if not isinstance(term, string_types):
msg = name
if context:
msg += " found in %s" % " -> ".join(context)
msg += " must contain strings in the lists or tuples; found value %r" % (term, )
self.reporter.error(
path=self.object_path,
code=name + '-type',
msg=msg,
)
bad_term = True
if bad_term:
continue
if len(set(check)) != len(check):
msg = name
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has repeated terms"
self.reporter.error(
path=self.object_path,
code=name + '-collision',
msg=msg,
)
if not set(check) <= set(spec):
msg = name
if context:
msg += " found in %s" % " -> ".join(context)
msg += " contains terms which are not part of argument_spec: %s" % ", ".join(sorted(set(check).difference(set(spec))))
self.reporter.error(
path=self.object_path,
code=name + '-unknown',
msg=msg,
)
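# Validates required_if entries of the form (key, value, requirements[, is_one_of]): types, duplicates, membership in the argument_spec, and that the trigger value fits the key's declared type.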
def _validate_required_if(self, terms, spec, context, module):
if terms is None:
return
if not isinstance(terms, (list, tuple)):
# This is already reported by schema checking
return
for check in terms:
if not isinstance(check, (list, tuple)) or len(check) not in [3, 4]:
# This is already reported by schema checking
continue
if len(check) == 4 and not isinstance(check[3], bool):
msg = "required_if"
if context:
msg += " found in %s" % " -> ".join(context)
msg += " must have forth value omitted or of type bool; got %r" % (check[3], )
self.reporter.error(
path=self.object_path,
code='required_if-is_one_of-type',
msg=msg,
)
requirements = check[2]
if not isinstance(requirements, (list, tuple)):
msg = "required_if"
if context:
msg += " found in %s" % " -> ".join(context)
msg += " must have third value (requirements) being a list or tuple; got type %r" % (requirements, )
self.reporter.error(
path=self.object_path,
code='required_if-requirements-type',
msg=msg,
)
continue
bad_term = False
for term in requirements:
if not isinstance(term, string_types):
msg = "required_if"
if context:
msg += " found in %s" % " -> ".join(context)
msg += " must have only strings in third value (requirements); got %r" % (term, )
self.reporter.error(
path=self.object_path,
code='required_if-requirements-type',
msg=msg,
)
bad_term = True
if bad_term:
continue
if len(set(requirements)) != len(requirements):
msg = "required_if"
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has repeated terms in requirements"
self.reporter.error(
path=self.object_path,
code='required_if-requirements-collision',
msg=msg,
)
if not set(requirements) <= set(spec):
msg = "required_if"
if context:
msg += " found in %s" % " -> ".join(context)
msg += " contains terms in requirements which are not part of argument_spec: %s" % ", ".join(sorted(set(requirements).difference(set(spec))))
self.reporter.error(
path=self.object_path,
code='required_if-requirements-unknown',
msg=msg,
)
key = check[0]
if key not in spec:
msg = "required_if"
if context:
msg += " found in %s" % " -> ".join(context)
msg += " must have its key %s in argument_spec" % key
self.reporter.error(
path=self.object_path,
code='required_if-unknown-key',
msg=msg,
)
continue
if key in requirements:
msg = "required_if"
if context:
msg += " found in %s" % " -> ".join(context)
msg += " contains its key %s in requirements" % key
self.reporter.error(
path=self.object_path,
code='required_if-key-in-requirements',
msg=msg,
)
value = check[1]
if value is not None:
_type = spec[key].get('type', 'str')
if callable(_type):
_type_checker = _type
else:
_type_checker = module._CHECK_ARGUMENT_TYPES_DISPATCHER.get(_type)
try:
with CaptureStd():
dummy = _type_checker(value)
except (Exception, SystemExit):
msg = "required_if"
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has value %r which does not fit to %s's parameter type %r" % (value, key, _type)
self.reporter.error(
path=self.object_path,
code='required_if-value-type',
msg=msg,
)
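# Validates required_by, a mapping of option -> options it requires: both sides must exist in the argument_spec, and the lists must not repeat terms or include the key itself.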
def _validate_required_by(self, terms, spec, context):
if terms is None:
return
if not isinstance(terms, Mapping):
# This is already reported by schema checking
return
for key, value in terms.items():
if isinstance(value, string_types):
value = [value]
if not isinstance(value, (list, tuple)):
# This is already reported by schema checking
continue
for term in value:
if not isinstance(term, string_types):
# This is already reported by schema checking
continue
if len(set(value)) != len(value) or key in value:
msg = "required_by"
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has repeated terms"
self.reporter.error(
path=self.object_path,
code='required_by-collision',
msg=msg,
)
if not set(value) <= set(spec) or key not in spec:
msg = "required_by"
if context:
msg += " found in %s" % " -> ".join(context)
msg += " contains terms which are not part of argument_spec: %s" % ", ".join(sorted(set(value).difference(set(spec))))
self.reporter.error(
path=self.object_path,
code='required_by-unknown',
msg=msg,
)
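# Core documentation<->argument_spec comparison: defaults, types, choices, required flags, elements and suboptions, recursing into nested options via context.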
def _validate_argument_spec(self, docs, spec, kwargs, context=None, last_context_spec=None):
if not self.analyze_arg_spec:
return
if docs is None:
docs = {}
if context is None:
context = []
if last_context_spec is None:
last_context_spec = kwargs
try:
if not context:
add_fragments(docs, self.object_path, fragment_loader=fragment_loader, is_module=True)
except Exception:
# Cannot merge fragments
return
# Use this to access type checkers later
module = NoArgsAnsibleModule({})
self._validate_list_of_module_args('mutually_exclusive', last_context_spec.get('mutually_exclusive'), spec, context)
self._validate_list_of_module_args('required_together', last_context_spec.get('required_together'), spec, context)
self._validate_list_of_module_args('required_one_of', last_context_spec.get('required_one_of'), spec, context)
self._validate_required_if(last_context_spec.get('required_if'), spec, context, module)
self._validate_required_by(last_context_spec.get('required_by'), spec, context)
provider_args = set()
args_from_argspec = set()
deprecated_args_from_argspec = set()
doc_options = docs.get('options', {})
if doc_options is None:
doc_options = {}
for arg, data in spec.items():
restricted_argument_names = ('message', 'syslog_facility')
if arg.lower() in restricted_argument_names:
msg = "Argument '%s' in argument_spec " % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += "must not be one of %s as it is used " \
"internally by Ansible Core Engine" % (",".join(restricted_argument_names))
self.reporter.error(
path=self.object_path,
code='invalid-argument-name',
msg=msg,
)
continue
if 'aliases' in data:
for al in data['aliases']:
if al.lower() in restricted_argument_names:
msg = "Argument alias '%s' in argument_spec " % al
if context:
msg += " found in %s" % " -> ".join(context)
msg += "must not be one of %s as it is used " \
"internally by Ansible Core Engine" % (",".join(restricted_argument_names))
self.reporter.error(
path=self.object_path,
code='invalid-argument-name',
msg=msg,
)
continue
if not isinstance(data, dict):
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " must be a dictionary/hash when used"
self.reporter.error(
path=self.object_path,
code='invalid-argument-spec',
msg=msg,
)
continue
removed_at_date = data.get('removed_at_date', None)
if removed_at_date is not None:
try:
if parse_isodate(removed_at_date, allow_date=False) < datetime.date.today():
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has a removed_at_date '%s' before today" % removed_at_date
self.reporter.error(
path=self.object_path,
code='deprecated-date',
msg=msg,
)
except ValueError:
# This should only happen when removed_at_date is not in ISO format. Since schema
# validation already reported this as an error, don't report it a second time.
pass
deprecated_aliases = data.get('deprecated_aliases', None)
if deprecated_aliases is not None:
for deprecated_alias in deprecated_aliases:
if 'name' in deprecated_alias and 'date' in deprecated_alias:
try:
date = deprecated_alias['date']
if parse_isodate(date, allow_date=False) < datetime.date.today():
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has deprecated aliases '%s' with removal date '%s' before today" % (
deprecated_alias['name'], deprecated_alias['date'])
self.reporter.error(
path=self.object_path,
code='deprecated-date',
msg=msg,
)
except ValueError:
# This should only happen when deprecated_alias['date'] is not in ISO format. Since
# schema validation already reported this as an error, don't report it a second
# time.
pass
has_version = False
if self.collection and self.collection_version is not None:
compare_version = self.collection_version
version_of_what = "this collection (%s)" % self.collection_version_str
code_prefix = 'collection'
has_version = True
elif not self.collection:
compare_version = LOOSE_ANSIBLE_VERSION
version_of_what = "Ansible (%s)" % ansible_version
code_prefix = 'ansible'
has_version = True
removed_in_version = data.get('removed_in_version', None)
if removed_in_version is not None:
try:
collection_name = data.get('removed_from_collection')
removed_in = self._create_version(str(removed_in_version), collection_name=collection_name)
if has_version and collection_name == self.collection_name and compare_version >= removed_in:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has a deprecated removed_in_version %r," % removed_in_version
msg += " i.e. the version is less than or equal to the current version of %s" % version_of_what
self.reporter.error(
path=self.object_path,
code=code_prefix + '-deprecated-version',
msg=msg,
)
except ValueError as e:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has an invalid removed_in_version number %r: %s" % (removed_in_version, e)
self.reporter.error(
path=self.object_path,
code='invalid-deprecated-version',
msg=msg,
)
except TypeError:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has an invalid removed_in_version number %r: " % (removed_in_version, )
msg += " error while comparing to version of %s" % version_of_what
self.reporter.error(
path=self.object_path,
code='invalid-deprecated-version',
msg=msg,
)
if deprecated_aliases is not None:
for deprecated_alias in deprecated_aliases:
if 'name' in deprecated_alias and 'version' in deprecated_alias:
try:
collection_name = deprecated_alias.get('collection_name')
version = self._create_version(str(deprecated_alias['version']), collection_name=collection_name)
if has_version and collection_name == self.collection_name and compare_version >= version:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has deprecated aliases '%s' with removal in version %r," % (
deprecated_alias['name'], deprecated_alias['version'])
msg += " i.e. the version is less than or equal to the current version of %s" % version_of_what
self.reporter.error(
path=self.object_path,
code=code_prefix + '-deprecated-version',
msg=msg,
)
except ValueError as e:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has deprecated aliases '%s' with invalid removal version %r: %s" % (
deprecated_alias['name'], deprecated_alias['version'], e)
self.reporter.error(
path=self.object_path,
code='invalid-deprecated-version',
msg=msg,
)
except TypeError:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has deprecated aliases '%s' with invalid removal version %r:" % (
deprecated_alias['name'], deprecated_alias['version'])
msg += " error while comparing to version of %s" % version_of_what
self.reporter.error(
path=self.object_path,
code='invalid-deprecated-version',
msg=msg,
)
aliases = data.get('aliases', [])
if arg in aliases:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " is specified as its own alias"
self.reporter.error(
path=self.object_path,
code='parameter-alias-self',
msg=msg
)
if len(aliases) > len(set(aliases)):
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has at least one alias specified multiple times in aliases"
self.reporter.error(
path=self.object_path,
code='parameter-alias-repeated',
msg=msg
)
if not context and arg == 'state':
bad_states = set(['list', 'info', 'get']) & set(data.get('choices', set()))
for bad_state in bad_states:
self.reporter.error(
path=self.object_path,
code='parameter-state-invalid-choice',
msg="Argument 'state' includes the value '%s' as a choice" % bad_state)
if not data.get('removed_in_version', None) and not data.get('removed_at_date', None):
args_from_argspec.add(arg)
args_from_argspec.update(aliases)
else:
deprecated_args_from_argspec.add(arg)
deprecated_args_from_argspec.update(aliases)
if arg == 'provider' and self.object_path.startswith('lib/ansible/modules/network/'):
if data.get('options') is not None and not isinstance(data.get('options'), Mapping):
self.reporter.error(
path=self.object_path,
code='invalid-argument-spec-options',
msg="Argument 'options' in argument_spec['provider'] must be a dictionary/hash when used",
)
elif data.get('options'):
# Record provider options from network modules, for later comparison
for provider_arg, provider_data in data.get('options', {}).items():
provider_args.add(provider_arg)
provider_args.update(provider_data.get('aliases', []))
if data.get('required') and data.get('default', object) != object:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " is marked as required but specifies a default. Arguments with a" \
" default should not be marked as required"
self.reporter.error(
path=self.object_path,
code='no-default-for-required-parameter',
msg=msg
)
if arg in provider_args:
# Provider args are being removed from network module top level
# don't validate docs<->arg_spec checks below
continue
_type = data.get('type', 'str')
if callable(_type):
_type_checker = _type
else:
_type_checker = module._CHECK_ARGUMENT_TYPES_DISPATCHER.get(_type)
_elements = data.get('elements')
if (_type == 'list') and not _elements:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " defines type as list but elements is not defined"
self.reporter.error(
path=self.object_path,
code='parameter-list-no-elements',
msg=msg
)
if _elements:
if not callable(_elements):
module._CHECK_ARGUMENT_TYPES_DISPATCHER.get(_elements)
if _type != 'list':
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " defines elements as %s but it is valid only when value of parameter type is list" % _elements
self.reporter.error(
path=self.object_path,
code='parameter-invalid-elements',
msg=msg
)
arg_default = None
if 'default' in data and not is_empty(data['default']):
try:
with CaptureStd():
arg_default = _type_checker(data['default'])
except (Exception, SystemExit):
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " defines default as (%r) but this is incompatible with parameter type %r" % (data['default'], _type)
self.reporter.error(
path=self.object_path,
code='incompatible-default-type',
msg=msg
)
continue
elif data.get('default') is None and _type == 'bool' and 'options' not in data:
arg_default = False
doc_options_args = []
for alias in sorted(set([arg] + list(aliases))):
if alias in doc_options:
doc_options_args.append(alias)
if len(doc_options_args) == 0:
# Undocumented arguments will be handled later (search for undocumented-parameter)
doc_options_arg = {}
else:
doc_options_arg = doc_options[doc_options_args[0]]
if len(doc_options_args) > 1:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " with aliases %s is documented multiple times, namely as %s" % (
", ".join([("'%s'" % alias) for alias in aliases]),
", ".join([("'%s'" % alias) for alias in doc_options_args])
)
self.reporter.error(
path=self.object_path,
code='parameter-documented-multiple-times',
msg=msg
)
try:
doc_default = None
if 'default' in doc_options_arg and not is_empty(doc_options_arg['default']):
with CaptureStd():
doc_default = _type_checker(doc_options_arg['default'])
elif doc_options_arg.get('default') is None and _type == 'bool' and 'suboptions' not in doc_options_arg:
doc_default = False
except (Exception, SystemExit):
msg = "Argument '%s' in documentation" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " defines default as (%r) but this is incompatible with parameter type %r" % (doc_options_arg.get('default'), _type)
self.reporter.error(
path=self.object_path,
code='doc-default-incompatible-type',
msg=msg
)
continue
if arg_default != doc_default:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " defines default as (%r) but documentation defines default as (%r)" % (arg_default, doc_default)
self.reporter.error(
path=self.object_path,
code='doc-default-does-not-match-spec',
msg=msg
)
doc_type = doc_options_arg.get('type')
if 'type' in data and data['type'] is not None:
if doc_type is None:
if not arg.startswith('_'): # hidden parameter, for example _raw_params
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " defines type as %r but documentation doesn't define type" % (data['type'])
self.reporter.error(
path=self.object_path,
code='parameter-type-not-in-doc',
msg=msg
)
elif data['type'] != doc_type:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " defines type as %r but documentation defines type as %r" % (data['type'], doc_type)
self.reporter.error(
path=self.object_path,
code='doc-type-does-not-match-spec',
msg=msg
)
else:
if doc_type is None:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " uses default type ('str') but documentation doesn't define type"
self.reporter.error(
path=self.object_path,
code='doc-missing-type',
msg=msg
)
elif doc_type != 'str':
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " implies type as 'str' but documentation defines as %r" % doc_type
self.reporter.error(
path=self.object_path,
code='implied-parameter-type-mismatch',
msg=msg
)
doc_choices = []
try:
for choice in doc_options_arg.get('choices', []):
try:
with CaptureStd():
doc_choices.append(_type_checker(choice))
except (Exception, SystemExit):
msg = "Argument '%s' in documentation" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " defines choices as (%r) but this is incompatible with argument type %r" % (choice, _type)
self.reporter.error(
path=self.object_path,
code='doc-choices-incompatible-type',
msg=msg
)
raise StopIteration()
except StopIteration:
continue
arg_choices = []
try:
for choice in data.get('choices', []):
try:
with CaptureStd():
arg_choices.append(_type_checker(choice))
except (Exception, SystemExit):
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " defines choices as (%r) but this is incompatible with argument type %r" % (choice, _type)
self.reporter.error(
path=self.object_path,
code='incompatible-choices',
msg=msg
)
raise StopIteration()
except StopIteration:
continue
if not compare_unordered_lists(arg_choices, doc_choices):
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " defines choices as (%r) but documentation defines choices as (%r)" % (arg_choices, doc_choices)
self.reporter.error(
path=self.object_path,
code='doc-choices-do-not-match-spec',
msg=msg
)
doc_required = doc_options_arg.get('required', False)
data_required = data.get('required', False)
if (doc_required or data_required) and not (doc_required and data_required):
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
if doc_required:
msg += " is not required, but is documented as being required"
else:
msg += " is required, but is not documented as being required"
self.reporter.error(
path=self.object_path,
code='doc-required-mismatch',
msg=msg
)
doc_elements = doc_options_arg.get('elements', None)
doc_type = doc_options_arg.get('type', 'str')
data_elements = data.get('elements', None)
if (doc_elements and not doc_type == 'list'):
msg = "Argument '%s' " % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " defines parameter elements as %s but it is valid only when value of parameter type is list" % doc_elements
self.reporter.error(
path=self.object_path,
code='doc-elements-invalid',
msg=msg
)
if (doc_elements or data_elements) and not (doc_elements == data_elements):
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
if data_elements:
msg += " specifies elements as %s," % data_elements
else:
msg += " does not specify elements,"
if doc_elements:
msg += "but elements is documented as being %s" % doc_elements
else:
msg += "but elements is not documented"
self.reporter.error(
path=self.object_path,
code='doc-elements-mismatch',
msg=msg
)
spec_suboptions = data.get('options')
doc_suboptions = doc_options_arg.get('suboptions', {})
if spec_suboptions:
if not doc_suboptions:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has sub-options but documentation does not define it"
self.reporter.error(
path=self.object_path,
code='missing-suboption-docs',
msg=msg
)
self._validate_argument_spec({'options': doc_suboptions}, spec_suboptions, kwargs,
context=context + [arg], last_context_spec=data)
for arg in args_from_argspec:
if not str(arg).isidentifier():
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " is not a valid python identifier"
self.reporter.error(
path=self.object_path,
code='parameter-invalid',
msg=msg
)
if docs:
args_from_docs = set()
for arg, data in doc_options.items():
args_from_docs.add(arg)
args_from_docs.update(data.get('aliases', []))
args_missing_from_docs = args_from_argspec.difference(args_from_docs)
docs_missing_from_args = args_from_docs.difference(args_from_argspec | deprecated_args_from_argspec)
for arg in args_missing_from_docs:
if arg in provider_args:
# Provider args are being removed from network module top level
# So they are likely not documented on purpose
continue
msg = "Argument '%s'" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " is listed in the argument_spec, but not documented in the module documentation"
self.reporter.error(
path=self.object_path,
code='undocumented-parameter',
msg=msg
)
for arg in docs_missing_from_args:
msg = "Argument '%s'" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " is listed in DOCUMENTATION.options, but not accepted by the module argument_spec"
self.reporter.error(
path=self.object_path,
code='nonexistent-parameter-documented',
msg=msg
)
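# Compares the module docs against the version on the base branch to catch options whose version_added was added, changed, or mis-attributed.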
def _check_for_new_args(self, doc):
if not self.base_branch or self._is_new_module():
return
with CaptureStd():
try:
existing_doc, dummy_examples, dummy_return, existing_metadata = get_docstring(
self.base_module, fragment_loader, verbose=True, collection_name=self.collection_name, is_module=True)
existing_options = existing_doc.get('options', {}) or {}
except AssertionError:
fragment = doc['extends_documentation_fragment']
self.reporter.warning(
path=self.object_path,
code='missing-existing-doc-fragment',
msg='Pre-existing DOCUMENTATION fragment missing: %s' % fragment
)
return
except Exception as e:
self.reporter.warning_trace(
path=self.object_path,
tracebk=e
)
self.reporter.warning(
path=self.object_path,
code='unknown-doc-fragment',
msg=('Unknown pre-existing DOCUMENTATION error, see TRACE. Submodule refs may need to be updated')
)
return
try:
mod_collection_name = existing_doc.get('version_added_collection')
mod_version_added = self._create_strict_version(
str(existing_doc.get('version_added', '0.0')),
collection_name=mod_collection_name)
except ValueError:
mod_collection_name = self.collection_name
mod_version_added = self._create_strict_version('0.0')
options = doc.get('options', {}) or {}
should_be = '.'.join(ansible_version.split('.')[:2])
strict_ansible_version = self._create_strict_version(should_be, collection_name='ansible.builtin')
for option, details in options.items():
try:
names = [option] + details.get('aliases', [])
except (TypeError, AttributeError):
# Reporting of this syntax error will be handled by schema validation.
continue
if any(name in existing_options for name in names):
# The option already existed. Make sure version_added didn't change.
for name in names:
existing_collection_name = existing_options.get(name, {}).get('version_added_collection')
existing_version = existing_options.get(name, {}).get('version_added')
if existing_version:
break
current_collection_name = details.get('version_added_collection')
current_version = details.get('version_added')
if current_collection_name != existing_collection_name:
self.reporter.error(
path=self.object_path,
code='option-incorrect-version-added-collection',
msg=('version_added for existing option (%s) should '
'belong to collection %r. Currently belongs to %r' %
(option, current_collection_name, existing_collection_name))
)
elif str(current_version) != str(existing_version):
self.reporter.error(
path=self.object_path,
code='option-incorrect-version-added',
msg=('version_added for existing option (%s) should '
'be %r. Currently %r' %
(option, existing_version, current_version))
)
continue
try:
collection_name = details.get('version_added_collection')
version_added = self._create_strict_version(
str(details.get('version_added', '0.0')),
collection_name=collection_name)
except ValueError as e:
# already reported during schema validation
continue
if collection_name != self.collection_name:
continue
if (strict_ansible_version != mod_version_added and
(version_added < strict_ansible_version or
strict_ansible_version < version_added)):
self.reporter.error(
path=self.object_path,
code='option-incorrect-version-added',
msg=('version_added for new option (%s) should '
'be %r. Currently %r' %
(option, should_be, version_added))
)
return existing_doc
@staticmethod
def is_blacklisted(path):
base_name = os.path.basename(path)
file_name = os.path.splitext(base_name)[0]
if file_name.startswith('_') and os.path.islink(path):
return True
if not frozenset((base_name, file_name)).isdisjoint(ModuleValidator.REJECTLIST):
return True
for pat in ModuleValidator.REJECTLIST_PATTERNS:
if fnmatch(base_name, pat):
return True
return False
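# Entry point for a single module: runs the documentation, argument_spec, import and style checks, with special handling for deprecated (docs-only) modules.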
def validate(self):
super(ModuleValidator, self).validate()
if not self._python_module() and not self._powershell_module():
self.reporter.error(
path=self.object_path,
code='invalid-extension',
msg=('Official Ansible modules must have a .py '
'extension for python modules or a .ps1 '
'for powershell modules')
)
self._python_module_override = True
if self._python_module() and self.ast is None:
self.reporter.error(
path=self.object_path,
code='python-syntax-error',
msg='Python SyntaxError while parsing module'
)
try:
compile(self.text, self.path, 'exec')
except Exception:
self.reporter.trace(
path=self.object_path,
tracebk=traceback.format_exc()
)
return
end_of_deprecation_should_be_removed_only = False
if self._python_module():
doc_info, docs = self._validate_docs()
# See if current version => deprecated.removed_in, ie, should be docs only
if docs and docs.get('deprecated', False):
if 'removed_in' in docs['deprecated']:
removed_in = None
collection_name = docs['deprecated'].get('removed_from_collection')
version = docs['deprecated']['removed_in']
if collection_name != self.collection_name:
self.reporter.error(
path=self.object_path,
code='invalid-module-deprecation-source',
msg=('The deprecation version for a module must be added in this collection')
)
else:
try:
removed_in = self._create_strict_version(str(version), collection_name=collection_name)
except ValueError as e:
self.reporter.error(
path=self.object_path,
code='invalid-module-deprecation-version',
msg=('The deprecation version %r cannot be parsed: %s' % (version, e))
)
if removed_in:
if not self.collection:
strict_ansible_version = self._create_strict_version(
'.'.join(ansible_version.split('.')[:2]), self.collection_name)
end_of_deprecation_should_be_removed_only = strict_ansible_version >= removed_in
elif self.collection_version:
strict_ansible_version = self.collection_version
end_of_deprecation_should_be_removed_only = strict_ansible_version >= removed_in
# handle deprecation by date
if 'removed_at_date' in docs['deprecated']:
try:
removed_at_date = docs['deprecated']['removed_at_date']
if parse_isodate(removed_at_date, allow_date=True) < datetime.date.today():
msg = "Module's deprecated.removed_at_date date '%s' is before today" % removed_at_date
self.reporter.error(path=self.object_path, code='deprecated-date', msg=msg)
except ValueError:
# This happens if the date cannot be parsed. This is already checked by the schema.
pass
if self._python_module() and not self._just_docs() and not end_of_deprecation_should_be_removed_only:
self._validate_ansible_module_call(docs)
self._check_for_sys_exit()
self._find_blacklist_imports()
main = self._find_main_call()
self._find_module_utils(main)
self._find_has_import()
first_callable = self._get_first_callable()
self._ensure_imports_below_docs(doc_info, first_callable)
self._check_for_subprocess()
self._check_for_os_call()
if self._powershell_module():
if self.basename in self.PS_DOC_REJECTLIST:
return
self._validate_ps_replacers()
docs_path = self._find_ps_docs_py_file()
# We can only validate PowerShell arg spec if it is using the new Ansible.Basic.AnsibleModule util
pattern = r'(?im)^#\s*ansiblerequires\s+\-csharputil\s*Ansible\.Basic'
if re.search(pattern, self.text) and self.object_name not in self.PS_ARG_VALIDATE_REJECTLIST:
with ModuleValidator(docs_path, base_branch=self.base_branch, git_cache=self.git_cache) as docs_mv:
docs = docs_mv._validate_docs()[1]
self._validate_ansible_module_call(docs)
self._check_gpl3_header()
if not self._just_docs() and not end_of_deprecation_should_be_removed_only:
self._check_interpreter(powershell=self._powershell_module())
self._check_type_instead_of_isinstance(
powershell=self._powershell_module()
)
if end_of_deprecation_should_be_removed_only:
# Ensure that `if __name__ == '__main__':` calls `removed_module()`, which ensures that the module has no other code in it
main = self._find_main_call('removed_module')
# FIXME: Ensure that the version in the call to removed_module is less than +2.
# Otherwise it's time to remove the file (This may need to be done in another test to
# avoid breaking whenever the Ansible version bumps)
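# Validates module package directories rather than individual modules (currently just that an __init__.py is present).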
class PythonPackageValidator(Validator):
REJECTLIST_FILES = frozenset(('__pycache__',))
def __init__(self, path, reporter=None):
super(PythonPackageValidator, self).__init__(reporter=reporter or Reporter())
self.path = path
self.basename = os.path.basename(path)
@property
def object_name(self):
return self.basename
@property
def object_path(self):
return self.path
def validate(self):
super(PythonPackageValidator, self).validate()
if self.basename in self.REJECTLIST_FILES:
return
init_file = os.path.join(self.path, '__init__.py')
if not os.path.exists(init_file):
self.reporter.error(
path=self.object_path,
code='subdirectory-missing-init',
msg='Ansible module subdirectories must contain an __init__.py'
)
def setup_collection_loader():
collections_paths = os.environ.get('ANSIBLE_COLLECTIONS_PATH', '').split(os.pathsep)
_AnsibleCollectionFinder(collections_paths)
def re_compile(value):
"""
Argparse expects type-conversion callables to raise TypeError, but
re.compile raises a re.error exception.
This function is a shorthand that converts the re.error exception to a
TypeError.
"""
try:
return re.compile(value)
except re.error as e:
raise TypeError(e)
def run():
parser = argparse.ArgumentParser(prog="validate-modules")
parser.add_argument('modules', nargs='+',
help='Path to module or module directory')
parser.add_argument('-w', '--warnings', help='Show warnings',
action='store_true')
parser.add_argument('--exclude', help='RegEx exclusion pattern',
type=re_compile)
parser.add_argument('--arg-spec', help='Analyze module argument spec',
action='store_true', default=False)
parser.add_argument('--base-branch', default=None,
help='Used in determining if new options were added')
parser.add_argument('--format', choices=['json', 'plain'], default='plain',
help='Output format. Default: "%(default)s"')
parser.add_argument('--output', default='-',
help='Output location, use "-" for stdout. '
'Default "%(default)s"')
parser.add_argument('--collection',
help='Specifies the path to the collection, when '
'validating files within a collection. Ensure '
'that ANSIBLE_COLLECTIONS_PATH is set so the '
'contents of the collection can be located')
parser.add_argument('--collection-version',
help='The collection\'s version number used to check '
'deprecations')
args = parser.parse_args()
args.modules = [m.rstrip('/') for m in args.modules]
reporter = Reporter()
git_cache = GitCache(args.base_branch)
check_dirs = set()
routing = None
if args.collection:
setup_collection_loader()
routing_file = 'meta/runtime.yml'
# Load meta/runtime.yml if it exists, as it may contain deprecation information
if os.path.isfile(routing_file):
try:
with open(routing_file) as f:
routing = yaml.safe_load(f)
except yaml.error.MarkedYAMLError as ex:
print('%s:%d:%d: YAML load failed: %s' % (routing_file, ex.context_mark.line + 1, ex.context_mark.column + 1, re.sub(r'\s+', ' ', str(ex))))
except Exception as ex: # pylint: disable=broad-except
print('%s:%d:%d: YAML load failed: %s' % (routing_file, 0, 0, re.sub(r'\s+', ' ', str(ex))))
for module in args.modules:
if os.path.isfile(module):
path = module
if args.exclude and args.exclude.search(path):
continue
if ModuleValidator.is_blacklisted(path):
continue
with ModuleValidator(path, collection=args.collection, collection_version=args.collection_version,
analyze_arg_spec=args.arg_spec, base_branch=args.base_branch,
git_cache=git_cache, reporter=reporter, routing=routing) as mv1:
mv1.validate()
check_dirs.add(os.path.dirname(path))
for root, dirs, files in os.walk(module):
basedir = root[len(module) + 1:].split('/', 1)[0]
if basedir in REJECTLIST_DIRS:
continue
for dirname in dirs:
if root == module and dirname in REJECTLIST_DIRS:
continue
path = os.path.join(root, dirname)
if args.exclude and args.exclude.search(path):
continue
check_dirs.add(path)
for filename in files:
path = os.path.join(root, filename)
if args.exclude and args.exclude.search(path):
continue
if ModuleValidator.is_blacklisted(path):
continue
with ModuleValidator(path, collection=args.collection, collection_version=args.collection_version,
analyze_arg_spec=args.arg_spec, base_branch=args.base_branch,
git_cache=git_cache, reporter=reporter, routing=routing) as mv2:
mv2.validate()
if not args.collection:
for path in sorted(check_dirs):
pv = PythonPackageValidator(path, reporter=reporter)
pv.validate()
if args.format == 'plain':
sys.exit(reporter.plain(warnings=args.warnings, output=args.output))
else:
sys.exit(reporter.json(warnings=args.warnings, output=args.output))
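# Caches `git ls-tree` listings for the base branch and HEAD so base-branch module lookups don't repeatedly shell out to git; falls back to walking the tree when git is unavailable.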
class GitCache:
def __init__(self, base_branch):
self.base_branch = base_branch
if self.base_branch:
self.base_tree = self._git(['ls-tree', '-r', '--name-only', self.base_branch, 'lib/ansible/modules/'])
else:
self.base_tree = []
try:
self.head_tree = self._git(['ls-tree', '-r', '--name-only', 'HEAD', 'lib/ansible/modules/'])
except GitError as ex:
if ex.status == 128:
# fallback when there is no .git directory
self.head_tree = self._get_module_files()
else:
raise
except OSError as ex:
if ex.errno == errno.ENOENT:
# fallback when git is not installed
self.head_tree = self._get_module_files()
else:
raise
self.base_module_paths = dict((os.path.basename(p), p) for p in self.base_tree if os.path.splitext(p)[1] in ('.py', '.ps1'))
self.base_module_paths.pop('__init__.py', None)
self.head_aliased_modules = set()
for path in self.head_tree:
filename = os.path.basename(path)
if filename.startswith('_') and filename != '__init__.py':
if os.path.islink(path):
self.head_aliased_modules.add(os.path.basename(os.path.realpath(path)))
@staticmethod
def _get_module_files():
module_files = []
for (dir_path, dir_names, file_names) in os.walk('lib/ansible/modules/'):
for file_name in file_names:
module_files.append(os.path.join(dir_path, file_name))
return module_files
@staticmethod
def _git(args):
cmd = ['git'] + args
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
if p.returncode != 0:
raise GitError(stderr, p.returncode)
return stdout.decode('utf-8').splitlines()
class GitError(Exception):
def __init__(self, message, status):
super(GitError, self).__init__(message)
self.status = status
def main():
try:
run()
except KeyboardInterrupt:
pass
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 69,561 |
ansible-test validate-modules: missing `default` in docs not detected if `default=False` in argspec
|
##### SUMMARY
Happened here: https://github.com/ansible-collections/community.general/pull/341
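A minimal sketch of the pattern (option name hypothetical): for `bool` options the validator falls back to a documentation-side default of `False`, so an undocumented default compares equal to an explicit `default=False` in the argspec and no mismatch is reported.
```python
# Hypothetical repro sketch -- not an actual shipped module.
DOCUMENTATION_OPTIONS = {
    # 'default' is NOT documented for this bool option
    'force': {'description': 'Overwrite the target.', 'type': 'bool'},
}

ARGUMENT_SPEC = {
    'force': {'type': 'bool', 'default': False},
}

# validate-modules treats the undocumented bool default as False, so
# False == False and no doc-default-does-not-match-spec error is raised,
# even though the documentation never states the default.
```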
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
```paste below
devel
```
|
https://github.com/ansible/ansible/issues/69561
|
https://github.com/ansible/ansible/pull/72699
|
f94ba68d8f287918456c5de8115dafb0c69e8e7c
|
5226ac5778d3b57296b925de5d4ad0b485bb11cd
| 2020-05-16T12:58:26Z |
python
| 2020-12-04T17:13:14Z |
test/sanity/ignore.txt
|
docs/docsite/rst/dev_guide/testing/sanity/no-smart-quotes.rst no-smart-quotes
examples/play.yml shebang
examples/scripts/my_test.py shebang # example module but not in a normal module location
examples/scripts/my_test_facts.py shebang # example module but not in a normal module location
examples/scripts/my_test_info.py shebang # example module but not in a normal module location
examples/scripts/ConfigureRemotingForAnsible.ps1 pslint:PSCustomUseLiteralPath
examples/scripts/upgrade_to_ps3.ps1 pslint:PSCustomUseLiteralPath
examples/scripts/upgrade_to_ps3.ps1 pslint:PSUseApprovedVerbs
hacking/build-ansible.py shebang # only run by release engineers, Python 3.6+ required
hacking/build_library/build_ansible/announce.py compile-2.6!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/announce.py compile-2.7!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/announce.py compile-3.5!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/dump_config.py compile-2.6!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/dump_config.py compile-2.7!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/dump_config.py compile-3.5!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/dump_keywords.py compile-2.6!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/dump_keywords.py compile-2.7!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/dump_keywords.py compile-3.5!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/generate_man.py compile-2.6!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/generate_man.py compile-2.7!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/generate_man.py compile-3.5!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/porting_guide.py compile-2.6!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/porting_guide.py compile-2.7!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/porting_guide.py compile-3.5!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/release_announcement.py compile-2.6!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/release_announcement.py compile-2.7!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/release_announcement.py compile-3.5!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/update_intersphinx.py compile-2.6!skip # release process and docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/update_intersphinx.py compile-2.7!skip # release process and docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/update_intersphinx.py compile-3.5!skip # release process and docs build only, 3.6+ required
hacking/build_library/build_ansible/commands.py compile-2.6!skip # release and docs process only, 3.6+ required
hacking/build_library/build_ansible/commands.py compile-2.7!skip # release and docs process only, 3.6+ required
hacking/build_library/build_ansible/commands.py compile-3.5!skip # release and docs process only, 3.6+ required
lib/ansible/keyword_desc.yml no-unwanted-files
lib/ansible/cli/console.py pylint:blacklisted-name
lib/ansible/cli/scripts/ansible_cli_stub.py shebang
lib/ansible/cli/scripts/ansible_connection_cli_stub.py shebang
lib/ansible/config/base.yml no-unwanted-files
lib/ansible/executor/playbook_executor.py pylint:blacklisted-name
lib/ansible/executor/powershell/async_watchdog.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/powershell/async_wrapper.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/powershell/exec_wrapper.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/task_queue_manager.py pylint:blacklisted-name
lib/ansible/galaxy/collection/__init__.py compile-2.6!skip # 'ansible-galaxy collection' requires 2.7+
lib/ansible/module_utils/compat/_selectors2.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/compat/_selectors2.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/compat/_selectors2.py pylint:blacklisted-name
lib/ansible/module_utils/distro/__init__.py empty-init # breaks namespacing, bundled, do not override
lib/ansible/module_utils/distro/_distro.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/distro/_distro.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/distro/_distro.py no-assert
lib/ansible/module_utils/distro/_distro.py pep8!skip # bundled code we don't want to modify
lib/ansible/module_utils/facts/__init__.py empty-init # breaks namespacing, deprecate and eventually remove
lib/ansible/module_utils/facts/network/linux.py pylint:blacklisted-name
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.ArgvParser.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.CommandUtil.psm1 pslint:PSProvideCommentHelp # need to agree on best format for comment location
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.CommandUtil.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.FileUtil.psm1 pslint:PSCustomUseLiteralPath
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.FileUtil.psm1 pslint:PSProvideCommentHelp
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1 pslint:PSCustomUseLiteralPath
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.LinkUtil.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/pycompat24.py no-get-exception
lib/ansible/module_utils/six/__init__.py empty-init # breaks namespacing, bundled, do not override
lib/ansible/module_utils/six/__init__.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/six/__init__.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/six/__init__.py no-basestring
lib/ansible/module_utils/six/__init__.py no-dict-iteritems
lib/ansible/module_utils/six/__init__.py no-dict-iterkeys
lib/ansible/module_utils/six/__init__.py no-dict-itervalues
lib/ansible/module_utils/six/__init__.py replace-urlopen
lib/ansible/module_utils/urls.py pylint:blacklisted-name
lib/ansible/module_utils/urls.py replace-urlopen
lib/ansible/modules/apt.py validate-modules:parameter-invalid
lib/ansible/modules/apt_key.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/apt_repository.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/apt_repository.py validate-modules:parameter-invalid
lib/ansible/modules/apt_repository.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/apt_repository.py validate-modules:undocumented-parameter
lib/ansible/modules/assemble.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/async_status.py use-argspec-type-path
lib/ansible/modules/async_status.py validate-modules!skip
lib/ansible/modules/async_wrapper.py ansible-doc!skip # not an actual module
lib/ansible/modules/async_wrapper.py pylint:ansible-bad-function # ignore, required
lib/ansible/modules/async_wrapper.py use-argspec-type-path
lib/ansible/modules/blockinfile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/blockinfile.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/command.py validate-modules:doc-missing-type
lib/ansible/modules/command.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/command.py validate-modules:undocumented-parameter
lib/ansible/modules/copy.py pylint:blacklisted-name
lib/ansible/modules/copy.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/copy.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/copy.py validate-modules:undocumented-parameter
lib/ansible/modules/dnf.py validate-modules:doc-required-mismatch
lib/ansible/modules/dnf.py validate-modules:parameter-invalid
lib/ansible/modules/file.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/file.py validate-modules:undocumented-parameter
lib/ansible/modules/find.py use-argspec-type-path # fix needed
lib/ansible/modules/git.py pylint:blacklisted-name
lib/ansible/modules/git.py use-argspec-type-path
lib/ansible/modules/git.py validate-modules:doc-missing-type
lib/ansible/modules/git.py validate-modules:doc-required-mismatch
lib/ansible/modules/hostname.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/iptables.py pylint:blacklisted-name
lib/ansible/modules/lineinfile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/lineinfile.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/lineinfile.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/package_facts.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/pip.py pylint:blacklisted-name
lib/ansible/modules/pip.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/replace.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/service.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/service.py validate-modules:use-run-command-not-popen
lib/ansible/modules/stat.py validate-modules:parameter-invalid
lib/ansible/modules/stat.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/stat.py validate-modules:undocumented-parameter
lib/ansible/modules/systemd.py validate-modules:parameter-invalid
lib/ansible/modules/systemd.py validate-modules:return-syntax-error
lib/ansible/modules/sysvinit.py validate-modules:return-syntax-error
lib/ansible/modules/unarchive.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/uri.py pylint:blacklisted-name
lib/ansible/modules/uri.py validate-modules:doc-required-mismatch
lib/ansible/modules/user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/user.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/user.py validate-modules:use-run-command-not-popen
lib/ansible/modules/yum.py pylint:blacklisted-name
lib/ansible/modules/yum.py validate-modules:parameter-invalid
lib/ansible/modules/yum_repository.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/yum_repository.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/yum_repository.py validate-modules:undocumented-parameter
lib/ansible/parsing/vault/__init__.py pylint:blacklisted-name
lib/ansible/playbook/base.py pylint:blacklisted-name
lib/ansible/playbook/collectionsearch.py required-and-default-attributes # https://github.com/ansible/ansible/issues/61460
lib/ansible/playbook/helpers.py pylint:blacklisted-name
lib/ansible/playbook/role/__init__.py pylint:blacklisted-name
lib/ansible/plugins/action/normal.py action-plugin-docs # default action plugin for modules without a dedicated action plugin
lib/ansible/plugins/cache/base.py ansible-doc!skip # not a plugin, but a stub for backwards compatibility
lib/ansible/plugins/lookup/sequence.py pylint:blacklisted-name
lib/ansible/plugins/strategy/__init__.py pylint:blacklisted-name
lib/ansible/plugins/strategy/linear.py pylint:blacklisted-name
lib/ansible/vars/hostvars.py pylint:blacklisted-name
test/integration/targets/ansible-test-docker/ansible_collections/ns/col/plugins/modules/hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-docker/ansible_collections/ns/col/tests/unit/plugins/module_utils/test_my_util.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-docker/ansible_collections/ns/col/tests/unit/plugins/modules/test_hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test/ansible_collections/ns/col/plugins/modules/hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-function # ignore, required for testing
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-import # ignore, required for testing
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-import-from # ignore, required for testing
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/unit/plugins/module_utils/test_my_util.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/unit/plugins/modules/test_hello.py pylint:relative-beyond-top-level
test/integration/targets/collections_plugin_namespace/collection_root/ansible_collections/my_ns/my_col/plugins/lookup/lookup_no_future_boilerplate.py future-import-boilerplate # testing Python 2.x implicit relative imports
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/module_utils/my_util2.py pylint:relative-beyond-top-level
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/module_utils/my_util3.py pylint:relative-beyond-top-level
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/modules/my_module.py pylint:relative-beyond-top-level
test/integration/targets/gathering_facts/library/bogus_facts shebang
test/integration/targets/gathering_facts/library/facts_one shebang
test/integration/targets/gathering_facts/library/facts_two shebang
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.0/DSCResources/ANSIBLE_xSetReboot/ANSIBLE_xSetReboot.psm1 pslint!skip
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.0/DSCResources/ANSIBLE_xTestResource/ANSIBLE_xTestResource.psm1 pslint!skip
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.0/xTestDsc.psd1 pslint!skip
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.1/DSCResources/ANSIBLE_xTestResource/ANSIBLE_xTestResource.psm1 pslint!skip
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.1/xTestDsc.psd1 pslint!skip
test/integration/targets/incidental_win_ping/library/win_ping_syntax_error.ps1 pslint!skip
test/integration/targets/incidental_win_reboot/templates/post_reboot.ps1 pslint!skip
test/integration/targets/lookup_csvfile/files/crlf.csv line-endings
test/integration/targets/lookup_ini/lookup-8859-15.ini no-smart-quotes
test/integration/targets/module_precedence/lib_with_extension/a.ini shebang
test/integration/targets/module_precedence/lib_with_extension/ping.ini shebang
test/integration/targets/module_precedence/roles_with_extension/foo/library/a.ini shebang
test/integration/targets/module_precedence/roles_with_extension/foo/library/ping.ini shebang
test/integration/targets/module_utils/library/test.py future-import-boilerplate # allow testing of Python 2.x implicit relative imports
test/integration/targets/module_utils/module_utils/bar0/foo.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/foo.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/sub/bar/__init__.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/sub/bar/bar.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/yak/zebra/foo.py pylint:blacklisted-name
test/integration/targets/old_style_modules_posix/library/helloworld.sh shebang
test/integration/targets/template/files/encoding_1252_utf-8.expected no-smart-quotes
test/integration/targets/template/files/encoding_1252_windows-1252.expected no-smart-quotes
test/integration/targets/template/files/foo.dos.txt line-endings
test/integration/targets/template/templates/encoding_1252.j2 no-smart-quotes
test/integration/targets/unicode/unicode.yml no-smart-quotes
test/integration/targets/win_exec_wrapper/library/test_fail.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_exec_wrapper/tasks/main.yml no-smart-quotes # We are explicitly testing smart quote support for env vars
test/integration/targets/win_fetch/tasks/main.yml no-smart-quotes # We are explicitly testing smart quotes in the file name to fetch
test/integration/targets/win_module_utils/library/legacy_only_new_way_win_line_ending.ps1 line-endings # Explicitly tests that we still work with Windows line endings
test/integration/targets/win_module_utils/library/legacy_only_old_way_win_line_ending.ps1 line-endings # Explicitly tests that we still work with Windows line endings
test/integration/targets/win_script/files/test_script.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/win_script/files/test_script_creates_file.ps1 pslint:PSAvoidUsingCmdletAliases
test/integration/targets/win_script/files/test_script_removes_file.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_script/files/test_script_with_args.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/win_script/files/test_script_with_splatting.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/windows-minimal/library/win_ping_syntax_error.ps1 pslint!skip
test/lib/ansible_test/_data/requirements/constraints.txt test-constraints
test/lib/ansible_test/_data/requirements/integration.cloud.azure.txt test-constraints
test/lib/ansible_test/_data/requirements/sanity.ps1 pslint:PSCustomUseLiteralPath # Uses wildcards on purpose
test/lib/ansible_test/_data/sanity/pylint/plugins/string_format.py use-compat-six
test/lib/ansible_test/_data/setup/ConfigureRemotingForAnsible.ps1 pslint:PSCustomUseLiteralPath
test/lib/ansible_test/_data/setup/windows-httptester.ps1 pslint:PSCustomUseLiteralPath
test/support/integration/plugins/module_utils/cloud.py future-import-boilerplate
test/support/integration/plugins/module_utils/cloud.py metaclass-boilerplate
test/support/integration/plugins/module_utils/compat/ipaddress.py future-import-boilerplate
test/support/integration/plugins/module_utils/compat/ipaddress.py metaclass-boilerplate
test/support/integration/plugins/module_utils/compat/ipaddress.py no-unicode-literals
test/support/integration/plugins/module_utils/database.py future-import-boilerplate
test/support/integration/plugins/module_utils/database.py metaclass-boilerplate
test/support/integration/plugins/module_utils/mysql.py future-import-boilerplate
test/support/integration/plugins/module_utils/mysql.py metaclass-boilerplate
test/support/integration/plugins/module_utils/network/common/utils.py future-import-boilerplate
test/support/integration/plugins/module_utils/network/common/utils.py metaclass-boilerplate
test/support/integration/plugins/module_utils/postgres.py future-import-boilerplate
test/support/integration/plugins/module_utils/postgres.py metaclass-boilerplate
test/support/integration/plugins/modules/lvg.py pylint:blacklisted-name
test/support/integration/plugins/modules/timezone.py pylint:blacklisted-name
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/doc_fragments/netconf.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/doc_fragments/netconf.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/doc_fragments/network_agnostic.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/doc_fragments/network_agnostic.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/compat/ipaddress.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/compat/ipaddress.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/compat/ipaddress.py no-unicode-literals
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/compat/ipaddress.py pep8:E203
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/cfg/base.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/cfg/base.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/config.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/config.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/facts/facts.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/facts/facts.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/netconf.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/netconf.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/network.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/network.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/parsing.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/parsing.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/utils.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/utils.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/netconf/netconf.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/netconf/netconf.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/restconf/restconf.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/restconf/restconf.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/doc_fragments/ios.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/doc_fragments/ios.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/module_utils/network/ios/ios.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/module_utils/network/ios/ios.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/modules/ios_command.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/modules/ios_command.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/modules/ios_config.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/modules/ios_config.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/modules/ios_config.py pep8:E501
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/doc_fragments/vyos.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/doc_fragments/vyos.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/module_utils/network/vyos/vyos.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/module_utils/network/vyos/vyos.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_command.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_command.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_command.py pep8:E231
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_command.py pylint:blacklisted-name
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_config.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_config.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_facts.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_facts.py metaclass-boilerplate
test/support/windows-integration/plugins/modules/async_status.ps1 pslint!skip
test/support/windows-integration/plugins/modules/setup.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_copy.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_dsc.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_feature.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_find.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_lineinfile.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_regedit.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_security_policy.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_shell.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_wait_for.ps1 pslint!skip
test/units/executor/test_play_iterator.py pylint:blacklisted-name
test/units/module_utils/basic/test_deprecate_warn.py pylint:ansible-deprecated-no-version
test/units/module_utils/basic/test_deprecate_warn.py pylint:ansible-deprecated-version
test/units/module_utils/basic/test_run_command.py pylint:blacklisted-name
test/units/module_utils/urls/fixtures/multipart.txt line-endings # Fixture for HTTP tests that use CRLF
test/units/module_utils/urls/test_Request.py replace-urlopen
test/units/module_utils/urls/test_fetch_url.py replace-urlopen
test/units/modules/test_apt.py pylint:blacklisted-name
test/units/parsing/vault/test_vault.py pylint:blacklisted-name
test/units/playbook/role/test_role.py pylint:blacklisted-name
test/units/plugins/test_plugins.py pylint:blacklisted-name
test/units/template/test_templar.py pylint:blacklisted-name
test/units/utils/collection_loader/fixtures/collections/ansible_collections/testns/testcoll/plugins/action/my_action.py pylint:relative-beyond-top-level
test/units/utils/collection_loader/fixtures/collections/ansible_collections/testns/testcoll/plugins/module_utils/my_util.py future-import-boilerplate # test expects no boilerplate
test/units/utils/collection_loader/fixtures/collections/ansible_collections/testns/testcoll/plugins/module_utils/my_util.py metaclass-boilerplate # test expects no boilerplate
test/units/utils/collection_loader/fixtures/collections/ansible_collections/testns/testcoll/plugins/modules/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/ansible/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/testns/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/testns/testcoll/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/test_collection_loader.py pylint:undefined-variable # magic runtime local var splatting
test/utils/shippable/check_matrix.py replace-urlopen
test/utils/shippable/timing.py shebang
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,873 |
fileglob lookup plugin returns inconsistent values
|
##### SUMMARY
The `fileglob` lookup plugin returns values inconsistently, depending on the order of arguments (terms) given it.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
fileglob
##### ANSIBLE VERSION
```paste below
ansible 2.10.3
config file = /Users/jpm/take/training/bugs/fileglob/ansible.cfg
configured module search path = ['/Users/jpm/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/jpm/take/training/bugs/fileglob/env.v3/lib/python3.8/site-packages/ansible
executable location = /Users/jpm/take/training/bugs/fileglob/env.v3/bin/ansible
python version = 3.8.5 (default, Sep 27 2020, 11:38:54) [Clang 12.0.0 (clang-1200.0.32.2)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_NOCOWS(/Users/jpm/take/training/bugs/fileglob/ansible.cfg) = True
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
macOS Catalina (10.15.7)
##### STEPS TO REPRODUCE
Run the example playbook
```yaml
- hosts: localhost
connection: local
gather_facts: false
vars:
dir: files
tasks:
- file: path='{{ dir }}' state=directory
- file: path='setvars.bat' state=touch # in current directory!
- file: path='{{ dir }}/{{ item }}' state=touch
loop:
- json.c
- strlcpy.c
- base64.c
- json.h
- base64.h
- strlcpy.h
- jo.c
- debug:
msg: '{{ query("fileglob", "setvars.bat", "{{ dir }}/*.[ch]" ) }}'
- debug:
msg: '{{ query("fileglob", "{{ dir }}/*.[ch]", "setvars.bat" ) }}'
```
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expected the path `setvars.bat` to be contained in both result sets, but it's in the first only. The results are identical if I use `lookup()` instead of `query()` and when I use `with_fileglob:`
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
PLAY [localhost] ***************************************************************
TASK [file] ********************************************************************
ok: [localhost]
TASK [file] ********************************************************************
changed: [localhost] => (item=json.c)
changed: [localhost] => (item=strlcpy.c)
changed: [localhost] => (item=base64.c)
changed: [localhost] => (item=json.h)
changed: [localhost] => (item=base64.h)
changed: [localhost] => (item=strlcpy.h)
changed: [localhost] => (item=jo.c)
TASK [debug] *******************************************************************
ok: [localhost] => {
"msg": [
"/Users/jpm/take/training/bugs/fileglob/setvars.bat",
"/Users/jpm/take/training/bugs/fileglob/files/json.c",
"/Users/jpm/take/training/bugs/fileglob/files/strlcpy.c",
"/Users/jpm/take/training/bugs/fileglob/files/base64.h",
"/Users/jpm/take/training/bugs/fileglob/files/json.h",
"/Users/jpm/take/training/bugs/fileglob/files/base64.c",
"/Users/jpm/take/training/bugs/fileglob/files/strlcpy.h",
"/Users/jpm/take/training/bugs/fileglob/files/jo.c"
]
}
TASK [debug] *******************************************************************
ok: [localhost] => {
"msg": [
"/Users/jpm/take/training/bugs/fileglob/files/json.c",
"/Users/jpm/take/training/bugs/fileglob/files/strlcpy.c",
"/Users/jpm/take/training/bugs/fileglob/files/base64.h",
"/Users/jpm/take/training/bugs/fileglob/files/json.h",
"/Users/jpm/take/training/bugs/fileglob/files/base64.c",
"/Users/jpm/take/training/bugs/fileglob/files/strlcpy.h",
"/Users/jpm/take/training/bugs/fileglob/files/jo.c"
]
}
PLAY RECAP *********************************************************************
localhost : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/72873
|
https://github.com/ansible/ansible/pull/72879
|
e97f333532cc1bc140b2377237f0f4e85b2b5f6f
|
fe17cb6eba0df9e6c19e6dd8a78701fca6fa70e4
| 2020-12-06T11:26:14Z |
python
| 2020-12-08T15:31:34Z |
changelogs/fragments/72873-fix-fileglob-ordering.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,873 |
fileglob lookup plugin returns inconsistent values
|
##### SUMMARY
The `fileglob` lookup plugin returns values inconsistently, depending on the order of arguments (terms) given it.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
fileglob
##### ANSIBLE VERSION
```paste below
ansible 2.10.3
config file = /Users/jpm/take/training/bugs/fileglob/ansible.cfg
configured module search path = ['/Users/jpm/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/jpm/take/training/bugs/fileglob/env.v3/lib/python3.8/site-packages/ansible
executable location = /Users/jpm/take/training/bugs/fileglob/env.v3/bin/ansible
python version = 3.8.5 (default, Sep 27 2020, 11:38:54) [Clang 12.0.0 (clang-1200.0.32.2)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_NOCOWS(/Users/jpm/take/training/bugs/fileglob/ansible.cfg) = True
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
macOS Catalina (10.15.7)
##### STEPS TO REPRODUCE
Run the example playbook
```yaml
- hosts: localhost
connection: local
gather_facts: false
vars:
dir: files
tasks:
- file: path='{{ dir }}' state=directory
- file: path='setvars.bat' state=touch # in current directory!
- file: path='{{ dir }}/{{ item }}' state=touch
loop:
- json.c
- strlcpy.c
- base64.c
- json.h
- base64.h
- strlcpy.h
- jo.c
- debug:
msg: '{{ query("fileglob", "setvars.bat", "{{ dir }}/*.[ch]" ) }}'
- debug:
msg: '{{ query("fileglob", "{{ dir }}/*.[ch]", "setvars.bat" ) }}'
```
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expected the path `setvars.bat` to be contained in both result sets, but it's in the first only. The results are identical if I use `lookup()` instead of `query()` and when I use `with_fileglob:`
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
PLAY [localhost] ***************************************************************
TASK [file] ********************************************************************
ok: [localhost]
TASK [file] ********************************************************************
changed: [localhost] => (item=json.c)
changed: [localhost] => (item=strlcpy.c)
changed: [localhost] => (item=base64.c)
changed: [localhost] => (item=json.h)
changed: [localhost] => (item=base64.h)
changed: [localhost] => (item=strlcpy.h)
changed: [localhost] => (item=jo.c)
TASK [debug] *******************************************************************
ok: [localhost] => {
"msg": [
"/Users/jpm/take/training/bugs/fileglob/setvars.bat",
"/Users/jpm/take/training/bugs/fileglob/files/json.c",
"/Users/jpm/take/training/bugs/fileglob/files/strlcpy.c",
"/Users/jpm/take/training/bugs/fileglob/files/base64.h",
"/Users/jpm/take/training/bugs/fileglob/files/json.h",
"/Users/jpm/take/training/bugs/fileglob/files/base64.c",
"/Users/jpm/take/training/bugs/fileglob/files/strlcpy.h",
"/Users/jpm/take/training/bugs/fileglob/files/jo.c"
]
}
TASK [debug] *******************************************************************
ok: [localhost] => {
"msg": [
"/Users/jpm/take/training/bugs/fileglob/files/json.c",
"/Users/jpm/take/training/bugs/fileglob/files/strlcpy.c",
"/Users/jpm/take/training/bugs/fileglob/files/base64.h",
"/Users/jpm/take/training/bugs/fileglob/files/json.h",
"/Users/jpm/take/training/bugs/fileglob/files/base64.c",
"/Users/jpm/take/training/bugs/fileglob/files/strlcpy.h",
"/Users/jpm/take/training/bugs/fileglob/files/jo.c"
]
}
PLAY RECAP *********************************************************************
localhost : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/72873
|
https://github.com/ansible/ansible/pull/72879
|
e97f333532cc1bc140b2377237f0f4e85b2b5f6f
|
fe17cb6eba0df9e6c19e6dd8a78701fca6fa70e4
| 2020-12-06T11:26:14Z |
python
| 2020-12-08T15:31:34Z |
lib/ansible/plugins/lookup/fileglob.py
|
# (c) 2012, Michael DeHaan <[email protected]>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = """
name: fileglob
author: Michael DeHaan
version_added: "1.4"
short_description: list files matching a pattern
description:
- Matches all files in a single directory, non-recursively, that match a pattern.
It calls Python's "glob" library.
options:
_terms:
description: path(s) of files to read
required: True
notes:
- Patterns are only supported on files, not on directories/paths.
- Matching is against local system files on the Ansible controller.
To iterate a list of files on a remote node, use the M(ansible.builtin.find) module.
- Returns a string list of paths joined by commas, or an empty list if no files match. For a 'true list' pass C(wantlist=True) to the lookup.
"""
EXAMPLES = """
- name: Display paths of all .txt files in dir
debug: msg={{ lookup('fileglob', '/my/path/*.txt') }}
- name: Copy each file over that matches the given pattern
copy:
src: "{{ item }}"
dest: "/etc/fooapp/"
owner: "root"
mode: 0600
with_fileglob:
- "/playbooks/files/fooapp/*"
"""
RETURN = """
_list:
description:
- list of files
type: list
elements: path
"""
import os
import glob
from ansible.plugins.lookup import LookupBase
from ansible.errors import AnsibleFileNotFound
from ansible.module_utils._text import to_bytes, to_text
class LookupModule(LookupBase):
def run(self, terms, variables=None, **kwargs):
ret = []
for term in terms:
term_file = os.path.basename(term)
found_paths = []
if term_file != term:
found_paths.append(self.find_file_in_search_path(variables, 'files', os.path.dirname(term)))
else:
# no dir, just file, so use paths and 'files' paths instead
if 'ansible_search_path' in variables:
paths = variables['ansible_search_path']
else:
paths = [self.get_basedir(variables)]
for p in paths:
found_paths.append(os.path.join(p, 'files'))
found_paths.append(p)
for dwimmed_path in found_paths:
if dwimmed_path:
globbed = glob.glob(to_bytes(os.path.join(dwimmed_path, term_file), errors='surrogate_or_strict'))
ret.extend(to_text(g, errors='surrogate_or_strict') for g in globbed if os.path.isfile(g))
if ret:
break
return ret
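# Editor's note (hedged sketch, not necessarily the merged fix): the
# `if ret: break` above tests the list accumulated across *all* terms, so
# once an earlier term has matched, a later term stops probing after its
# first candidate directory; that is why "setvars.bat" vanishes when it is
# passed second. One way to make each term search its paths independently:
#
#     for term in terms:
#         ...
#         term_results = []
#         for dwimmed_path in found_paths:
#             if dwimmed_path:
#                 globbed = glob.glob(to_bytes(os.path.join(dwimmed_path, term_file),
#                                              errors='surrogate_or_strict'))
#                 term_results.extend(to_text(g, errors='surrogate_or_strict')
#                                     for g in globbed if os.path.isfile(g))
#                 if term_results:
#                     break
#         ret.extend(term_results)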
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,873 |
fileglob lookup plugin returns inconsistent values
|
##### SUMMARY
The `fileglob` lookup plugin returns values inconsistently, depending on the order of arguments (terms) given it.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
fileglob
##### ANSIBLE VERSION
```paste below
ansible 2.10.3
config file = /Users/jpm/take/training/bugs/fileglob/ansible.cfg
configured module search path = ['/Users/jpm/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/jpm/take/training/bugs/fileglob/env.v3/lib/python3.8/site-packages/ansible
executable location = /Users/jpm/take/training/bugs/fileglob/env.v3/bin/ansible
python version = 3.8.5 (default, Sep 27 2020, 11:38:54) [Clang 12.0.0 (clang-1200.0.32.2)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_NOCOWS(/Users/jpm/take/training/bugs/fileglob/ansible.cfg) = True
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
macOS Catalina (10.15.7)
##### STEPS TO REPRODUCE
Run the example playbook
```yaml
- hosts: localhost
connection: local
gather_facts: false
vars:
dir: files
tasks:
- file: path='{{ dir }}' state=directory
- file: path='setvars.bat' state=touch # in current directory!
- file: path='{{ dir }}/{{ item }}' state=touch
loop:
- json.c
- strlcpy.c
- base64.c
- json.h
- base64.h
- strlcpy.h
- jo.c
- debug:
msg: '{{ query("fileglob", "setvars.bat", "{{ dir }}/*.[ch]" ) }}'
- debug:
msg: '{{ query("fileglob", "{{ dir }}/*.[ch]", "setvars.bat" ) }}'
```
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expected the path `setvars.bat` to be contained in both result sets, but it's in the first only. The results are identical if I use `lookup()` instead of `query()` and when I use `with_fileglob:`
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
PLAY [localhost] ***************************************************************
TASK [file] ********************************************************************
ok: [localhost]
TASK [file] ********************************************************************
changed: [localhost] => (item=json.c)
changed: [localhost] => (item=strlcpy.c)
changed: [localhost] => (item=base64.c)
changed: [localhost] => (item=json.h)
changed: [localhost] => (item=base64.h)
changed: [localhost] => (item=strlcpy.h)
changed: [localhost] => (item=jo.c)
TASK [debug] *******************************************************************
ok: [localhost] => {
"msg": [
"/Users/jpm/take/training/bugs/fileglob/setvars.bat",
"/Users/jpm/take/training/bugs/fileglob/files/json.c",
"/Users/jpm/take/training/bugs/fileglob/files/strlcpy.c",
"/Users/jpm/take/training/bugs/fileglob/files/base64.h",
"/Users/jpm/take/training/bugs/fileglob/files/json.h",
"/Users/jpm/take/training/bugs/fileglob/files/base64.c",
"/Users/jpm/take/training/bugs/fileglob/files/strlcpy.h",
"/Users/jpm/take/training/bugs/fileglob/files/jo.c"
]
}
TASK [debug] *******************************************************************
ok: [localhost] => {
"msg": [
"/Users/jpm/take/training/bugs/fileglob/files/json.c",
"/Users/jpm/take/training/bugs/fileglob/files/strlcpy.c",
"/Users/jpm/take/training/bugs/fileglob/files/base64.h",
"/Users/jpm/take/training/bugs/fileglob/files/json.h",
"/Users/jpm/take/training/bugs/fileglob/files/base64.c",
"/Users/jpm/take/training/bugs/fileglob/files/strlcpy.h",
"/Users/jpm/take/training/bugs/fileglob/files/jo.c"
]
}
PLAY RECAP *********************************************************************
localhost : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/72873
|
https://github.com/ansible/ansible/pull/72879
|
e97f333532cc1bc140b2377237f0f4e85b2b5f6f
|
fe17cb6eba0df9e6c19e6dd8a78701fca6fa70e4
| 2020-12-06T11:26:14Z |
python
| 2020-12-08T15:31:34Z |
test/integration/targets/lookup_fileglob/issue72873/test.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,873 |
fileglob lookup plugin returns inconsistent values
|
##### SUMMARY
The `fileglob` lookup plugin returns values inconsistently, depending on the order of arguments (terms) given it.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
fileglob
##### ANSIBLE VERSION
```paste below
ansible 2.10.3
config file = /Users/jpm/take/training/bugs/fileglob/ansible.cfg
configured module search path = ['/Users/jpm/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/jpm/take/training/bugs/fileglob/env.v3/lib/python3.8/site-packages/ansible
executable location = /Users/jpm/take/training/bugs/fileglob/env.v3/bin/ansible
python version = 3.8.5 (default, Sep 27 2020, 11:38:54) [Clang 12.0.0 (clang-1200.0.32.2)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_NOCOWS(/Users/jpm/take/training/bugs/fileglob/ansible.cfg) = True
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
macOS Catalina (10.15.7)
##### STEPS TO REPRODUCE
Run the example playbook
```yaml
- hosts: localhost
connection: local
gather_facts: false
vars:
dir: files
tasks:
- file: path='{{ dir }}' state=directory
- file: path='setvars.bat' state=touch # in current directory!
- file: path='{{ dir }}/{{ item }}' state=touch
loop:
- json.c
- strlcpy.c
- base64.c
- json.h
- base64.h
- strlcpy.h
- jo.c
- debug:
msg: '{{ query("fileglob", "setvars.bat", "{{ dir }}/*.[ch]" ) }}'
- debug:
msg: '{{ query("fileglob", "{{ dir }}/*.[ch]", "setvars.bat" ) }}'
```
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expected the path `setvars.bat` to be contained in both result sets, but it's in the first only. The results are identical if I use `lookup()` instead of `query()` and when I use `with_fileglob:`
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
PLAY [localhost] ***************************************************************
TASK [file] ********************************************************************
ok: [localhost]
TASK [file] ********************************************************************
changed: [localhost] => (item=json.c)
changed: [localhost] => (item=strlcpy.c)
changed: [localhost] => (item=base64.c)
changed: [localhost] => (item=json.h)
changed: [localhost] => (item=base64.h)
changed: [localhost] => (item=strlcpy.h)
changed: [localhost] => (item=jo.c)
TASK [debug] *******************************************************************
ok: [localhost] => {
"msg": [
"/Users/jpm/take/training/bugs/fileglob/setvars.bat",
"/Users/jpm/take/training/bugs/fileglob/files/json.c",
"/Users/jpm/take/training/bugs/fileglob/files/strlcpy.c",
"/Users/jpm/take/training/bugs/fileglob/files/base64.h",
"/Users/jpm/take/training/bugs/fileglob/files/json.h",
"/Users/jpm/take/training/bugs/fileglob/files/base64.c",
"/Users/jpm/take/training/bugs/fileglob/files/strlcpy.h",
"/Users/jpm/take/training/bugs/fileglob/files/jo.c"
]
}
TASK [debug] *******************************************************************
ok: [localhost] => {
"msg": [
"/Users/jpm/take/training/bugs/fileglob/files/json.c",
"/Users/jpm/take/training/bugs/fileglob/files/strlcpy.c",
"/Users/jpm/take/training/bugs/fileglob/files/base64.h",
"/Users/jpm/take/training/bugs/fileglob/files/json.h",
"/Users/jpm/take/training/bugs/fileglob/files/base64.c",
"/Users/jpm/take/training/bugs/fileglob/files/strlcpy.h",
"/Users/jpm/take/training/bugs/fileglob/files/jo.c"
]
}
PLAY RECAP *********************************************************************
localhost : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/72873
|
https://github.com/ansible/ansible/pull/72879
|
e97f333532cc1bc140b2377237f0f4e85b2b5f6f
|
fe17cb6eba0df9e6c19e6dd8a78701fca6fa70e4
| 2020-12-06T11:26:14Z |
python
| 2020-12-08T15:31:34Z |
test/integration/targets/lookup_fileglob/runme.sh
|
#!/usr/bin/env bash
set -eux
# fun multilevel finds
for seed in play_adj play_adj_subdir somepath/play_adj_subsubdir in_role otherpath/in_role_subdir
do
ansible-playbook find_levels/play.yml -e "seed='${seed}'" "$@"
done
# non-existent paths
for seed in foo foo/bar foo/bar/baz
do
ansible-playbook non_existent/play.yml -e "seed='${seed}'" "$@"
done
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,900 |
ansible-inventory fails, ansible_base distribution not found
|
##### SUMMARY
Trying to do `ansible-inventory` with an inventory plugin that doesn't appear to do anything wrong results in a traceback saying that it can't find the ansible_base distribution.
(I am aware that the python package name for Ansible itself changed `ansible` -> `ansible-base` -> `ansible-core`)
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-inventory
##### ANSIBLE VERSION
```paste below
(my_linode) [alancoding@alan-red-hat test]$ ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying
the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable
at any point.
ansible 2.11.0.dev0
config file = None
configured module search path = ['/home/alancoding/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible
ansible collection location = /home/alancoding/.ansible/collections:/usr/share/ansible/collections
executable location = /home/alancoding/test/my_linode/bin/ansible
python version = 3.9.0 (default, Oct 6 2020, 00:00:00) [GCC 10.2.1 20200826 (Red Hat 10.2.1-3)]
jinja version = 2.11.2
libyaml = False
```
##### CONFIGURATION
Defaults
##### OS / ENVIRONMENT
Fedora 33
##### STEPS TO REPRODUCE
I have also replicated this in the https://quay.io/repository/ansible/ansible-runner image; the steps below are the reproduction as confirmed on my local machine.
The folder `~/repos/test-playbooks` is cloned from https://github.com/ansible/test-playbooks/
The requirements file is designed to ensure that I pick up all recent bug fixes in the collection; contents of `req.yml`:
```yaml
collections:
- name: https://github.com/ansible-collections/community.general.git
type: git
version: main
```
Reproduction steps:
```
mkdir test
cd test
python3 -m venv my_linode
source my_linode/bin/activate
pip3 install linode_api4
pip3 install --no-cache-dir https://github.com/ansible/ansible/archive/devel.tar.gz
ansible-galaxy collection install -r req.yml
ansible-inventory -i ~/repos/test-playbooks/inventories/linode_fqcn.linode.yml --list -vvv
```
##### EXPECTED RESULTS
I expect to get an error from the linode library along the lines of "You are not authenticated because you did not provide a token"
##### ACTUAL RESULTS
```
(my_linode) [alancoding@alan-red-hat test]$ ansible-inventory -i ~/repos/test-playbooks/inventories/linode_fqcn.linode.yml --list -vvv
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying
the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable
at any point.
ansible-inventory 2.11.0.dev0
config file = None
configured module search path = ['/home/alancoding/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible
ansible collection location = /home/alancoding/.ansible/collections:/usr/share/ansible/collections
executable location = /home/alancoding/test/my_linode/bin/ansible-inventory
python version = 3.9.0 (default, Oct 6 2020, 00:00:00) [GCC 10.2.1 20200826 (Red Hat 10.2.1-3)]
jinja version = 2.11.2
libyaml = False
No config file found; using defaults
host_list declined parsing /home/alancoding/repos/test-playbooks/inventories/linode_fqcn.linode.yml as it did not pass its verify_file() method
script declined parsing /home/alancoding/repos/test-playbooks/inventories/linode_fqcn.linode.yml as it did not pass its verify_file() method
toml declined parsing /home/alancoding/repos/test-playbooks/inventories/linode_fqcn.linode.yml as it did not pass its verify_file() method
[WARNING]: * Failed to parse /home/alancoding/repos/test-playbooks/inventories/linode_fqcn.linode.yml with auto plugin: The
'ansible_base' distribution was not found and is required by the application
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible/inventory/manager.py", line 290, in parse_source
plugin.parse(self._inventory, self._loader, source, cache=cache)
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible/plugins/inventory/auto.py", line 50, in parse
plugin = inventory_loader.get(plugin_name)
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible/plugins/loader.py", line 779, in get
return self.get_with_context(name, *args, **kwargs).object
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible/plugins/loader.py", line 799, in get_with_context
self._module_cache[path] = self._load_module_source(name, path)
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible/plugins/loader.py", line 763, in _load_module_source
spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 790, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "/home/alancoding/.ansible/collections/ansible_collections/community/general/plugins/inventory/linode.py", line 64, in <module>
from linode_api4 import LinodeClient
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/linode_api4/__init__.py", line 3, in <module>
from linode_api4.linode_client import LinodeClient
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/linode_api4/linode_client.py", line 6, in <module>
import pkg_resources
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/pkg_resources/__init__.py", line 3257, in <module>
def _initialize_master_working_set():
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/pkg_resources/__init__.py", line 3240, in _call_aside
f(*args, **kwargs)
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/pkg_resources/__init__.py", line 3269, in _initialize_master_working_set
working_set = WorkingSet._build_master()
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/pkg_resources/__init__.py", line 582, in _build_master
ws.require(__requires__)
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/pkg_resources/__init__.py", line 899, in require
needed = self.resolve(parse_requirements(requirements))
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/pkg_resources/__init__.py", line 785, in resolve
raise DistributionNotFound(req, requirers)
[WARNING]: * Failed to parse /home/alancoding/repos/test-playbooks/inventories/linode_fqcn.linode.yml with yaml plugin: Plugin
configuration YAML file, not YAML inventory
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible/inventory/manager.py", line 290, in parse_source
plugin.parse(self._inventory, self._loader, source, cache=cache)
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible/plugins/inventory/yaml.py", line 112, in parse
raise AnsibleParserError('Plugin configuration YAML file, not YAML inventory')
[WARNING]: * Failed to parse /home/alancoding/repos/test-playbooks/inventories/linode_fqcn.linode.yml with ini plugin: Invalid
host pattern 'plugin:' supplied, ending in ':' is not allowed, this character is reserved to provide a port.
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible/inventory/manager.py", line 290, in parse_source
plugin.parse(self._inventory, self._loader, source, cache=cache)
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible/plugins/inventory/ini.py", line 136, in parse
raise AnsibleParserError(e)
[WARNING]: Unable to parse /home/alancoding/repos/test-playbooks/inventories/linode_fqcn.linode.yml as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
{
"_meta": {
"hostvars": {}
},
"all": {
"children": [
"ungrouped"
]
}
}
```
I could not find any existing issues along the lines of "ansible_base distribution was not found", which seems pretty distinctive.
|
https://github.com/ansible/ansible/issues/72900
|
https://github.com/ansible/ansible/pull/72906
|
57c2cc7c7748fb2a315f7e436c84c1fc0f1a03c8
|
6bc1e9f5dd98ec4e700015ee91c08f4ce82831fe
| 2020-12-08T15:05:04Z |
python
| 2020-12-08T18:22:55Z |
lib/ansible/cli/scripts/ansible_cli_stub.py
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# (c) 2012, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# PYTHON_ARGCOMPLETE_OK
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
__requires__ = ['ansible_base']
import errno
import os
import shutil
import sys
import traceback
from ansible import context
from ansible.errors import AnsibleError, AnsibleOptionsError, AnsibleParserError
from ansible.module_utils._text import to_text
# Used for determining if the system is running a new enough python version
# and should only restrict on our documented minimum versions
_PY38_MIN = sys.version_info[:2] >= (3, 8)
_PY3_MIN = sys.version_info[:2] >= (3, 5)
_PY2_MIN = (2, 6) <= sys.version_info[:2] < (3,)
_PY_MIN = _PY3_MIN or _PY2_MIN
if not _PY_MIN:
raise SystemExit('ERROR: Ansible requires a minimum of Python2 version 2.6 or Python3 version 3.5. Current version: %s' % ''.join(sys.version.splitlines()))
class LastResort(object):
# OUTPUT OF LAST RESORT
def display(self, msg, log_only=None):
print(msg, file=sys.stderr)
def error(self, msg, wrap_text=None):
print(msg, file=sys.stderr)
if __name__ == '__main__':
display = LastResort()
try: # bad ANSIBLE_CONFIG or config options can force ugly stacktrace
import ansible.constants as C
from ansible.utils.display import Display, initialize_locale
except AnsibleOptionsError as e:
display.error(to_text(e), wrap_text=False)
sys.exit(5)
initialize_locale()
cli = None
me = os.path.basename(sys.argv[0])
try:
display = Display()
if C.CONTROLLER_PYTHON_WARNING and not _PY38_MIN:
display.deprecated(
(
'Ansible will require Python 3.8 or newer on the controller starting with Ansible 2.12. '
'Current version: %s' % ''.join(sys.version.splitlines())
),
version='2.12',
collection_name='ansible.builtin',
)
display.debug("starting run")
sub = None
target = me.split('-')
if target[-1][0].isdigit():
# Remove any version or python version info as downstreams
# sometimes add that
target = target[:-1]
if len(target) > 1:
sub = target[1]
myclass = "%sCLI" % sub.capitalize()
elif target[0] == 'ansible':
sub = 'adhoc'
myclass = 'AdHocCLI'
else:
raise AnsibleError("Unknown Ansible alias: %s" % me)
try:
mycli = getattr(__import__("ansible.cli.%s" % sub, fromlist=[myclass]), myclass)
except ImportError as e:
# ImportError members have changed in py3
if 'msg' in dir(e):
msg = e.msg
else:
msg = e.message
if msg.endswith(' %s' % sub):
raise AnsibleError("Ansible sub-program not implemented: %s" % me)
else:
raise
b_ansible_dir = os.path.expanduser(os.path.expandvars(b"~/.ansible"))
try:
os.mkdir(b_ansible_dir, 0o700)
except OSError as exc:
if exc.errno != errno.EEXIST:
display.warning("Failed to create the directory '%s': %s"
% (to_text(b_ansible_dir, errors='surrogate_or_replace'),
to_text(exc, errors='surrogate_or_replace')))
else:
display.debug("Created the '%s' directory" % to_text(b_ansible_dir, errors='surrogate_or_replace'))
try:
args = [to_text(a, errors='surrogate_or_strict') for a in sys.argv]
except UnicodeError:
display.error('Command line args are not in utf-8, unable to continue. Ansible currently only understands utf-8')
display.display(u"The full traceback was:\n\n%s" % to_text(traceback.format_exc()))
exit_code = 6
else:
cli = mycli(args)
exit_code = cli.run()
except AnsibleOptionsError as e:
cli.parser.print_help()
display.error(to_text(e), wrap_text=False)
exit_code = 5
except AnsibleParserError as e:
display.error(to_text(e), wrap_text=False)
exit_code = 4
# TQM takes care of these, but leaving comment to reserve the exit codes
# except AnsibleHostUnreachable as e:
# display.error(str(e))
# exit_code = 3
# except AnsibleHostFailed as e:
# display.error(str(e))
# exit_code = 2
except AnsibleError as e:
display.error(to_text(e), wrap_text=False)
exit_code = 1
except KeyboardInterrupt:
display.error("User interrupted execution")
exit_code = 99
except Exception as e:
if C.DEFAULT_DEBUG:
# Show raw stacktraces in debug mode; this also allows pdb to
# enter post-mortem mode.
raise
have_cli_options = bool(context.CLIARGS)
display.error("Unexpected Exception, this is probably a bug: %s" % to_text(e), wrap_text=False)
if not have_cli_options or have_cli_options and context.CLIARGS['verbosity'] > 2:
log_only = False
if hasattr(e, 'orig_exc'):
display.vvv('\nexception type: %s' % to_text(type(e.orig_exc)))
why = to_text(e.orig_exc)
if to_text(e) != why:
display.vvv('\noriginal msg: %s' % why)
else:
display.display("to see the full traceback, use -vvv")
log_only = True
display.display(u"the full traceback was:\n\n%s" % to_text(traceback.format_exc()), log_only=log_only)
exit_code = 250
sys.exit(exit_code)
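# Editor's note (illustrative, not part of the original file): the
# `__requires__ = ['ansible_base']` declaration near the top is what issue
# 72900 trips over. When a plugin later imports pkg_resources (here via
# linode_api4), pkg_resources initializes its master working set and
# re-resolves `__main__.__requires__`, roughly equivalent to:
#
#     import pkg_resources
#     pkg_resources.require('ansible_base')  # raises DistributionNotFound when
#                                            # no installed distribution
#                                            # satisfies that name
#
# which is why the failure only surfaces while an inventory plugin is being
# loaded, not at CLI startup.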
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,900 |
ansible-inventory fails, ansible_base distribution not found
|
##### SUMMARY
Trying to do `ansible-inventory` with an inventory plugin that doesn't appear to do anything wrong results in a traceback saying that it can't find the ansible_base distribution.
(I am aware that the python package name for Ansible itself changed `ansible` -> `ansible-base` -> `ansible-core`)
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-inventory
##### ANSIBLE VERSION
```paste below
(my_linode) [alancoding@alan-red-hat test]$ ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying
the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable
at any point.
ansible 2.11.0.dev0
config file = None
configured module search path = ['/home/alancoding/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible
ansible collection location = /home/alancoding/.ansible/collections:/usr/share/ansible/collections
executable location = /home/alancoding/test/my_linode/bin/ansible
python version = 3.9.0 (default, Oct 6 2020, 00:00:00) [GCC 10.2.1 20200826 (Red Hat 10.2.1-3)]
jinja version = 2.11.2
libyaml = False
```
##### CONFIGURATION
Defaults
##### OS / ENVIRONMENT
Fedora 33
##### STEPS TO REPRODUCE
I have also replicated this in the https://quay.io/repository/ansible/ansible-runner image; the steps below are the reproduction as confirmed on my local machine.
The folder `~/repos/test-playbooks` is cloned from https://github.com/ansible/test-playbooks/
The requirements file is designed to ensure that I pick up all recent bug fixes in the collection; contents of `req.yml`:
```yaml
collections:
- name: https://github.com/ansible-collections/community.general.git
type: git
version: main
```
Reproduction steps:
```
mkdir test
cd test
python3 -m venv my_linode
source my_linode/bin/activate
pip3 install linode_api4
pip3 install --no-cache-dir https://github.com/ansible/ansible/archive/devel.tar.gz
ansible-galaxy collection install -r req.yml
ansible-inventory -i ~/repos/test-playbooks/inventories/linode_fqcn.linode.yml --list -vvv
```
##### EXPECTED RESULTS
I expect to get an error from the linode library along the lines of "You are not authenticated because you did not provide a token"
##### ACTUAL RESULTS
```
(my_linode) [alancoding@alan-red-hat test]$ ansible-inventory -i ~/repos/test-playbooks/inventories/linode_fqcn.linode.yml --list -vvv
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying
the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable
at any point.
ansible-inventory 2.11.0.dev0
config file = None
configured module search path = ['/home/alancoding/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible
ansible collection location = /home/alancoding/.ansible/collections:/usr/share/ansible/collections
executable location = /home/alancoding/test/my_linode/bin/ansible-inventory
python version = 3.9.0 (default, Oct 6 2020, 00:00:00) [GCC 10.2.1 20200826 (Red Hat 10.2.1-3)]
jinja version = 2.11.2
libyaml = False
No config file found; using defaults
host_list declined parsing /home/alancoding/repos/test-playbooks/inventories/linode_fqcn.linode.yml as it did not pass its verify_file() method
script declined parsing /home/alancoding/repos/test-playbooks/inventories/linode_fqcn.linode.yml as it did not pass its verify_file() method
toml declined parsing /home/alancoding/repos/test-playbooks/inventories/linode_fqcn.linode.yml as it did not pass its verify_file() method
[WARNING]: * Failed to parse /home/alancoding/repos/test-playbooks/inventories/linode_fqcn.linode.yml with auto plugin: The
'ansible_base' distribution was not found and is required by the application
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible/inventory/manager.py", line 290, in parse_source
plugin.parse(self._inventory, self._loader, source, cache=cache)
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible/plugins/inventory/auto.py", line 50, in parse
plugin = inventory_loader.get(plugin_name)
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible/plugins/loader.py", line 779, in get
return self.get_with_context(name, *args, **kwargs).object
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible/plugins/loader.py", line 799, in get_with_context
self._module_cache[path] = self._load_module_source(name, path)
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible/plugins/loader.py", line 763, in _load_module_source
spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 790, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "/home/alancoding/.ansible/collections/ansible_collections/community/general/plugins/inventory/linode.py", line 64, in <module>
from linode_api4 import LinodeClient
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/linode_api4/__init__.py", line 3, in <module>
from linode_api4.linode_client import LinodeClient
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/linode_api4/linode_client.py", line 6, in <module>
import pkg_resources
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/pkg_resources/__init__.py", line 3257, in <module>
def _initialize_master_working_set():
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/pkg_resources/__init__.py", line 3240, in _call_aside
f(*args, **kwargs)
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/pkg_resources/__init__.py", line 3269, in _initialize_master_working_set
working_set = WorkingSet._build_master()
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/pkg_resources/__init__.py", line 582, in _build_master
ws.require(__requires__)
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/pkg_resources/__init__.py", line 899, in require
needed = self.resolve(parse_requirements(requirements))
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/pkg_resources/__init__.py", line 785, in resolve
raise DistributionNotFound(req, requirers)
[WARNING]: * Failed to parse /home/alancoding/repos/test-playbooks/inventories/linode_fqcn.linode.yml with yaml plugin: Plugin
configuration YAML file, not YAML inventory
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible/inventory/manager.py", line 290, in parse_source
plugin.parse(self._inventory, self._loader, source, cache=cache)
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible/plugins/inventory/yaml.py", line 112, in parse
raise AnsibleParserError('Plugin configuration YAML file, not YAML inventory')
[WARNING]: * Failed to parse /home/alancoding/repos/test-playbooks/inventories/linode_fqcn.linode.yml with ini plugin: Invalid
host pattern 'plugin:' supplied, ending in ':' is not allowed, this character is reserved to provide a port.
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible/inventory/manager.py", line 290, in parse_source
plugin.parse(self._inventory, self._loader, source, cache=cache)
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible/plugins/inventory/ini.py", line 136, in parse
raise AnsibleParserError(e)
[WARNING]: Unable to parse /home/alancoding/repos/test-playbooks/inventories/linode_fqcn.linode.yml as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
{
"_meta": {
"hostvars": {}
},
"all": {
"children": [
"ungrouped"
]
}
}
```
I could not find any existing issues along the lines of "ansible_base distribution was not found", which seems pretty distinctive.
|
https://github.com/ansible/ansible/issues/72900
|
https://github.com/ansible/ansible/pull/72906
|
57c2cc7c7748fb2a315f7e436c84c1fc0f1a03c8
|
6bc1e9f5dd98ec4e700015ee91c08f4ce82831fe
| 2020-12-08T15:05:04Z |
python
| 2020-12-08T18:22:55Z |
lib/ansible/cli/scripts/ansible_connection_cli_stub.py
|
#!/usr/bin/env python
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
__requires__ = ['ansible_base']
import fcntl
import hashlib
import os
import signal
import socket
import sys
import time
import traceback
import errno
import json
from contextlib import contextmanager
from ansible import constants as C
from ansible.module_utils._text import to_bytes, to_text
from ansible.module_utils.six import PY3
from ansible.module_utils.six.moves import cPickle, StringIO
from ansible.module_utils.connection import Connection, ConnectionError, send_data, recv_data
from ansible.module_utils.service import fork_process
from ansible.parsing.ajson import AnsibleJSONEncoder, AnsibleJSONDecoder
from ansible.playbook.play_context import PlayContext
from ansible.plugins.loader import connection_loader
from ansible.utils.path import unfrackpath, makedirs_safe
from ansible.utils.display import Display
from ansible.utils.jsonrpc import JsonRpcServer
def read_stream(byte_stream):
size = int(byte_stream.readline().strip())
data = byte_stream.read(size)
if len(data) < size:
raise Exception("EOF found before data was complete")
data_hash = to_text(byte_stream.readline().strip())
if data_hash != hashlib.sha1(data).hexdigest():
raise Exception("Read {0} bytes, but data did not match checksum".format(size))
# restore escaped loose \r characters
data = data.replace(br'\r', b'\r')
return data
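# Editor's note: a hypothetical write-side counterpart (not in the original
# file), shown to make the framing read_stream() expects explicit: a decimal
# size line, the payload with loose \r bytes escaped, then a sha1 hexdigest
# line. Note the digest covers the escaped payload, matching the check above.
#
#     def write_stream(byte_stream, data):
#         data = data.replace(b'\r', br'\r')  # mirror of the unescape in read_stream
#         byte_stream.write(b'%d\n' % len(data))
#         byte_stream.write(data)
#         byte_stream.write(to_bytes(hashlib.sha1(data).hexdigest()) + b'\n')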
@contextmanager
def file_lock(lock_path):
"""
Uses contextmanager to create and release a file lock based on the
given path. This allows us to create locks using `with file_lock()`
to prevent deadlocks related to failure to unlock properly.
"""
lock_fd = os.open(lock_path, os.O_RDWR | os.O_CREAT, 0o600)
fcntl.lockf(lock_fd, fcntl.LOCK_EX)
yield
fcntl.lockf(lock_fd, fcntl.LOCK_UN)
os.close(lock_fd)
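# Usage sketch (mirrors main() below): serialize one-time socket setup
# across concurrent ansible-connection processes:
#
# with file_lock(lock_path):
#     if not os.path.exists(socket_path):
#         ...  # only the first process creates and binds the listener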
class ConnectionProcess(object):
'''
The connection process wraps around a Connection object that manages
the connection to a remote device that persists over the playbook
'''
def __init__(self, fd, play_context, socket_path, original_path, task_uuid=None, ansible_playbook_pid=None):
self.play_context = play_context
self.socket_path = socket_path
self.original_path = original_path
self._task_uuid = task_uuid
self.fd = fd
self.exception = None
self.srv = JsonRpcServer()
self.sock = None
self.connection = None
self._ansible_playbook_pid = ansible_playbook_pid
def start(self, variables):
try:
messages = list()
result = {}
messages.append(('vvvv', 'control socket path is %s' % self.socket_path))
# If this is a relative path (~ gets expanded later) then plug the
# key's path on to the directory we originally came from, so we can
# find it now that our cwd is /
if self.play_context.private_key_file and self.play_context.private_key_file[0] not in '~/':
self.play_context.private_key_file = os.path.join(self.original_path, self.play_context.private_key_file)
self.connection = connection_loader.get(self.play_context.connection, self.play_context, '/dev/null',
task_uuid=self._task_uuid, ansible_playbook_pid=self._ansible_playbook_pid)
self.connection.set_options(var_options=variables)
self.connection._socket_path = self.socket_path
self.srv.register(self.connection)
messages.extend([('vvvv', msg) for msg in sys.stdout.getvalue().splitlines()])
self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
self.sock.bind(self.socket_path)
self.sock.listen(1)
messages.append(('vvvv', 'local domain socket listeners started successfully'))
except Exception as exc:
messages.extend(self.connection.pop_messages())
result['error'] = to_text(exc)
result['exception'] = traceback.format_exc()
finally:
result['messages'] = messages
self.fd.write(json.dumps(result, cls=AnsibleJSONEncoder))
self.fd.close()
def run(self):
try:
while not self.connection._conn_closed:
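# SIGALRM aborts a stalled accept() via connect_timeout(), while SIGTERM is
# routed through handler() so a terminated process still shuts down cleanly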
signal.signal(signal.SIGALRM, self.connect_timeout)
signal.signal(signal.SIGTERM, self.handler)
signal.alarm(self.connection.get_option('persistent_connect_timeout'))
self.exception = None
(s, addr) = self.sock.accept()
signal.alarm(0)
signal.signal(signal.SIGALRM, self.command_timeout)
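# once a client is connected, SIGALRM guards individual requests instead;
# it is re-armed with persistent_command_timeout before each request is handled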
while True:
data = recv_data(s)
if not data:
break
log_messages = self.connection.get_option('persistent_log_messages')
if log_messages:
display.display("jsonrpc request: %s" % data, log_only=True)
request = json.loads(to_text(data, errors='surrogate_or_strict'))
if request.get('method') == "exec_command" and not self.connection.connected:
self.connection._connect()
signal.alarm(self.connection.get_option('persistent_command_timeout'))
resp = self.srv.handle_request(data)
signal.alarm(0)
if log_messages:
display.display("jsonrpc response: %s" % resp, log_only=True)
send_data(s, to_bytes(resp))
s.close()
except Exception as e:
# socket.accept() will raise EINTR if the socket.close() is called
if hasattr(e, 'errno'):
if e.errno != errno.EINTR:
self.exception = traceback.format_exc()
else:
self.exception = traceback.format_exc()
finally:
# allow time for any exception msg sent over the socket to be received at the other end before shutting down
time.sleep(0.1)
# when done, close the connection properly and cleanup the socket file so it can be recreated
self.shutdown()
def connect_timeout(self, signum, frame):
msg = 'persistent connection idle timeout triggered, timeout value is %s secs.\nSee the timeout setting options in the Network Debug and ' \
'Troubleshooting Guide.' % self.connection.get_option('persistent_connect_timeout')
display.display(msg, log_only=True)
raise Exception(msg)
def command_timeout(self, signum, frame):
msg = 'command timeout triggered, timeout value is %s secs.\nSee the timeout setting options in the Network Debug and Troubleshooting Guide.'\
% self.connection.get_option('persistent_command_timeout')
display.display(msg, log_only=True)
raise Exception(msg)
def handler(self, signum, frame):
msg = 'signal handler called with signal %s.' % signum
display.display(msg, log_only=True)
raise Exception(msg)
def shutdown(self):
""" Shuts down the local domain socket
"""
lock_path = unfrackpath("%s/.ansible_pc_lock_%s" % os.path.split(self.socket_path))
if os.path.exists(self.socket_path):
try:
if self.sock:
self.sock.close()
if self.connection:
self.connection.close()
if self.connection.get_option("persistent_log_messages"):
for _level, message in self.connection.pop_messages():
display.display(message, log_only=True)
except Exception:
pass
finally:
if os.path.exists(self.socket_path):
os.remove(self.socket_path)
setattr(self.connection, '_socket_path', None)
setattr(self.connection, '_connected', False)
if os.path.exists(lock_path):
os.remove(lock_path)
display.display('shutdown complete', log_only=True)
def main():
""" Called to initiate the connect to the remote device
"""
rc = 0
result = {}
messages = list()
socket_path = None
# Need stdin as a byte stream
if PY3:
stdin = sys.stdin.buffer
else:
stdin = sys.stdin
# Note: update the below log capture code after Display.display() is refactored.
saved_stdout = sys.stdout
sys.stdout = StringIO()
try:
# read the play context data via stdin, which means depickling it
vars_data = read_stream(stdin)
init_data = read_stream(stdin)
if PY3:
pc_data = cPickle.loads(init_data, encoding='bytes')
variables = cPickle.loads(vars_data, encoding='bytes')
else:
pc_data = cPickle.loads(init_data)
variables = cPickle.loads(vars_data)
play_context = PlayContext()
play_context.deserialize(pc_data)
display.verbosity = play_context.verbosity
except Exception as e:
rc = 1
result.update({
'error': to_text(e),
'exception': traceback.format_exc()
})
if rc == 0:
ssh = connection_loader.get('ssh', class_only=True)
ansible_playbook_pid = sys.argv[1]
task_uuid = sys.argv[2]
cp = ssh._create_control_path(play_context.remote_addr, play_context.port, play_context.remote_user, play_context.connection, ansible_playbook_pid)
# create the persistent connection dir if need be and create the paths
# which we will be using later
tmp_path = unfrackpath(C.PERSISTENT_CONTROL_PATH_DIR)
makedirs_safe(tmp_path)
socket_path = unfrackpath(cp % dict(directory=tmp_path))
lock_path = unfrackpath("%s/.ansible_pc_lock_%s" % os.path.split(socket_path))
with file_lock(lock_path):
if not os.path.exists(socket_path):
messages.append(('vvvv', 'local domain socket does not exist, starting it'))
original_path = os.getcwd()
r, w = os.pipe()
pid = fork_process()
if pid == 0:
try:
os.close(r)
wfd = os.fdopen(w, 'w')
process = ConnectionProcess(wfd, play_context, socket_path, original_path, task_uuid, ansible_playbook_pid)
process.start(variables)
except Exception:
messages.append(('error', traceback.format_exc()))
rc = 1
if rc == 0:
process.run()
else:
process.shutdown()
sys.exit(rc)
else:
os.close(w)
rfd = os.fdopen(r, 'r')
data = json.loads(rfd.read(), cls=AnsibleJSONDecoder)
messages.extend(data.pop('messages'))
result.update(data)
else:
messages.append(('vvvv', 'found existing local domain socket, using it!'))
conn = Connection(socket_path)
conn.set_options(var_options=variables)
pc_data = to_text(init_data)
try:
conn.update_play_context(pc_data)
conn.set_check_prompt(task_uuid)
except Exception as exc:
# Only network_cli has update_play_context and set_check_prompt, so missing them is
# not fatal, e.g. netconf
if isinstance(exc, ConnectionError) and getattr(exc, 'code', None) == -32601:
pass
else:
result.update({
'error': to_text(exc),
'exception': traceback.format_exc()
})
if os.path.exists(socket_path):
messages.extend(Connection(socket_path).pop_messages())
messages.append(('vvvv', sys.stdout.getvalue()))
result.update({
'messages': messages,
'socket_path': socket_path
})
sys.stdout = saved_stdout
if 'exception' in result:
rc = 1
sys.stderr.write(json.dumps(result, cls=AnsibleJSONEncoder))
else:
rc = 0
sys.stdout.write(json.dumps(result, cls=AnsibleJSONEncoder))
sys.exit(rc)
if __name__ == '__main__':
display = Display()
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,900 |
ansible-inventory fails, ansible_base distribution not found
|
##### SUMMARY
Running `ansible-inventory` with an inventory plugin that doesn't appear to do anything wrong results in a traceback saying that it can't find the ansible_base distribution.
(I am aware that the python package name for Ansible itself changed `ansible` -> `ansible-base` -> `ansible-core`)
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-inventory
##### ANSIBLE VERSION
```paste below
(my_linode) [alancoding@alan-red-hat test]$ ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying
the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable
at any point.
ansible 2.11.0.dev0
config file = None
configured module search path = ['/home/alancoding/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible
ansible collection location = /home/alancoding/.ansible/collections:/usr/share/ansible/collections
executable location = /home/alancoding/test/my_linode/bin/ansible
python version = 3.9.0 (default, Oct 6 2020, 00:00:00) [GCC 10.2.1 20200826 (Red Hat 10.2.1-3)]
jinja version = 2.11.2
libyaml = False
```
##### CONFIGURATION
Defaults
##### OS / ENVIRONMENT
Fedora 33
##### STEPS TO REPRODUCE
I have also replicated this in the https://quay.io/repository/ansible/ansible-runner image; this is my reproduction as confirmed on my local machine.
The folder `~/repos/test-playbooks` is cloned from https://github.com/ansible/test-playbooks/
The requirements file is designed to make sure that I pick up all recent bug fixes in the collection. Contents of `req.yml`:
```yaml
collections:
- name: https://github.com/ansible-collections/community.general.git
type: git
version: main
```
Reproduction steps:
```
mkdir test
cd test
python3 -m venv my_linode
source my_linode/bin/activate
pip3 install linode_api4
pip3 install --no-cache-dir https://github.com/ansible/ansible/archive/devel.tar.gz
ansible-galaxy collection install -r req.yml
ansible-inventory -i ~/repos/test-playbooks/inventories/linode_fqcn.linode.yml --list -vvv
```
##### EXPECTED RESULTS
I expect to get an error from the linode library along the lines of "You are not authenticated because you did not provide a token"
##### ACTUAL RESULTS
```
(my_linode) [alancoding@alan-red-hat test]$ ansible-inventory -i ~/repos/test-playbooks/inventories/linode_fqcn.linode.yml --list -vvv
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying
the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable
at any point.
ansible-inventory 2.11.0.dev0
config file = None
configured module search path = ['/home/alancoding/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible
ansible collection location = /home/alancoding/.ansible/collections:/usr/share/ansible/collections
executable location = /home/alancoding/test/my_linode/bin/ansible-inventory
python version = 3.9.0 (default, Oct 6 2020, 00:00:00) [GCC 10.2.1 20200826 (Red Hat 10.2.1-3)]
jinja version = 2.11.2
libyaml = False
No config file found; using defaults
host_list declined parsing /home/alancoding/repos/test-playbooks/inventories/linode_fqcn.linode.yml as it did not pass its verify_file() method
script declined parsing /home/alancoding/repos/test-playbooks/inventories/linode_fqcn.linode.yml as it did not pass its verify_file() method
toml declined parsing /home/alancoding/repos/test-playbooks/inventories/linode_fqcn.linode.yml as it did not pass its verify_file() method
[WARNING]: * Failed to parse /home/alancoding/repos/test-playbooks/inventories/linode_fqcn.linode.yml with auto plugin: The
'ansible_base' distribution was not found and is required by the application
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible/inventory/manager.py", line 290, in parse_source
plugin.parse(self._inventory, self._loader, source, cache=cache)
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible/plugins/inventory/auto.py", line 50, in parse
plugin = inventory_loader.get(plugin_name)
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible/plugins/loader.py", line 779, in get
return self.get_with_context(name, *args, **kwargs).object
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible/plugins/loader.py", line 799, in get_with_context
self._module_cache[path] = self._load_module_source(name, path)
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible/plugins/loader.py", line 763, in _load_module_source
spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 790, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "/home/alancoding/.ansible/collections/ansible_collections/community/general/plugins/inventory/linode.py", line 64, in <module>
from linode_api4 import LinodeClient
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/linode_api4/__init__.py", line 3, in <module>
from linode_api4.linode_client import LinodeClient
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/linode_api4/linode_client.py", line 6, in <module>
import pkg_resources
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/pkg_resources/__init__.py", line 3257, in <module>
def _initialize_master_working_set():
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/pkg_resources/__init__.py", line 3240, in _call_aside
f(*args, **kwargs)
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/pkg_resources/__init__.py", line 3269, in _initialize_master_working_set
working_set = WorkingSet._build_master()
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/pkg_resources/__init__.py", line 582, in _build_master
ws.require(__requires__)
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/pkg_resources/__init__.py", line 899, in require
needed = self.resolve(parse_requirements(requirements))
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/pkg_resources/__init__.py", line 785, in resolve
raise DistributionNotFound(req, requirers)
[WARNING]: * Failed to parse /home/alancoding/repos/test-playbooks/inventories/linode_fqcn.linode.yml with yaml plugin: Plugin
configuration YAML file, not YAML inventory
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible/inventory/manager.py", line 290, in parse_source
plugin.parse(self._inventory, self._loader, source, cache=cache)
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible/plugins/inventory/yaml.py", line 112, in parse
raise AnsibleParserError('Plugin configuration YAML file, not YAML inventory')
[WARNING]: * Failed to parse /home/alancoding/repos/test-playbooks/inventories/linode_fqcn.linode.yml with ini plugin: Invalid
host pattern 'plugin:' supplied, ending in ':' is not allowed, this character is reserved to provide a port.
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible/inventory/manager.py", line 290, in parse_source
plugin.parse(self._inventory, self._loader, source, cache=cache)
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible/plugins/inventory/ini.py", line 136, in parse
raise AnsibleParserError(e)
[WARNING]: Unable to parse /home/alancoding/repos/test-playbooks/inventories/linode_fqcn.linode.yml as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
{
"_meta": {
"hostvars": {}
},
"all": {
"children": [
"ungrouped"
]
}
}
```
I could not find any existing issues along the lines of "ansible_base distribution was not found", which seems pretty distinctive.
|
https://github.com/ansible/ansible/issues/72900
|
https://github.com/ansible/ansible/pull/72906
|
57c2cc7c7748fb2a315f7e436c84c1fc0f1a03c8
|
6bc1e9f5dd98ec4e700015ee91c08f4ce82831fe
| 2020-12-08T15:05:04Z |
python
| 2020-12-08T18:22:55Z |
test/lib/ansible_test/_internal/executor.py
|
"""Execute Ansible tests."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
import os
import datetime
import re
import time
import textwrap
import functools
import hashlib
import difflib
import filecmp
import random
import string
import shutil
from . import types as t
from .thread import (
WrappedThread,
)
from .core_ci import (
AnsibleCoreCI,
SshKey,
)
from .manage_ci import (
ManageWindowsCI,
ManageNetworkCI,
)
from .cloud import (
cloud_filter,
cloud_init,
get_cloud_environment,
get_cloud_platforms,
CloudEnvironmentConfig,
)
from .io import (
make_dirs,
open_text_file,
read_binary_file,
read_text_file,
write_text_file,
)
from .util import (
ApplicationWarning,
ApplicationError,
SubprocessError,
display,
remove_tree,
find_executable,
raw_command,
get_available_port,
generate_pip_command,
find_python,
cmd_quote,
ANSIBLE_LIB_ROOT,
ANSIBLE_TEST_DATA_ROOT,
ANSIBLE_TEST_CONFIG_ROOT,
get_ansible_version,
tempdir,
open_zipfile,
SUPPORTED_PYTHON_VERSIONS,
str_to_version,
)
from .util_common import (
get_docker_completion,
get_network_settings,
get_remote_completion,
get_python_path,
intercept_command,
named_temporary_file,
run_command,
write_json_test_results,
ResultType,
handle_layout_messages,
)
from .docker_util import (
docker_pull,
docker_run,
docker_available,
docker_rm,
get_docker_container_id,
get_docker_container_ip,
get_docker_hostname,
get_docker_preferred_network_name,
is_docker_user_defined_network,
)
from .ansible_util import (
ansible_environment,
check_pyyaml,
)
from .target import (
IntegrationTarget,
walk_internal_targets,
walk_posix_integration_targets,
walk_network_integration_targets,
walk_windows_integration_targets,
TIntegrationTarget,
)
from .ci import (
get_ci_provider,
)
from .classification import (
categorize_changes,
)
from .config import (
TestConfig,
EnvironmentConfig,
IntegrationConfig,
NetworkIntegrationConfig,
PosixIntegrationConfig,
ShellConfig,
WindowsIntegrationConfig,
TIntegrationConfig,
)
from .metadata import (
ChangeDescription,
)
from .integration import (
integration_test_environment,
integration_test_config_file,
setup_common_temp_dir,
get_inventory_relative_path,
check_inventory,
delegate_inventory,
)
from .data import (
data_context,
)
HTTPTESTER_HOSTS = (
'ansible.http.tests',
'sni1.ansible.http.tests',
'fail.ansible.http.tests',
)
def check_startup():
"""Checks to perform at startup before running commands."""
check_legacy_modules()
def check_legacy_modules():
"""Detect conflicts with legacy core/extras module directories to avoid problems later."""
for directory in 'core', 'extras':
path = 'lib/ansible/modules/%s' % directory
for root, _dir_names, file_names in os.walk(path):
if file_names:
# the directory shouldn't exist, but if it does, it must contain no files
raise ApplicationError('Files prohibited in "%s". '
'These are most likely legacy modules from version 2.2 or earlier.' % root)
def create_shell_command(command):
"""
:type command: list[str]
:rtype: list[str]
"""
optional_vars = (
'TERM',
)
cmd = ['/usr/bin/env']
cmd += ['%s=%s' % (var, os.environ[var]) for var in optional_vars if var in os.environ]
cmd += command
return cmd
def get_setuptools_version(args, python): # type: (EnvironmentConfig, str) -> t.Tuple[int]
"""Return the setuptools version for the given python."""
try:
return str_to_version(raw_command([python, '-c', 'import setuptools; print(setuptools.__version__)'], capture=True)[0])
except SubprocessError:
if args.explain:
return tuple() # ignore errors in explain mode in case setuptools is not already installed
raise
def get_cryptography_requirement(args, python_version): # type: (EnvironmentConfig, str) -> str
"""
Return the correct cryptography requirement for the given python version.
The version of cryptography installed depends on the python version and setuptools version.
"""
python = find_python(python_version)
setuptools_version = get_setuptools_version(args, python)
if setuptools_version >= (18, 5):
if python_version == '2.6':
# cryptography 2.2+ requires python 2.7+
# see https://github.com/pyca/cryptography/blob/master/CHANGELOG.rst#22---2018-03-19
cryptography = 'cryptography < 2.2'
else:
cryptography = 'cryptography'
else:
# cryptography 2.1+ requires setuptools 18.5+
# see https://github.com/pyca/cryptography/blob/62287ae18383447585606b9d0765c0f1b8a9777c/setup.py#L26
cryptography = 'cryptography < 2.1'
return cryptography
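# worked examples of the selection above (versions illustrative):
#   setuptools 40.x + python 2.6 -> 'cryptography < 2.2'
#   setuptools 40.x + python 3.6 -> 'cryptography'
#   setuptools 11.x + any python -> 'cryptography < 2.1'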
def install_command_requirements(args, python_version=None, context=None, enable_pyyaml_check=False):
"""
:type args: EnvironmentConfig
:type python_version: str | None
:type context: str | None
:type enable_pyyaml_check: bool
"""
if not args.explain:
make_dirs(ResultType.COVERAGE.path)
make_dirs(ResultType.DATA.path)
if isinstance(args, ShellConfig):
if args.raw:
return
generate_egg_info(args)
if not args.requirements:
return
if isinstance(args, ShellConfig):
return
packages = []
if isinstance(args, TestConfig):
if args.coverage:
packages.append('coverage')
if args.junit:
packages.append('junit-xml')
if not python_version:
python_version = args.python_version
pip = generate_pip_command(find_python(python_version))
# skip packages which have already been installed for python_version
try:
package_cache = install_command_requirements.package_cache
except AttributeError:
package_cache = install_command_requirements.package_cache = {}
installed_packages = package_cache.setdefault(python_version, set())
skip_packages = [package for package in packages if package in installed_packages]
for package in skip_packages:
packages.remove(package)
installed_packages.update(packages)
if args.command != 'sanity':
install_ansible_test_requirements(args, pip)
# make sure setuptools is available before trying to install cryptography
# the installed version of setuptools affects the version of cryptography to install
run_command(args, generate_pip_install(pip, '', packages=['setuptools']))
# install the latest cryptography version that the current requirements can support
# use a custom constraints file to avoid the normal constraints file overriding the chosen version of cryptography
# if not installed here, later install commands may try to install an unsupported version due to the presence of older setuptools
# this is done instead of upgrading setuptools to allow tests to function with older distribution provided versions of setuptools
run_command(args, generate_pip_install(pip, '',
packages=[get_cryptography_requirement(args, python_version)],
constraints=os.path.join(ANSIBLE_TEST_DATA_ROOT, 'cryptography-constraints.txt')))
commands = [generate_pip_install(pip, args.command, packages=packages, context=context)]
if isinstance(args, IntegrationConfig):
for cloud_platform in get_cloud_platforms(args):
commands.append(generate_pip_install(pip, '%s.cloud.%s' % (args.command, cloud_platform)))
commands = [cmd for cmd in commands if cmd]
if not commands:
return # no need to detect changes or run pip check since we are not making any changes
# only look for changes when more than one requirements file is needed
detect_pip_changes = len(commands) > 1
# first pass to install requirements, changes expected unless environment is already set up
install_ansible_test_requirements(args, pip)
changes = run_pip_commands(args, pip, commands, detect_pip_changes)
if changes:
# second pass to check for conflicts in requirements, changes are not expected here
changes = run_pip_commands(args, pip, commands, detect_pip_changes)
if changes:
raise ApplicationError('Conflicts detected in requirements. The following commands reported changes during verification:\n%s' %
'\n'.join((' '.join(cmd_quote(c) for c in cmd) for cmd in changes)))
if args.pip_check:
# ask pip to check for conflicts between installed packages
try:
run_command(args, pip + ['check', '--disable-pip-version-check'], capture=True)
except SubprocessError as ex:
if ex.stderr.strip() == 'ERROR: unknown command "check"':
display.warning('Cannot check pip requirements for conflicts because "pip check" is not supported.')
else:
raise
if enable_pyyaml_check:
# pyyaml may have been one of the requirements that was installed, so perform an optional check for it
check_pyyaml(args, python_version, required=False)
def install_ansible_test_requirements(args, pip): # type: (EnvironmentConfig, t.List[str]) -> None
"""Install requirements for ansible-test for the given pip if not already installed."""
try:
installed = install_command_requirements.installed
except AttributeError:
installed = install_command_requirements.installed = set()
if tuple(pip) in installed:
return
# make sure basic ansible-test requirements are met, including making sure that pip is recent enough to support constraints
# virtualenvs created by older distributions may include very old pip versions, such as those created in the centos6 test container (pip 6.0.8)
run_command(args, generate_pip_install(pip, 'ansible-test', use_constraints=False))
installed.add(tuple(pip))
def run_pip_commands(args, pip, commands, detect_pip_changes=False):
"""
:type args: EnvironmentConfig
:type pip: list[str]
:type commands: list[list[str]]
:type detect_pip_changes: bool
:rtype: list[list[str]]
"""
changes = []
after_list = pip_list(args, pip) if detect_pip_changes else None
for cmd in commands:
if not cmd:
continue
before_list = after_list
run_command(args, cmd)
after_list = pip_list(args, pip) if detect_pip_changes else None
if before_list != after_list:
changes.append(cmd)
return changes
def pip_list(args, pip):
"""
:type args: EnvironmentConfig
:type pip: list[str]
:rtype: str
"""
stdout = run_command(args, pip + ['list'], capture=True)[0]
return stdout
def generate_egg_info(args):
"""
:type args: EnvironmentConfig
"""
if args.explain:
return
ansible_version = get_ansible_version()
# inclusion of the version number in the path is optional
# see: https://setuptools.readthedocs.io/en/latest/formats.html#filename-embedded-metadata
egg_info_path = ANSIBLE_LIB_ROOT + '_base-%s.egg-info' % ansible_version
if os.path.exists(egg_info_path):
return
egg_info_path = ANSIBLE_LIB_ROOT + '_base.egg-info'
if os.path.exists(egg_info_path):
return
# minimal PKG-INFO stub following the format defined in PEP 241
# required for older setuptools versions to avoid a traceback when importing pkg_resources from packages like cryptography
# newer setuptools versions are happy with an empty directory
# including a stub here means we don't need to locate the existing file or have setup.py generate it when running from source
pkg_info = '''
Metadata-Version: 1.0
Name: ansible
Version: %s
Platform: UNKNOWN
Summary: Radically simple IT automation
Author-email: [email protected]
License: GPLv3+
''' % get_ansible_version()
pkg_info_path = os.path.join(egg_info_path, 'PKG-INFO')
write_text_file(pkg_info_path, pkg_info.lstrip(), create_directories=True)
def generate_pip_install(pip, command, packages=None, constraints=None, use_constraints=True, context=None):
"""
:type pip: list[str]
:type command: str
:type packages: list[str] | None
:type constraints: str | None
:type use_constraints: bool
:type context: str | None
:rtype: list[str] | None
"""
constraints = constraints or os.path.join(ANSIBLE_TEST_DATA_ROOT, 'requirements', 'constraints.txt')
requirements = os.path.join(ANSIBLE_TEST_DATA_ROOT, 'requirements', '%s.txt' % ('%s.%s' % (command, context) if context else command))
content_constraints = None
options = []
if os.path.exists(requirements) and os.path.getsize(requirements):
options += ['-r', requirements]
if command == 'sanity' and data_context().content.is_ansible:
requirements = os.path.join(data_context().content.sanity_path, 'code-smell', '%s.requirements.txt' % context)
if os.path.exists(requirements) and os.path.getsize(requirements):
options += ['-r', requirements]
if command == 'units':
requirements = os.path.join(data_context().content.unit_path, 'requirements.txt')
if os.path.exists(requirements) and os.path.getsize(requirements):
options += ['-r', requirements]
content_constraints = os.path.join(data_context().content.unit_path, 'constraints.txt')
if command in ('integration', 'windows-integration', 'network-integration'):
requirements = os.path.join(data_context().content.integration_path, 'requirements.txt')
if os.path.exists(requirements) and os.path.getsize(requirements):
options += ['-r', requirements]
requirements = os.path.join(data_context().content.integration_path, '%s.requirements.txt' % command)
if os.path.exists(requirements) and os.path.getsize(requirements):
options += ['-r', requirements]
content_constraints = os.path.join(data_context().content.integration_path, 'constraints.txt')
if command.startswith('integration.cloud.'):
content_constraints = os.path.join(data_context().content.integration_path, 'constraints.txt')
if packages:
options += packages
if not options:
return None
if use_constraints:
if content_constraints and os.path.exists(content_constraints) and os.path.getsize(content_constraints):
# listing content constraints first gives them priority over constraints provided by ansible-test
options.extend(['-c', content_constraints])
options.extend(['-c', constraints])
return pip + ['install', '--disable-pip-version-check'] + options
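# example shape of the result (paths abbreviated, assuming only the bundled
# requirements files exist):
#   generate_pip_install(['pip3'], 'units') ->
#   ['pip3', 'install', '--disable-pip-version-check',
#    '-r', '.../requirements/units.txt', '-c', '.../requirements/constraints.txt']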
def command_shell(args):
"""
:type args: ShellConfig
"""
if args.delegate:
raise Delegate()
install_command_requirements(args)
if args.inject_httptester:
inject_httptester(args)
cmd = create_shell_command(['bash', '-i'])
run_command(args, cmd)
def command_posix_integration(args):
"""
:type args: PosixIntegrationConfig
"""
handle_layout_messages(data_context().content.integration_messages)
inventory_relative_path = get_inventory_relative_path(args)
inventory_path = os.path.join(ANSIBLE_TEST_DATA_ROOT, os.path.basename(inventory_relative_path))
all_targets = tuple(walk_posix_integration_targets(include_hidden=True))
internal_targets = command_integration_filter(args, all_targets)
command_integration_filtered(args, internal_targets, all_targets, inventory_path)
def command_network_integration(args):
"""
:type args: NetworkIntegrationConfig
"""
handle_layout_messages(data_context().content.integration_messages)
inventory_relative_path = get_inventory_relative_path(args)
template_path = os.path.join(ANSIBLE_TEST_CONFIG_ROOT, os.path.basename(inventory_relative_path)) + '.template'
if args.inventory:
inventory_path = os.path.join(data_context().content.root, data_context().content.integration_path, args.inventory)
else:
inventory_path = os.path.join(data_context().content.root, inventory_relative_path)
if args.no_temp_workdir:
# temporary solution to keep DCI tests working
inventory_exists = os.path.exists(inventory_path)
else:
inventory_exists = os.path.isfile(inventory_path)
if not args.explain and not args.platform and not inventory_exists:
raise ApplicationError(
'Inventory not found: %s\n'
'Use --inventory to specify the inventory path.\n'
'Use --platform to provision resources and generate an inventory file.\n'
'See also inventory template: %s' % (inventory_path, template_path)
)
check_inventory(args, inventory_path)
delegate_inventory(args, inventory_path)
all_targets = tuple(walk_network_integration_targets(include_hidden=True))
internal_targets = command_integration_filter(args, all_targets, init_callback=network_init)
instances = [] # type: t.List[WrappedThread]
if args.platform:
get_python_path(args, args.python_executable) # initialize before starting threads
configs = dict((config['platform_version'], config) for config in args.metadata.instance_config)
for platform_version in args.platform:
platform, version = platform_version.split('/', 1)
config = configs.get(platform_version)
if not config:
continue
instance = WrappedThread(functools.partial(network_run, args, platform, version, config))
instance.daemon = True
instance.start()
instances.append(instance)
while any(instance.is_alive() for instance in instances):
time.sleep(1)
remotes = [instance.wait_for_result() for instance in instances]
inventory = network_inventory(remotes)
display.info('>>> Inventory: %s\n%s' % (inventory_path, inventory.strip()), verbosity=3)
if not args.explain:
write_text_file(inventory_path, inventory)
success = False
try:
command_integration_filtered(args, internal_targets, all_targets, inventory_path)
success = True
finally:
if args.remote_terminate == 'always' or (args.remote_terminate == 'success' and success):
for instance in instances:
instance.result.stop()
def network_init(args, internal_targets): # type: (NetworkIntegrationConfig, t.Tuple[IntegrationTarget, ...]) -> None
"""Initialize platforms for network integration tests."""
if not args.platform:
return
if args.metadata.instance_config is not None:
return
platform_targets = set(a for target in internal_targets for a in target.aliases if a.startswith('network/'))
instances = [] # type: t.List[WrappedThread]
# generate an ssh key (if needed) up front once, instead of for each instance
SshKey(args)
for platform_version in args.platform:
platform, version = platform_version.split('/', 1)
platform_target = 'network/%s/' % platform
if platform_target not in platform_targets:
display.warning('Skipping "%s" because selected tests do not target the "%s" platform.' % (
platform_version, platform))
continue
instance = WrappedThread(functools.partial(network_start, args, platform, version))
instance.daemon = True
instance.start()
instances.append(instance)
while any(instance.is_alive() for instance in instances):
time.sleep(1)
args.metadata.instance_config = [instance.wait_for_result() for instance in instances]
def network_start(args, platform, version):
"""
:type args: NetworkIntegrationConfig
:type platform: str
:type version: str
:rtype: AnsibleCoreCI
"""
core_ci = AnsibleCoreCI(args, platform, version, stage=args.remote_stage, provider=args.remote_provider)
core_ci.start()
return core_ci.save()
def network_run(args, platform, version, config):
"""
:type args: NetworkIntegrationConfig
:type platform: str
:type version: str
:type config: dict[str, str]
:rtype: AnsibleCoreCI
"""
core_ci = AnsibleCoreCI(args, platform, version, stage=args.remote_stage, provider=args.remote_provider, load=False)
core_ci.load(config)
core_ci.wait()
manage = ManageNetworkCI(core_ci)
manage.wait()
return core_ci
def network_inventory(remotes):
"""
:type remotes: list[AnsibleCoreCI]
:rtype: str
"""
groups = dict([(remote.platform, []) for remote in remotes])
net = []
for remote in remotes:
options = dict(
ansible_host=remote.connection.hostname,
ansible_user=remote.connection.username,
ansible_ssh_private_key_file=os.path.abspath(remote.ssh_key.key),
)
settings = get_network_settings(remote.args, remote.platform, remote.version)
options.update(settings.inventory_vars)
groups[remote.platform].append(
'%s %s' % (
remote.name.replace('.', '-'),
' '.join('%s="%s"' % (k, options[k]) for k in sorted(options)),
)
)
net.append(remote.platform)
groups['net:children'] = net
template = ''
for group in groups:
hosts = '\n'.join(groups[group])
template += textwrap.dedent("""
[%s]
%s
""") % (group, hosts)
inventory = template
return inventory
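# illustrative output for a single ios remote (values hypothetical):
#
#   [ios]
#   ios-1-2-3-4 ansible_host="..." ansible_ssh_private_key_file="..." ansible_user="..."
#
#   [net:children]
#   ios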
def command_windows_integration(args):
"""
:type args: WindowsIntegrationConfig
"""
handle_layout_messages(data_context().content.integration_messages)
inventory_relative_path = get_inventory_relative_path(args)
template_path = os.path.join(ANSIBLE_TEST_CONFIG_ROOT, os.path.basename(inventory_relative_path)) + '.template'
if args.inventory:
inventory_path = os.path.join(data_context().content.root, data_context().content.integration_path, args.inventory)
else:
inventory_path = os.path.join(data_context().content.root, inventory_relative_path)
if not args.explain and not args.windows and not os.path.isfile(inventory_path):
raise ApplicationError(
'Inventory not found: %s\n'
'Use --inventory to specify the inventory path.\n'
'Use --windows to provision resources and generate an inventory file.\n'
'See also inventory template: %s' % (inventory_path, template_path)
)
check_inventory(args, inventory_path)
delegate_inventory(args, inventory_path)
all_targets = tuple(walk_windows_integration_targets(include_hidden=True))
internal_targets = command_integration_filter(args, all_targets, init_callback=windows_init)
instances = [] # type: t.List[WrappedThread]
pre_target = None
post_target = None
httptester_id = None
if args.windows:
get_python_path(args, args.python_executable) # initialize before starting threads
configs = dict((config['platform_version'], config) for config in args.metadata.instance_config)
for version in args.windows:
config = configs['windows/%s' % version]
instance = WrappedThread(functools.partial(windows_run, args, version, config))
instance.daemon = True
instance.start()
instances.append(instance)
while any(instance.is_alive() for instance in instances):
time.sleep(1)
remotes = [instance.wait_for_result() for instance in instances]
inventory = windows_inventory(remotes)
display.info('>>> Inventory: %s\n%s' % (inventory_path, inventory.strip()), verbosity=3)
if not args.explain:
write_text_file(inventory_path, inventory)
use_httptester = args.httptester and any('needs/httptester/' in target.aliases for target in internal_targets)
# if running under Docker delegation, the httptester may have already been started
docker_httptester = bool(os.environ.get("HTTPTESTER", False))
if use_httptester and not docker_available() and not docker_httptester:
display.warning('Assuming --disable-httptester since `docker` is not available.')
elif use_httptester:
if docker_httptester:
# we are running in a Docker container that is linked to the httptester container; we just need to
# forward these requests to the linked hostname
first_host = HTTPTESTER_HOSTS[0]
ssh_options = ["-R", "8080:%s:80" % first_host, "-R", "8443:%s:443" % first_host]
else:
# we are running directly and need to start the httptester container ourselves, forward the ports
# from there manually, and set args.inject_httptester so the HTTPTESTER env var is set during the run
args.inject_httptester = True
httptester_id, ssh_options = start_httptester(args)
# to get this SSH command to run in the background we need to pass the background flag (-f) and disable
# the pty allocation (-T)
ssh_options.insert(0, "-fT")
# create a script that will continue to run in the background until the script is deleted; this will
# clean up and close the connection
def forward_ssh_ports(target):
"""
:type target: IntegrationTarget
"""
if 'needs/httptester/' not in target.aliases:
return
for remote in [r for r in remotes if r.version != '2008']:
manage = ManageWindowsCI(remote)
manage.upload(os.path.join(ANSIBLE_TEST_DATA_ROOT, 'setup', 'windows-httptester.ps1'), watcher_path)
# We cannot pass an array of strings with -File so we just use a delimiter for multiple values
script = "powershell.exe -NoProfile -ExecutionPolicy Bypass -File .\\%s -Hosts \"%s\"" \
% (watcher_path, "|".join(HTTPTESTER_HOSTS))
if args.verbosity > 3:
script += " -Verbose"
manage.ssh(script, options=ssh_options, force_pty=False)
def cleanup_ssh_ports(target):
"""
:type target: IntegrationTarget
"""
if 'needs/httptester/' not in target.aliases:
return
for remote in [r for r in remotes if r.version != '2008']:
# delete the tmp file that keeps the http-tester alive
manage = ManageWindowsCI(remote)
manage.ssh("cmd.exe /c \"del %s /F /Q\"" % watcher_path, force_pty=False)
watcher_path = "ansible-test-http-watcher-%s.ps1" % time.time()
pre_target = forward_ssh_ports
post_target = cleanup_ssh_ports
def run_playbook(playbook, run_playbook_vars): # type: (str, t.Dict[str, t.Any]) -> None
playbook_path = os.path.join(ANSIBLE_TEST_DATA_ROOT, 'playbooks', playbook)
command = ['ansible-playbook', '-i', inventory_path, playbook_path, '-e', json.dumps(run_playbook_vars)]
if args.verbosity:
command.append('-%s' % ('v' * args.verbosity))
env = ansible_environment(args)
intercept_command(args, command, '', env, disable_coverage=True)
remote_temp_path = None
if args.coverage and not args.coverage_check:
# Create the remote directory that is writable by everyone. Use Ansible to talk to the remote host.
remote_temp_path = 'C:\\ansible_test_coverage_%s' % time.time()
playbook_vars = {'remote_temp_path': remote_temp_path}
run_playbook('windows_coverage_setup.yml', playbook_vars)
success = False
try:
command_integration_filtered(args, internal_targets, all_targets, inventory_path, pre_target=pre_target,
post_target=post_target, remote_temp_path=remote_temp_path)
success = True
finally:
if httptester_id:
docker_rm(args, httptester_id)
if remote_temp_path:
# Zip up the coverage files that were generated and fetch it back to localhost.
with tempdir() as local_temp_path:
playbook_vars = {'remote_temp_path': remote_temp_path, 'local_temp_path': local_temp_path}
run_playbook('windows_coverage_teardown.yml', playbook_vars)
for filename in os.listdir(local_temp_path):
with open_zipfile(os.path.join(local_temp_path, filename)) as coverage_zip:
coverage_zip.extractall(ResultType.COVERAGE.path)
if args.remote_terminate == 'always' or (args.remote_terminate == 'success' and success):
for instance in instances:
instance.result.stop()
# noinspection PyUnusedLocal
def windows_init(args, internal_targets): # pylint: disable=locally-disabled, unused-argument
"""
:type args: WindowsIntegrationConfig
:type internal_targets: tuple[IntegrationTarget]
"""
if not args.windows:
return
if args.metadata.instance_config is not None:
return
instances = [] # type: t.List[WrappedThread]
for version in args.windows:
instance = WrappedThread(functools.partial(windows_start, args, version))
instance.daemon = True
instance.start()
instances.append(instance)
while any(instance.is_alive() for instance in instances):
time.sleep(1)
args.metadata.instance_config = [instance.wait_for_result() for instance in instances]
def windows_start(args, version):
"""
:type args: WindowsIntegrationConfig
:type version: str
:rtype: AnsibleCoreCI
"""
core_ci = AnsibleCoreCI(args, 'windows', version, stage=args.remote_stage, provider=args.remote_provider)
core_ci.start()
return core_ci.save()
def windows_run(args, version, config):
"""
:type args: WindowsIntegrationConfig
:type version: str
:type config: dict[str, str]
:rtype: AnsibleCoreCI
"""
core_ci = AnsibleCoreCI(args, 'windows', version, stage=args.remote_stage, provider=args.remote_provider, load=False)
core_ci.load(config)
core_ci.wait()
manage = ManageWindowsCI(core_ci)
manage.wait()
return core_ci
def windows_inventory(remotes):
"""
:type remotes: list[AnsibleCoreCI]
:rtype: str
"""
hosts = []
for remote in remotes:
options = dict(
ansible_host=remote.connection.hostname,
ansible_user=remote.connection.username,
ansible_password=remote.connection.password,
ansible_port=remote.connection.port,
)
# used for the connection_windows_ssh test target
if remote.ssh_key:
options["ansible_ssh_private_key_file"] = os.path.abspath(remote.ssh_key.key)
if remote.name == 'windows-2008':
options.update(
# force 2008 to use PSRP for the connection plugin
ansible_connection='psrp',
ansible_psrp_auth='basic',
ansible_psrp_cert_validation='ignore',
)
elif remote.name == 'windows-2016':
options.update(
# force 2016 to use NTLM + HTTP message encryption
ansible_connection='winrm',
ansible_winrm_server_cert_validation='ignore',
ansible_winrm_transport='ntlm',
ansible_winrm_scheme='http',
ansible_port='5985',
)
else:
options.update(
ansible_connection='winrm',
ansible_winrm_server_cert_validation='ignore',
)
hosts.append(
'%s %s' % (
remote.name.replace('/', '_'),
' '.join('%s="%s"' % (k, options[k]) for k in sorted(options)),
)
)
template = """
[windows]
%s
# support winrm binary module tests (temporary solution)
[testhost:children]
windows
"""
template = textwrap.dedent(template)
inventory = template % ('\n'.join(hosts))
return inventory
def command_integration_filter(args, # type: TIntegrationConfig
targets, # type: t.Iterable[TIntegrationTarget]
init_callback=None, # type: t.Callable[[TIntegrationConfig, t.Tuple[TIntegrationTarget, ...]], None]
): # type: (...) -> t.Tuple[TIntegrationTarget, ...]
"""Filter the given integration test targets."""
targets = tuple(target for target in targets if 'hidden/' not in target.aliases)
changes = get_changes_filter(args)
# special behavior when the --changed-all-target target is selected based on changes
if args.changed_all_target in changes:
# act as though the --changed-all-target target was in the include list
if args.changed_all_mode == 'include' and args.changed_all_target not in args.include:
args.include.append(args.changed_all_target)
args.delegate_args += ['--include', args.changed_all_target]
# act as though the --changed-all-target target was in the exclude list
elif args.changed_all_mode == 'exclude' and args.changed_all_target not in args.exclude:
args.exclude.append(args.changed_all_target)
require = args.require + changes
exclude = args.exclude
internal_targets = walk_internal_targets(targets, args.include, exclude, require)
environment_exclude = get_integration_filter(args, internal_targets)
environment_exclude += cloud_filter(args, internal_targets)
if environment_exclude:
exclude += environment_exclude
internal_targets = walk_internal_targets(targets, args.include, exclude, require)
if not internal_targets:
raise AllTargetsSkipped()
if args.start_at and not any(target.name == args.start_at for target in internal_targets):
raise ApplicationError('Start at target matches nothing: %s' % args.start_at)
if init_callback:
init_callback(args, internal_targets)
cloud_init(args, internal_targets)
vars_file_src = os.path.join(data_context().content.root, data_context().content.integration_vars_path)
if os.path.exists(vars_file_src):
def integration_config_callback(files): # type: (t.List[t.Tuple[str, str]]) -> None
"""
Add the integration config vars file to the payload file list.
This will preserve the file during delegation even if the file is ignored by source control.
"""
files.append((vars_file_src, data_context().content.integration_vars_path))
data_context().register_payload_callback(integration_config_callback)
if args.delegate:
raise Delegate(require=require, exclude=exclude, integration_targets=internal_targets)
install_command_requirements(args)
return internal_targets
def command_integration_filtered(args, targets, all_targets, inventory_path, pre_target=None, post_target=None,
remote_temp_path=None):
"""
:type args: IntegrationConfig
:type targets: tuple[IntegrationTarget]
:type all_targets: tuple[IntegrationTarget]
:type inventory_path: str
:type pre_target: (IntegrationTarget) -> None | None
:type post_target: (IntegrationTarget) -> None | None
:type remote_temp_path: str | None
"""
found = False
passed = []
failed = []
targets_iter = iter(targets)
all_targets_dict = dict((target.name, target) for target in all_targets)
setup_errors = []
setup_targets_executed = set()
for target in all_targets:
for setup_target in target.setup_once + target.setup_always:
if setup_target not in all_targets_dict:
setup_errors.append('Target "%s" contains invalid setup target: %s' % (target.name, setup_target))
if setup_errors:
raise ApplicationError('Found %d invalid setup aliases:\n%s' % (len(setup_errors), '\n'.join(setup_errors)))
check_pyyaml(args, args.python_version)
test_dir = os.path.join(ResultType.TMP.path, 'output_dir')
if not args.explain and any('needs/ssh/' in target.aliases for target in targets):
max_tries = 20
display.info('SSH service required for tests. Checking to make sure we can connect.')
for i in range(1, max_tries + 1):
try:
run_command(args, ['ssh', '-o', 'BatchMode=yes', 'localhost', 'id'], capture=True)
display.info('SSH service responded.')
break
except SubprocessError:
if i == max_tries:
raise
seconds = 3
display.warning('SSH service not responding. Waiting %d second(s) before checking again.' % seconds)
time.sleep(seconds)
# Windows is different as Ansible execution is done locally but the host is remote
if args.inject_httptester and not isinstance(args, WindowsIntegrationConfig):
inject_httptester(args)
start_at_task = args.start_at_task
results = {}
current_environment = None # type: t.Optional[EnvironmentDescription]
# common temporary directory path that will be valid on both the controller and the remote
# it must be common because it will be referenced in environment variables that are shared across multiple hosts
common_temp_path = '/tmp/ansible-test-%s' % ''.join(random.choice(string.ascii_letters + string.digits) for _idx in range(8))
setup_common_temp_dir(args, common_temp_path)
try:
for target in targets_iter:
if args.start_at and not found:
found = target.name == args.start_at
if not found:
continue
if args.list_targets:
print(target.name)
continue
tries = 2 if args.retry_on_error else 1
verbosity = args.verbosity
cloud_environment = get_cloud_environment(args, target)
original_environment = current_environment if current_environment else EnvironmentDescription(args)
current_environment = None
display.info('>>> Environment Description\n%s' % original_environment, verbosity=3)
try:
while tries:
tries -= 1
try:
if cloud_environment:
cloud_environment.setup_once()
run_setup_targets(args, test_dir, target.setup_once, all_targets_dict, setup_targets_executed, inventory_path, common_temp_path, False)
start_time = time.time()
run_setup_targets(args, test_dir, target.setup_always, all_targets_dict, setup_targets_executed, inventory_path, common_temp_path, True)
if not args.explain:
# create a fresh test directory for each test target
remove_tree(test_dir)
make_dirs(test_dir)
if pre_target:
pre_target(target)
try:
if target.script_path:
command_integration_script(args, target, test_dir, inventory_path, common_temp_path,
remote_temp_path=remote_temp_path)
else:
command_integration_role(args, target, start_at_task, test_dir, inventory_path,
common_temp_path, remote_temp_path=remote_temp_path)
start_at_task = None
finally:
if post_target:
post_target(target)
end_time = time.time()
results[target.name] = dict(
name=target.name,
type=target.type,
aliases=target.aliases,
modules=target.modules,
run_time_seconds=int(end_time - start_time),
setup_once=target.setup_once,
setup_always=target.setup_always,
coverage=args.coverage,
coverage_label=args.coverage_label,
python_version=args.python_version,
)
break
except SubprocessError:
if cloud_environment:
cloud_environment.on_failure(target, tries)
if not original_environment.validate(target.name, throw=False):
raise
if not tries:
raise
display.warning('Retrying test target "%s" with maximum verbosity.' % target.name)
display.verbosity = args.verbosity = 6
start_time = time.time()
current_environment = EnvironmentDescription(args)
end_time = time.time()
EnvironmentDescription.check(original_environment, current_environment, target.name, throw=True)
results[target.name]['validation_seconds'] = int(end_time - start_time)
passed.append(target)
except Exception as ex:
failed.append(target)
if args.continue_on_error:
display.error(ex)
continue
display.notice('To resume at this test target, use the option: --start-at %s' % target.name)
next_target = next(targets_iter, None)
if next_target:
display.notice('To resume after this test target, use the option: --start-at %s' % next_target.name)
raise
finally:
display.verbosity = args.verbosity = verbosity
finally:
if not args.explain:
if args.coverage:
coverage_temp_path = os.path.join(common_temp_path, ResultType.COVERAGE.name)
coverage_save_path = ResultType.COVERAGE.path
for filename in os.listdir(coverage_temp_path):
shutil.copy(os.path.join(coverage_temp_path, filename), os.path.join(coverage_save_path, filename))
remove_tree(common_temp_path)
result_name = '%s-%s.json' % (
args.command, re.sub(r'[^0-9]', '-', str(datetime.datetime.utcnow().replace(microsecond=0))))
data = dict(
targets=results,
)
write_json_test_results(ResultType.DATA, result_name, data)
if failed:
raise ApplicationError('The %d integration test(s) listed below (out of %d) failed. See error output above for details:\n%s' % (
len(failed), len(passed) + len(failed), '\n'.join(target.name for target in failed)))
def start_httptester(args):
"""
:type args: EnvironmentConfig
:rtype: str, list[str]
"""
# map ports from remote -> localhost -> container
# passing through localhost is only used when ansible-test is not already running inside a docker container
ports = [
dict(
remote=8080,
container=80,
),
dict(
remote=8088,
container=88,
),
dict(
remote=8443,
container=443,
),
dict(
remote=8749,
container=749,
),
]
container_id = get_docker_container_id()
if not container_id:
for item in ports:
item['localhost'] = get_available_port()
docker_pull(args, args.httptester)
httptester_id = run_httptester(args, dict((port['localhost'], port['container']) for port in ports if 'localhost' in port))
if container_id:
container_host = get_docker_container_ip(args, httptester_id)
display.info('Found httptester container address: %s' % container_host, verbosity=1)
else:
container_host = get_docker_hostname()
ssh_options = []
for port in ports:
ssh_options += ['-R', '%d:%s:%d' % (port['remote'], container_host, port.get('localhost', port['container']))]
return httptester_id, ssh_options
def run_httptester(args, ports=None):
"""
:type args: EnvironmentConfig
:type ports: dict[int, int] | None
:rtype: str
"""
options = [
'--detach',
'--env', 'KRB5_PASSWORD=%s' % args.httptester_krb5_password,
]
if ports:
for localhost_port, container_port in ports.items():
options += ['-p', '%d:%d' % (localhost_port, container_port)]
network = get_docker_preferred_network_name(args)
if is_docker_user_defined_network(network):
# network-scoped aliases are only supported for containers in user defined networks
for alias in HTTPTESTER_HOSTS:
options.extend(['--network-alias', alias])
httptester_id = docker_run(args, args.httptester, options=options)[0]
if args.explain:
httptester_id = 'httptester_id'
else:
httptester_id = httptester_id.strip()
return httptester_id
def inject_httptester(args):
"""
:type args: CommonConfig
"""
comment = ' # ansible-test httptester\n'
append_lines = ['127.0.0.1 %s%s' % (host, comment) for host in HTTPTESTER_HOSTS]
hosts_path = '/etc/hosts'
original_lines = read_text_file(hosts_path).splitlines(True)
if not any(line.endswith(comment) for line in original_lines):
write_text_file(hosts_path, ''.join(original_lines + append_lines))
# determine which forwarding mechanism to use
pfctl = find_executable('pfctl', required=False)
iptables = find_executable('iptables', required=False)
if pfctl:
kldload = find_executable('kldload', required=False)
if kldload:
try:
run_command(args, ['kldload', 'pf'], capture=True)
except SubprocessError:
pass # already loaded
rules = '''
rdr pass inet proto tcp from any to any port 80 -> 127.0.0.1 port 8080
rdr pass inet proto tcp from any to any port 88 -> 127.0.0.1 port 8088
rdr pass inet proto tcp from any to any port 443 -> 127.0.0.1 port 8443
rdr pass inet proto tcp from any to any port 749 -> 127.0.0.1 port 8749
'''
cmd = ['pfctl', '-ef', '-']
try:
run_command(args, cmd, capture=True, data=rules)
except SubprocessError:
pass # non-zero exit status on success
elif iptables:
ports = [
(80, 8080),
(88, 8088),
(443, 8443),
(749, 8749),
]
for src, dst in ports:
rule = ['-o', 'lo', '-p', 'tcp', '--dport', str(src), '-j', 'REDIRECT', '--to-port', str(dst)]
try:
# check for existing rule
cmd = ['iptables', '-t', 'nat', '-C', 'OUTPUT'] + rule
run_command(args, cmd, capture=True)
except SubprocessError:
# append rule when it does not exist
cmd = ['iptables', '-t', 'nat', '-A', 'OUTPUT'] + rule
run_command(args, cmd, capture=True)
else:
raise ApplicationError('No supported port forwarding mechanism detected.')
def run_setup_targets(args, test_dir, target_names, targets_dict, targets_executed, inventory_path, temp_path, always):
"""
:type args: IntegrationConfig
:type test_dir: str
:type target_names: list[str]
:type targets_dict: dict[str, IntegrationTarget]
:type targets_executed: set[str]
:type inventory_path: str
:type temp_path: str
:type always: bool
"""
for target_name in target_names:
if not always and target_name in targets_executed:
continue
target = targets_dict[target_name]
if not args.explain:
# create a fresh test directory for each test target
remove_tree(test_dir)
make_dirs(test_dir)
if target.script_path:
command_integration_script(args, target, test_dir, inventory_path, temp_path)
else:
command_integration_role(args, target, None, test_dir, inventory_path, temp_path)
targets_executed.add(target_name)
def integration_environment(args, target, test_dir, inventory_path, ansible_config, env_config):
"""
:type args: IntegrationConfig
:type target: IntegrationTarget
:type test_dir: str
:type inventory_path: str
:type ansible_config: str | None
:type env_config: CloudEnvironmentConfig | None
:rtype: dict[str, str]
"""
env = ansible_environment(args, ansible_config=ansible_config)
if args.inject_httptester:
env.update(dict(
HTTPTESTER='1',
KRB5_PASSWORD=args.httptester_krb5_password,
))
callback_plugins = ['junit'] + (env_config.callback_plugins or [] if env_config else [])
integration = dict(
JUNIT_OUTPUT_DIR=ResultType.JUNIT.path,
ANSIBLE_CALLBACKS_ENABLED=','.join(sorted(set(callback_plugins))),
ANSIBLE_TEST_CI=args.metadata.ci_provider or get_ci_provider().code,
ANSIBLE_TEST_COVERAGE='check' if args.coverage_check else ('yes' if args.coverage else ''),
OUTPUT_DIR=test_dir,
INVENTORY_PATH=os.path.abspath(inventory_path),
)
if args.debug_strategy:
env.update(dict(ANSIBLE_STRATEGY='debug'))
if 'non_local/' in target.aliases:
if args.coverage:
display.warning('Skipping coverage reporting on Ansible modules for non-local test: %s' % target.name)
env.update(dict(ANSIBLE_TEST_REMOTE_INTERPRETER=''))
env.update(integration)
return env
def command_integration_script(args, target, test_dir, inventory_path, temp_path, remote_temp_path=None):
"""
:type args: IntegrationConfig
:type target: IntegrationTarget
:type test_dir: str
:type inventory_path: str
:type temp_path: str
:type remote_temp_path: str | None
"""
display.info('Running %s integration test script' % target.name)
env_config = None
if isinstance(args, PosixIntegrationConfig):
cloud_environment = get_cloud_environment(args, target)
if cloud_environment:
env_config = cloud_environment.get_environment_config()
with integration_test_environment(args, target, inventory_path) as test_env:
cmd = ['./%s' % os.path.basename(target.script_path)]
if args.verbosity:
cmd.append('-' + ('v' * args.verbosity))
env = integration_environment(args, target, test_dir, test_env.inventory_path, test_env.ansible_config, env_config)
cwd = os.path.join(test_env.targets_dir, target.relative_path)
env.update(dict(
# support use of adhoc ansible commands in collections without specifying the fully qualified collection name
ANSIBLE_PLAYBOOK_DIR=cwd,
))
if env_config and env_config.env_vars:
env.update(env_config.env_vars)
with integration_test_config_file(args, env_config, test_env.integration_dir) as config_path:
if config_path:
cmd += ['-e', '@%s' % config_path]
module_coverage = 'non_local/' not in target.aliases
intercept_command(args, cmd, target_name=target.name, env=env, cwd=cwd, temp_path=temp_path,
remote_temp_path=remote_temp_path, module_coverage=module_coverage)
def command_integration_role(args, target, start_at_task, test_dir, inventory_path, temp_path, remote_temp_path=None):
"""
:type args: IntegrationConfig
:type target: IntegrationTarget
:type start_at_task: str | None
:type test_dir: str
:type inventory_path: str
:type temp_path: str
:type remote_temp_path: str | None
"""
display.info('Running %s integration test role' % target.name)
env_config = None
vars_files = []
variables = dict(
output_dir=test_dir,
)
if isinstance(args, WindowsIntegrationConfig):
hosts = 'windows'
gather_facts = False
variables.update(dict(
win_output_dir=r'C:\ansible_testing',
))
elif isinstance(args, NetworkIntegrationConfig):
hosts = target.network_platform
gather_facts = False
else:
hosts = 'testhost'
gather_facts = True
cloud_environment = get_cloud_environment(args, target)
if cloud_environment:
env_config = cloud_environment.get_environment_config()
with integration_test_environment(args, target, inventory_path) as test_env:
if os.path.exists(test_env.vars_file):
vars_files.append(os.path.relpath(test_env.vars_file, test_env.integration_dir))
play = dict(
hosts=hosts,
gather_facts=gather_facts,
vars_files=vars_files,
vars=variables,
roles=[
target.name,
],
)
if env_config:
if env_config.ansible_vars:
variables.update(env_config.ansible_vars)
play.update(dict(
environment=env_config.env_vars,
module_defaults=env_config.module_defaults,
))
playbook = json.dumps([play], indent=4, sort_keys=True)
with named_temporary_file(args=args, directory=test_env.integration_dir, prefix='%s-' % target.name, suffix='.yml', content=playbook) as playbook_path:
filename = os.path.basename(playbook_path)
display.info('>>> Playbook: %s\n%s' % (filename, playbook.strip()), verbosity=3)
cmd = ['ansible-playbook', filename, '-i', os.path.relpath(test_env.inventory_path, test_env.integration_dir)]
if start_at_task:
cmd += ['--start-at-task', start_at_task]
if args.tags:
cmd += ['--tags', args.tags]
if args.skip_tags:
cmd += ['--skip-tags', args.skip_tags]
if args.diff:
cmd += ['--diff']
if isinstance(args, NetworkIntegrationConfig):
if args.testcase:
cmd += ['-e', 'testcase=%s' % args.testcase]
if args.verbosity:
cmd.append('-' + ('v' * args.verbosity))
env = integration_environment(args, target, test_dir, test_env.inventory_path, test_env.ansible_config, env_config)
cwd = test_env.integration_dir
env.update(dict(
# support use of adhoc ansible commands in collections without specifying the fully qualified collection name
ANSIBLE_PLAYBOOK_DIR=cwd,
))
env['ANSIBLE_ROLES_PATH'] = test_env.targets_dir
module_coverage = 'non_local/' not in target.aliases
intercept_command(args, cmd, target_name=target.name, env=env, cwd=cwd, temp_path=temp_path,
remote_temp_path=remote_temp_path, module_coverage=module_coverage)
def get_changes_filter(args):
"""
:type args: TestConfig
:rtype: list[str]
"""
paths = detect_changes(args)
if not args.metadata.change_description:
if paths:
changes = categorize_changes(args, paths, args.command)
else:
changes = ChangeDescription()
args.metadata.change_description = changes
if paths is None:
return [] # change detection not enabled, do not filter targets
if not paths:
raise NoChangesDetected()
if args.metadata.change_description.targets is None:
raise NoTestsForChanges()
return args.metadata.change_description.targets
def detect_changes(args):
"""
:type args: TestConfig
:rtype: list[str] | None
"""
if args.changed:
paths = get_ci_provider().detect_changes(args)
elif args.changed_from or args.changed_path:
paths = args.changed_path or []
if args.changed_from:
paths += read_text_file(args.changed_from).splitlines()
else:
return None # change detection not enabled
if paths is None:
return None # act as though change detection not enabled, do not filter targets
display.info('Detected changes in %d file(s).' % len(paths))
for path in paths:
display.info(path, verbosity=1)
return paths
def get_integration_filter(args, targets):
"""
:type args: IntegrationConfig
:type targets: tuple[IntegrationTarget]
:rtype: list[str]
"""
if args.docker:
return get_integration_docker_filter(args, targets)
if args.remote:
return get_integration_remote_filter(args, targets)
return get_integration_local_filter(args, targets)
def common_integration_filter(args, targets, exclude):
"""
:type args: IntegrationConfig
:type targets: tuple[IntegrationTarget]
:type exclude: list[str]
"""
override_disabled = set(target for target in args.include if target.startswith('disabled/'))
if not args.allow_disabled:
skip = 'disabled/'
override = [target.name for target in targets if override_disabled & set(target.aliases)]
skipped = [target.name for target in targets if skip in target.aliases and target.name not in override]
if skipped:
exclude.extend(skipped)
display.warning('Excluding tests marked "%s" which require --allow-disabled or prefixing with "disabled/": %s'
% (skip.rstrip('/'), ', '.join(skipped)))
override_unsupported = set(target for target in args.include if target.startswith('unsupported/'))
if not args.allow_unsupported:
skip = 'unsupported/'
override = [target.name for target in targets if override_unsupported & set(target.aliases)]
skipped = [target.name for target in targets if skip in target.aliases and target.name not in override]
if skipped:
exclude.extend(skipped)
display.warning('Excluding tests marked "%s" which require --allow-unsupported or prefixing with "unsupported/": %s'
% (skip.rstrip('/'), ', '.join(skipped)))
override_unstable = set(target for target in args.include if target.startswith('unstable/'))
if args.allow_unstable_changed:
override_unstable |= set(args.metadata.change_description.focused_targets or [])
if not args.allow_unstable:
skip = 'unstable/'
override = [target.name for target in targets if override_unstable & set(target.aliases)]
skipped = [target.name for target in targets if skip in target.aliases and target.name not in override]
if skipped:
exclude.extend(skipped)
display.warning('Excluding tests marked "%s" which require --allow-unstable or prefixing with "unstable/": %s'
% (skip.rstrip('/'), ', '.join(skipped)))
# only skip a Windows test if using --windows and all the --windows versions are defined in the aliases as skip/windows/%s
if isinstance(args, WindowsIntegrationConfig) and args.windows:
all_skipped = []
not_skipped = []
for target in targets:
if "skip/windows/" not in target.aliases:
continue
skip_valid = []
skip_missing = []
for version in args.windows:
if "skip/windows/%s/" % version in target.aliases:
skip_valid.append(version)
else:
skip_missing.append(version)
if skip_missing and skip_valid:
not_skipped.append((target.name, skip_valid, skip_missing))
elif skip_valid:
all_skipped.append(target.name)
if all_skipped:
exclude.extend(all_skipped)
skip_aliases = ["skip/windows/%s/" % w for w in args.windows]
display.warning('Excluding tests marked "%s" which are set to skip with --windows %s: %s'
% ('", "'.join(skip_aliases), ', '.join(args.windows), ', '.join(all_skipped)))
if not_skipped:
for target, skip_valid, skip_missing in not_skipped:
# warn when failing to skip due to lack of support for skipping only some versions
display.warning('Including test "%s" which was marked to skip for --windows %s but not %s.'
% (target, ', '.join(skip_valid), ', '.join(skip_missing)))
def get_integration_local_filter(args, targets):
"""
:type args: IntegrationConfig
:type targets: tuple[IntegrationTarget]
:rtype: list[str]
"""
exclude = []
common_integration_filter(args, targets, exclude)
if not args.allow_root and os.getuid() != 0:
skip = 'needs/root/'
skipped = [target.name for target in targets if skip in target.aliases]
if skipped:
exclude.append(skip)
display.warning('Excluding tests marked "%s" which require --allow-root or running as root: %s'
% (skip.rstrip('/'), ', '.join(skipped)))
override_destructive = set(target for target in args.include if target.startswith('destructive/'))
if not args.allow_destructive:
skip = 'destructive/'
override = [target.name for target in targets if override_destructive & set(target.aliases)]
skipped = [target.name for target in targets if skip in target.aliases and target.name not in override]
if skipped:
exclude.extend(skipped)
display.warning('Excluding tests marked "%s" which require --allow-destructive or prefixing with "destructive/" to run locally: %s'
% (skip.rstrip('/'), ', '.join(skipped)))
exclude_targets_by_python_version(targets, args.python_version, exclude)
return exclude
def get_integration_docker_filter(args, targets):
"""
:type args: IntegrationConfig
:type targets: tuple[IntegrationTarget]
:rtype: list[str]
"""
exclude = []
common_integration_filter(args, targets, exclude)
skip = 'skip/docker/'
skipped = [target.name for target in targets if skip in target.aliases]
if skipped:
exclude.append(skip)
display.warning('Excluding tests marked "%s" which cannot run under docker: %s'
% (skip.rstrip('/'), ', '.join(skipped)))
if not args.docker_privileged:
skip = 'needs/privileged/'
skipped = [target.name for target in targets if skip in target.aliases]
if skipped:
exclude.append(skip)
display.warning('Excluding tests marked "%s" which require --docker-privileged to run under docker: %s'
% (skip.rstrip('/'), ', '.join(skipped)))
python_version = get_python_version(args, get_docker_completion(), args.docker_raw)
exclude_targets_by_python_version(targets, python_version, exclude)
return exclude
def get_integration_remote_filter(args, targets):
"""
:type args: IntegrationConfig
:type targets: tuple[IntegrationTarget]
:rtype: list[str]
"""
remote = args.parsed_remote
exclude = []
common_integration_filter(args, targets, exclude)
skips = {
'skip/%s' % remote.platform: remote.platform,
'skip/%s/%s' % (remote.platform, remote.version): '%s %s' % (remote.platform, remote.version),
'skip/%s%s' % (remote.platform, remote.version): '%s %s' % (remote.platform, remote.version), # legacy syntax, use above format
}
if remote.arch:
skips.update({
'skip/%s/%s' % (remote.arch, remote.platform): '%s on %s' % (remote.platform, remote.arch),
'skip/%s/%s/%s' % (remote.arch, remote.platform, remote.version): '%s %s on %s' % (remote.platform, remote.version, remote.arch),
})
for skip, description in skips.items():
skipped = [target.name for target in targets if skip in target.skips]
if skipped:
exclude.append(skip + '/')
display.warning('Excluding tests marked "%s" which are not supported on %s: %s' % (skip, description, ', '.join(skipped)))
python_version = get_python_version(args, get_remote_completion(), args.remote)
exclude_targets_by_python_version(targets, python_version, exclude)
return exclude
def exclude_targets_by_python_version(targets, python_version, exclude):
"""
:type targets: tuple[IntegrationTarget]
:type python_version: str
:type exclude: list[str]
"""
if not python_version:
display.warning('Python version unknown. Unable to skip tests based on Python version.')
return
python_major_version = python_version.split('.')[0]
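# check both the exact version alias (e.g. skip/python3.6/) and the major version alias (e.g. skip/python3/)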
skip = 'skip/python%s/' % python_version
skipped = [target.name for target in targets if skip in target.aliases]
if skipped:
exclude.append(skip)
display.warning('Excluding tests marked "%s" which are not supported on python %s: %s'
% (skip.rstrip('/'), python_version, ', '.join(skipped)))
skip = 'skip/python%s/' % python_major_version
skipped = [target.name for target in targets if skip in target.aliases]
if skipped:
exclude.append(skip)
display.warning('Excluding tests marked "%s" which are not supported on python %s: %s'
% (skip.rstrip('/'), python_version, ', '.join(skipped)))
def get_python_version(args, configs, name):
"""
:type args: EnvironmentConfig
:type configs: dict[str, dict[str, str]]
:type name: str
"""
config = configs.get(name, {})
config_python = config.get('python')
if not config or not config_python:
if args.python:
return args.python
display.warning('No Python version specified. '
'Use completion config or the --python option to specify one.', unique=True)
return '' # failure to provide a version may result in failures or reduced functionality later
supported_python_versions = config_python.split(',')
default_python_version = supported_python_versions[0]
if args.python and args.python not in supported_python_versions:
raise ApplicationError('Python %s is not supported by %s. Supported Python version(s) are: %s' % (
args.python, name, ', '.join(sorted(supported_python_versions))))
python_version = args.python or default_python_version
return python_version
def get_python_interpreter(args, configs, name):
"""
:type args: EnvironmentConfig
:type configs: dict[str, dict[str, str]]
:type name: str
"""
if args.python_interpreter:
return args.python_interpreter
config = configs.get(name, {})
if not config:
if args.python:
guess = 'python%s' % args.python
else:
guess = 'python'
display.warning('Using "%s" as the Python interpreter. '
'Use completion config or the --python-interpreter option to specify the path.' % guess, unique=True)
return guess
python_version = get_python_version(args, configs, name)
python_dir = config.get('python_dir', '/usr/bin')
python_interpreter = os.path.join(python_dir, 'python%s' % python_version)
python_interpreter = config.get('python%s' % python_version, python_interpreter)
return python_interpreter
class EnvironmentDescription:
"""Description of current running environment."""
def __init__(self, args):
"""Initialize snapshot of environment configuration.
:type args: IntegrationConfig
"""
self.args = args
if self.args.explain:
self.data = {}
return
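# snapshot the available python/pip executables, their reported versions and pip shebangs,
# plus a hash of ~/.ssh/known_hosts, so changes made by a test target can be detected later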
warnings = []
versions = ['']
versions += SUPPORTED_PYTHON_VERSIONS
versions += list(set(v.split('.')[0] for v in SUPPORTED_PYTHON_VERSIONS))
version_check = os.path.join(ANSIBLE_TEST_DATA_ROOT, 'versions.py')
python_paths = dict((v, find_executable('python%s' % v, required=False)) for v in sorted(versions))
pip_paths = dict((v, find_executable('pip%s' % v, required=False)) for v in sorted(versions))
program_versions = dict((v, self.get_version([python_paths[v], version_check], warnings)) for v in sorted(python_paths) if python_paths[v])
pip_interpreters = dict((v, self.get_shebang(pip_paths[v])) for v in sorted(pip_paths) if pip_paths[v])
known_hosts_hash = self.get_hash(os.path.expanduser('~/.ssh/known_hosts'))
for version in sorted(versions):
self.check_python_pip_association(version, python_paths, pip_paths, pip_interpreters, warnings)
for warning in warnings:
display.warning(warning, unique=True)
self.data = dict(
python_paths=python_paths,
pip_paths=pip_paths,
program_versions=program_versions,
pip_interpreters=pip_interpreters,
known_hosts_hash=known_hosts_hash,
warnings=warnings,
)
@staticmethod
def check_python_pip_association(version, python_paths, pip_paths, pip_interpreters, warnings):
"""
:type version: str
:type python_paths: dict[str, str]
:type pip_paths: dict[str, str]
:type pip_interpreters: dict[str, str]
:type warnings: list[str]
"""
python_label = 'Python%s' % (' %s' % version if version else '')
pip_path = pip_paths.get(version)
python_path = python_paths.get(version)
if not python_path and not pip_path:
# neither python nor pip is present for this version
return
if not python_path:
warnings.append('A %s interpreter was not found, yet a matching pip was found at "%s".' % (python_label, pip_path))
return
if not pip_path:
warnings.append('A %s interpreter was found at "%s", yet a matching pip was not found.' % (python_label, python_path))
return
pip_shebang = pip_interpreters.get(version)
match = re.search(r'#!\s*(?P<command>[^\s]+)', pip_shebang)
if not match:
warnings.append('A %s pip was found at "%s", but it does not have a valid shebang: %s' % (python_label, pip_path, pip_shebang))
return
pip_interpreter = os.path.realpath(match.group('command'))
python_interpreter = os.path.realpath(python_path)
if pip_interpreter == python_interpreter:
return
try:
identical = filecmp.cmp(pip_interpreter, python_interpreter)
except OSError:
identical = False
if identical:
return
warnings.append('A %s pip was found at "%s", but it uses interpreter "%s" instead of "%s".' % (
python_label, pip_path, pip_interpreter, python_interpreter))
def __str__(self):
"""
:rtype: str
"""
return json.dumps(self.data, sort_keys=True, indent=4)
def validate(self, target_name, throw):
"""
:type target_name: str
:type throw: bool
:rtype: bool
"""
current = EnvironmentDescription(self.args)
return self.check(self, current, target_name, throw)
@staticmethod
def check(original, current, target_name, throw):
"""
:type original: EnvironmentDescription
:type current: EnvironmentDescription
:type target_name: str
:type throw: bool
:rtype: bool
"""
original_json = str(original)
current_json = str(current)
if original_json == current_json:
return True
unified_diff = '\n'.join(difflib.unified_diff(
a=original_json.splitlines(),
b=current_json.splitlines(),
fromfile='original.json',
tofile='current.json',
lineterm='',
))
message = ('Test target "%s" has changed the test environment!\n'
'If these changes are necessary, they must be reverted before the test finishes.\n'
'>>> Original Environment\n'
'%s\n'
'>>> Current Environment\n'
'%s\n'
'>>> Environment Diff\n'
'%s'
% (target_name, original_json, current_json, unified_diff))
if throw:
raise ApplicationError(message)
display.error(message)
return False
@staticmethod
def get_version(command, warnings):
"""
:type command: list[str]
:type warnings: list[str]
:rtype: list[str] | None
"""
try:
stdout, stderr = raw_command(command, capture=True, cmd_verbosity=2)
except SubprocessError as ex:
warnings.append(u'%s' % ex)
return None # all failures are equal, we don't care why it failed, only that it did
return [line.strip() for line in ((stdout or '').strip() + (stderr or '').strip()).splitlines()]
@staticmethod
def get_shebang(path):
"""
:type path: str
:rtype: str
"""
with open_text_file(path) as script_fd:
return script_fd.readline().strip()
@staticmethod
def get_hash(path):
"""
:type path: str
:rtype: str | None
"""
if not os.path.exists(path):
return None
file_hash = hashlib.sha256()
file_hash.update(read_binary_file(path))
return file_hash.hexdigest()
class NoChangesDetected(ApplicationWarning):
"""Exception when change detection was performed, but no changes were found."""
def __init__(self):
super(NoChangesDetected, self).__init__('No changes detected.')
class NoTestsForChanges(ApplicationWarning):
"""Exception when changes detected, but no tests trigger as a result."""
def __init__(self):
super(NoTestsForChanges, self).__init__('No tests found for detected changes.')
class Delegate(Exception):
"""Trigger command delegation."""
def __init__(self, exclude=None, require=None, integration_targets=None):
"""
:type exclude: list[str] | None
:type require: list[str] | None
:type integration_targets: tuple[IntegrationTarget] | None
"""
super(Delegate, self).__init__()
self.exclude = exclude or []
self.require = require or []
self.integration_targets = integration_targets or tuple()
class AllTargetsSkipped(ApplicationWarning):
"""All targets skipped."""
def __init__(self):
super(AllTargetsSkipped, self).__init__('All targets skipped.')
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,900 |
ansible-inventory fails, ansible_base distribution not found
|
##### SUMMARY
Trying to run `ansible-inventory` with an inventory plugin that doesn't appear to do anything wrong results in a traceback saying that it can't find the ansible_base distribution.
(I am aware that the python package name for Ansible itself changed `ansible` -> `ansible-base` -> `ansible-core`)
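For reference, the failure mode can be reproduced with `pkg_resources` alone; a minimal sketch (assuming only that setuptools/pkg_resources is installed and that no distribution named `ansible_base` is present):
```python
# pkg_resources raises DistributionNotFound when a required distribution's
# metadata is missing from site-packages; this is the same error surfaced
# through the linode_api4 import chain in the traceback below
import pkg_resources

try:
    pkg_resources.require("ansible_base")
except pkg_resources.DistributionNotFound as exc:
    print(exc)  # e.g. "The 'ansible_base' distribution was not found ..."
```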
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-inventory
##### ANSIBLE VERSION
```paste below
(my_linode) [alancoding@alan-red-hat test]$ ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying
the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable
at any point.
ansible 2.11.0.dev0
config file = None
configured module search path = ['/home/alancoding/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible
ansible collection location = /home/alancoding/.ansible/collections:/usr/share/ansible/collections
executable location = /home/alancoding/test/my_linode/bin/ansible
python version = 3.9.0 (default, Oct 6 2020, 00:00:00) [GCC 10.2.1 20200826 (Red Hat 10.2.1-3)]
jinja version = 2.11.2
libyaml = False
```
##### CONFIGURATION
Defaults
##### OS / ENVIRONMENT
Fedora 33
##### STEPS TO REPRODUCE
I have also replicated this in the https://quay.io/repository/ansible/ansible-runner image; the following is my reproduction as confirmed on my local machine.
The folder `~/repos/test-playbooks` is cloned from https://github.com/ansible/test-playbooks/
The requirements file is designed to make sure that I pick up all recent bug fixes in the collection; contents of `req.yml`:
```yaml
collections:
- name: https://github.com/ansible-collections/community.general.git
type: git
version: main
```
Reproduction steps:
```
mkdir test
cd test
python3 -m venv my_linode
source my_linode/bin/activate
pip3 install linode_api4
pip3 install --no-cache-dir https://github.com/ansible/ansible/archive/devel.tar.gz
ansible-galaxy collection install -r req.yml
ansible-inventory -i ~/repos/test-playbooks/inventories/linode_fqcn.linode.yml --list -vvv
```
##### EXPECTED RESULTS
I expect to get an error from the linode library along the lines of "You are not authenticated because you did not provide a token"
##### ACTUAL RESULTS
```
(my_linode) [alancoding@alan-red-hat test]$ ansible-inventory -i ~/repos/test-playbooks/inventories/linode_fqcn.linode.yml --list -vvv
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying
the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable
at any point.
ansible-inventory 2.11.0.dev0
config file = None
configured module search path = ['/home/alancoding/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible
ansible collection location = /home/alancoding/.ansible/collections:/usr/share/ansible/collections
executable location = /home/alancoding/test/my_linode/bin/ansible-inventory
python version = 3.9.0 (default, Oct 6 2020, 00:00:00) [GCC 10.2.1 20200826 (Red Hat 10.2.1-3)]
jinja version = 2.11.2
libyaml = False
No config file found; using defaults
host_list declined parsing /home/alancoding/repos/test-playbooks/inventories/linode_fqcn.linode.yml as it did not pass its verify_file() method
script declined parsing /home/alancoding/repos/test-playbooks/inventories/linode_fqcn.linode.yml as it did not pass its verify_file() method
toml declined parsing /home/alancoding/repos/test-playbooks/inventories/linode_fqcn.linode.yml as it did not pass its verify_file() method
[WARNING]: * Failed to parse /home/alancoding/repos/test-playbooks/inventories/linode_fqcn.linode.yml with auto plugin: The
'ansible_base' distribution was not found and is required by the application
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible/inventory/manager.py", line 290, in parse_source
plugin.parse(self._inventory, self._loader, source, cache=cache)
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible/plugins/inventory/auto.py", line 50, in parse
plugin = inventory_loader.get(plugin_name)
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible/plugins/loader.py", line 779, in get
return self.get_with_context(name, *args, **kwargs).object
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible/plugins/loader.py", line 799, in get_with_context
self._module_cache[path] = self._load_module_source(name, path)
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible/plugins/loader.py", line 763, in _load_module_source
spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 790, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "/home/alancoding/.ansible/collections/ansible_collections/community/general/plugins/inventory/linode.py", line 64, in <module>
from linode_api4 import LinodeClient
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/linode_api4/__init__.py", line 3, in <module>
from linode_api4.linode_client import LinodeClient
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/linode_api4/linode_client.py", line 6, in <module>
import pkg_resources
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/pkg_resources/__init__.py", line 3257, in <module>
def _initialize_master_working_set():
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/pkg_resources/__init__.py", line 3240, in _call_aside
f(*args, **kwargs)
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/pkg_resources/__init__.py", line 3269, in _initialize_master_working_set
working_set = WorkingSet._build_master()
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/pkg_resources/__init__.py", line 582, in _build_master
ws.require(__requires__)
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/pkg_resources/__init__.py", line 899, in require
needed = self.resolve(parse_requirements(requirements))
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/pkg_resources/__init__.py", line 785, in resolve
raise DistributionNotFound(req, requirers)
[WARNING]: * Failed to parse /home/alancoding/repos/test-playbooks/inventories/linode_fqcn.linode.yml with yaml plugin: Plugin
configuration YAML file, not YAML inventory
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible/inventory/manager.py", line 290, in parse_source
plugin.parse(self._inventory, self._loader, source, cache=cache)
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible/plugins/inventory/yaml.py", line 112, in parse
raise AnsibleParserError('Plugin configuration YAML file, not YAML inventory')
[WARNING]: * Failed to parse /home/alancoding/repos/test-playbooks/inventories/linode_fqcn.linode.yml with ini plugin: Invalid
host pattern 'plugin:' supplied, ending in ':' is not allowed, this character is reserved to provide a port.
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible/inventory/manager.py", line 290, in parse_source
plugin.parse(self._inventory, self._loader, source, cache=cache)
File "/home/alancoding/test/my_linode/lib64/python3.9/site-packages/ansible/plugins/inventory/ini.py", line 136, in parse
raise AnsibleParserError(e)
[WARNING]: Unable to parse /home/alancoding/repos/test-playbooks/inventories/linode_fqcn.linode.yml as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
{
"_meta": {
"hostvars": {}
},
"all": {
"children": [
"ungrouped"
]
}
}
```
I could not find any existing issues along the lines of "ansible_base distribution was not found", which seems pretty distinctive.
|
https://github.com/ansible/ansible/issues/72900
|
https://github.com/ansible/ansible/pull/72906
|
57c2cc7c7748fb2a315f7e436c84c1fc0f1a03c8
|
6bc1e9f5dd98ec4e700015ee91c08f4ce82831fe
| 2020-12-08T15:05:04Z |
python
| 2020-12-08T18:22:55Z |
test/lib/ansible_test/_internal/provider/source/unversioned.py
|
"""Fallback source provider when no other provider matches the content root."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
from ... import types as t
from ...constants import (
TIMEOUT_PATH,
)
from ...encoding import (
to_bytes,
)
from . import (
SourceProvider,
)
class UnversionedSource(SourceProvider):
"""Fallback source provider when no other provider matches the content root."""
sequence = 0 # disable automatic detection
@staticmethod
def is_content_root(path): # type: (str) -> bool
"""Return True if the given path is a content root for this provider."""
return False
def get_paths(self, path): # type: (str) -> t.List[str]
"""Return the list of available content paths under the given path."""
paths = []
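# the kill lists below prune generated artifacts from the walk; notably, stale
# setuptools metadata such as ansible_base.egg-info would otherwise be picked up as content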
kill_any_dir = (
'.idea',
'.pytest_cache',
'__pycache__',
'ansible.egg-info',
'ansible_base.egg-info',
)
kill_sub_dir = {
'test': (
'results',
'cache',
'output',
),
'tests': (
'output',
),
'docs/docsite': (
'_build',
),
}
kill_sub_file = {
'': (
TIMEOUT_PATH,
),
}
kill_extensions = (
'.pyc',
'.pyo',
'.retry',
)
for root, dir_names, file_names in os.walk(path):
rel_root = os.path.relpath(root, path)
if rel_root == '.':
rel_root = ''
for kill in kill_any_dir + kill_sub_dir.get(rel_root, ()):
if kill in dir_names:
dir_names.remove(kill)
kill_files = kill_sub_file.get(rel_root, ())
paths.extend([os.path.join(rel_root, file_name) for file_name in file_names
if not os.path.splitext(file_name)[1] in kill_extensions and file_name not in kill_files])
# include directory symlinks since they will not be traversed and would otherwise go undetected
paths.extend([os.path.join(rel_root, dir_name) + os.path.sep for dir_name in dir_names if os.path.islink(to_bytes(dir_name))])
return paths
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 70,147 |
`ansible-galaxy collection list` vs site-packages
|
##### SUMMARY
`ansible-galaxy collection list` doesn't show the collections installed by the new `ansible` package
As a workaround, you need to do:
```
COLLECTION_INSTALL=$(python -c 'import ansible, os.path ; print("%s/../ansible_collections" % os.path.dirname(ansible.__file__))')
ansible-galaxy collection list -p "$COLLECTION_INSTALL"
```
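The same path can also be computed with a small Python helper; a minimal sketch (a hypothetical snippet, not part of `ansible-galaxy`):
```python
# locate the ansible_collections directory that the "ansible" pip package
# installs next to the "ansible" module inside site-packages
import os.path
import ansible

collections_path = os.path.normpath(
    os.path.join(os.path.dirname(ansible.__file__), '..', 'ansible_collections')
)
print(collections_path)  # pass this to: ansible-galaxy collection list -p <path>
```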
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-galaxy
##### ANSIBLE VERSION
```paste below
2.10 beta1
```
##### CONFIGURATION
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
1. Install Ansible 2.10.0a1
https://pypi.python.org/packages/source/a/ansible/ansible-2.10.0a1.tar.gz
2. `ansible-galaxy collection list`
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/70147
|
https://github.com/ansible/ansible/pull/70173
|
0a60e5e341f9a781c27462e047ebe0f73129f4a1
|
e7dee73774b0b436551e4993ba917eec1e03af2d
| 2020-06-18T18:36:33Z |
python
| 2020-12-10T23:59:33Z |
changelogs/fragments/collection-list-site-packages.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 70,147 |
`ansible-galaxy collection list` vs site-packages
|
##### SUMMARY
`ansible-galaxy collection list` doesn't show the collections installed by the new `ansible` package
As a workaround, you need to do:
```
COLLECTION_INSTALL=$(python -c 'import ansible, os.path ; print("%s/../ansible_collections" % os.path.dirname(ansible.__file__))')
ansible-galaxy collection list -p "$COLLECTION_INSTALL"
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-galaxy
##### ANSIBLE VERSION
```paste below
2.10 beta1
```
##### CONFIGURATION
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
1. Install Ansible 2.10.0a1
https://pypi.python.org/packages/source/a/ansible/ansible-2.10.0a1.tar.gz
2. `ansible-galaxy collection list`
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/70147
|
https://github.com/ansible/ansible/pull/70173
|
0a60e5e341f9a781c27462e047ebe0f73129f4a1
|
e7dee73774b0b436551e4993ba917eec1e03af2d
| 2020-06-18T18:36:33Z |
python
| 2020-12-10T23:59:33Z |
lib/ansible/cli/galaxy.py
|
# Copyright: (c) 2013, James Cammarata <[email protected]>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os.path
import re
import shutil
import sys
import textwrap
import time
import yaml
from yaml.error import YAMLError
import ansible.constants as C
from ansible import context
from ansible.cli import CLI
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.galaxy import Galaxy, get_collections_galaxy_meta_info
from ansible.galaxy.api import GalaxyAPI
from ansible.galaxy.collection import (
build_collection,
CollectionRequirement,
download_collections,
find_existing_collections,
install_collections,
publish_collection,
validate_collection_name,
validate_collection_path,
verify_collections
)
from ansible.galaxy.role import GalaxyRole
from ansible.galaxy.token import BasicAuthToken, GalaxyToken, KeycloakToken, NoTokenSentinel
from ansible.module_utils.ansible_release import __version__ as ansible_version
from ansible.module_utils.common.collections import is_iterable
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils import six
from ansible.parsing.dataloader import DataLoader
from ansible.parsing.yaml.loader import AnsibleLoader
from ansible.playbook.role.requirement import RoleRequirement
from ansible.template import Templar
from ansible.utils.display import Display
from ansible.utils.plugin_docs import get_versioned_doclink
display = Display()
urlparse = six.moves.urllib.parse.urlparse
def _display_header(path, h1, h2, w1=10, w2=7):
display.display('\n# {0}\n{1:{cwidth}} {2:{vwidth}}\n{3} {4}\n'.format(
path,
h1,
h2,
'-' * max([len(h1), w1]), # Make sure that the number of dashes is at least the width of the header
'-' * max([len(h2), w2]),
cwidth=w1,
vwidth=w2,
))
def _display_role(gr):
install_info = gr.install_info
version = None
if install_info:
version = install_info.get("version", None)
if not version:
version = "(unknown version)"
display.display("- %s, %s" % (gr.name, version))
def _display_collection(collection, cwidth=10, vwidth=7, min_cwidth=10, min_vwidth=7):
display.display('{fqcn:{cwidth}} {version:{vwidth}}'.format(
fqcn=to_text(collection),
version=collection.latest_version,
cwidth=max(cwidth, min_cwidth), # Make sure the width isn't smaller than the header
vwidth=max(vwidth, min_vwidth)
))
def _get_collection_widths(collections):
if is_iterable(collections):
fqcn_set = set(to_text(c) for c in collections)
version_set = set(to_text(c.latest_version) for c in collections)
else:
fqcn_set = set([to_text(collections)])
version_set = set([collections.latest_version])
fqcn_length = len(max(fqcn_set, key=len))
version_length = len(max(version_set, key=len))
return fqcn_length, version_length
class GalaxyCLI(CLI):
'''command to manage Ansible roles in shared repositories, the default of which is Ansible Galaxy *https://galaxy.ansible.com*.'''
SKIP_INFO_KEYS = ("name", "description", "readme_html", "related", "summary_fields", "average_aw_composite", "average_aw_score", "url")
def __init__(self, args):
self._raw_args = args
self._implicit_role = False
if len(args) > 1:
# Inject role into sys.argv[1] as a backwards compatibility step
if args[1] not in ['-h', '--help', '--version'] and 'role' not in args and 'collection' not in args:
# TODO: Should we add a warning here and eventually deprecate the implicit role subcommand choice
# Remove this in Ansible 2.13 when we also remove -v as an option on the root parser for ansible-galaxy.
idx = 2 if args[1].startswith('-v') else 1
args.insert(idx, 'role')
self._implicit_role = True
# since argparse doesn't allow hidden subparsers, handle dead login arg from raw args after "role" normalization
if args[1:3] == ['role', 'login']:
display.error(
"The login command was removed in late 2020. An API key is now required to publish roles or collections "
"to Galaxy. The key can be found at https://galaxy.ansible.com/me/preferences, and passed to the "
"ansible-galaxy CLI via a file at {0} or (insecurely) via the `--token` "
"command-line argument.".format(to_text(C.GALAXY_TOKEN_PATH)))
sys.exit(1)
self.api_servers = []
self.galaxy = None
self._api = None
super(GalaxyCLI, self).__init__(args)
def init_parser(self):
''' create an options parser for bin/ansible '''
super(GalaxyCLI, self).init_parser(
desc="Perform various Role and Collection related operations.",
)
# Common arguments that apply to more than 1 action
common = opt_help.argparse.ArgumentParser(add_help=False)
common.add_argument('-s', '--server', dest='api_server', help='The Galaxy API server URL')
common.add_argument('--token', '--api-key', dest='api_key',
help='The Ansible Galaxy API key which can be found at '
'https://galaxy.ansible.com/me/preferences.')
common.add_argument('-c', '--ignore-certs', action='store_true', dest='ignore_certs',
default=C.GALAXY_IGNORE_CERTS, help='Ignore SSL certificate validation errors.')
opt_help.add_verbosity_options(common)
force = opt_help.argparse.ArgumentParser(add_help=False)
force.add_argument('-f', '--force', dest='force', action='store_true', default=False,
help='Force overwriting an existing role or collection')
github = opt_help.argparse.ArgumentParser(add_help=False)
github.add_argument('github_user', help='GitHub username')
github.add_argument('github_repo', help='GitHub repository')
offline = opt_help.argparse.ArgumentParser(add_help=False)
offline.add_argument('--offline', dest='offline', default=False, action='store_true',
help="Don't query the galaxy API when creating roles")
default_roles_path = C.config.get_configuration_definition('DEFAULT_ROLES_PATH').get('default', '')
roles_path = opt_help.argparse.ArgumentParser(add_help=False)
roles_path.add_argument('-p', '--roles-path', dest='roles_path', type=opt_help.unfrack_path(pathsep=True),
default=C.DEFAULT_ROLES_PATH, action=opt_help.PrependListAction,
help='The path to the directory containing your roles. The default is the first '
'writable one configured via DEFAULT_ROLES_PATH: %s ' % default_roles_path)
collections_path = opt_help.argparse.ArgumentParser(add_help=False)
collections_path.add_argument('-p', '--collection-path', dest='collections_path', type=opt_help.unfrack_path(pathsep=True),
default=C.COLLECTIONS_PATHS, action=opt_help.PrependListAction,
help="One or more directories to search for collections in addition "
"to the default COLLECTIONS_PATHS. Separate multiple paths "
"with '{0}'.".format(os.path.pathsep))
cache_options = opt_help.argparse.ArgumentParser(add_help=False)
cache_options.add_argument('--clear-response-cache', dest='clear_response_cache', action='store_true',
default=False, help='Clear the existing server response cache.')
cache_options.add_argument('--no-cache', dest='no_cache', action='store_true', default=False,
help='Do not use the server response cache.')
# Add sub parser for the Galaxy role type (role or collection)
type_parser = self.parser.add_subparsers(metavar='TYPE', dest='type')
type_parser.required = True
# Add sub parser for the Galaxy collection actions
collection = type_parser.add_parser('collection', help='Manage an Ansible Galaxy collection.')
collection_parser = collection.add_subparsers(metavar='COLLECTION_ACTION', dest='action')
collection_parser.required = True
self.add_download_options(collection_parser, parents=[common, cache_options])
self.add_init_options(collection_parser, parents=[common, force])
self.add_build_options(collection_parser, parents=[common, force])
self.add_publish_options(collection_parser, parents=[common])
self.add_install_options(collection_parser, parents=[common, force, cache_options])
self.add_list_options(collection_parser, parents=[common, collections_path])
self.add_verify_options(collection_parser, parents=[common, collections_path])
# Add sub parser for the Galaxy role actions
role = type_parser.add_parser('role', help='Manage an Ansible Galaxy role.')
role_parser = role.add_subparsers(metavar='ROLE_ACTION', dest='action')
role_parser.required = True
self.add_init_options(role_parser, parents=[common, force, offline])
self.add_remove_options(role_parser, parents=[common, roles_path])
self.add_delete_options(role_parser, parents=[common, github])
self.add_list_options(role_parser, parents=[common, roles_path])
self.add_search_options(role_parser, parents=[common])
self.add_import_options(role_parser, parents=[common, github])
self.add_setup_options(role_parser, parents=[common, roles_path])
self.add_info_options(role_parser, parents=[common, roles_path, offline])
self.add_install_options(role_parser, parents=[common, force, roles_path])
def add_download_options(self, parser, parents=None):
download_parser = parser.add_parser('download', parents=parents,
help='Download collections and their dependencies as a tarball for an '
'offline install.')
download_parser.set_defaults(func=self.execute_download)
download_parser.add_argument('args', help='Collection(s)', metavar='collection', nargs='*')
download_parser.add_argument('-n', '--no-deps', dest='no_deps', action='store_true', default=False,
help="Don't download collection(s) listed as dependencies.")
download_parser.add_argument('-p', '--download-path', dest='download_path',
default='./collections',
help='The directory to download the collections to.')
download_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be downloaded.')
download_parser.add_argument('--pre', dest='allow_pre_release', action='store_true',
help='Include pre-release versions. Semantic versioning pre-releases are ignored by default')
def add_init_options(self, parser, parents=None):
galaxy_type = 'collection' if parser.metavar == 'COLLECTION_ACTION' else 'role'
init_parser = parser.add_parser('init', parents=parents,
help='Initialize new {0} with the base structure of a '
'{0}.'.format(galaxy_type))
init_parser.set_defaults(func=self.execute_init)
init_parser.add_argument('--init-path', dest='init_path', default='./',
help='The path in which the skeleton {0} will be created. The default is the '
'current working directory.'.format(galaxy_type))
init_parser.add_argument('--{0}-skeleton'.format(galaxy_type), dest='{0}_skeleton'.format(galaxy_type),
default=C.GALAXY_ROLE_SKELETON,
help='The path to a {0} skeleton that the new {0} should be based '
'upon.'.format(galaxy_type))
obj_name_kwargs = {}
if galaxy_type == 'collection':
obj_name_kwargs['type'] = validate_collection_name
init_parser.add_argument('{0}_name'.format(galaxy_type), help='{0} name'.format(galaxy_type.capitalize()),
**obj_name_kwargs)
if galaxy_type == 'role':
init_parser.add_argument('--type', dest='role_type', action='store', default='default',
help="Initialize using an alternate role type. Valid types include: 'container', "
"'apb' and 'network'.")
def add_remove_options(self, parser, parents=None):
remove_parser = parser.add_parser('remove', parents=parents, help='Delete roles from roles_path.')
remove_parser.set_defaults(func=self.execute_remove)
remove_parser.add_argument('args', help='Role(s)', metavar='role', nargs='+')
def add_delete_options(self, parser, parents=None):
delete_parser = parser.add_parser('delete', parents=parents,
help='Removes the role from Galaxy. It does not remove or alter the actual '
'GitHub repository.')
delete_parser.set_defaults(func=self.execute_delete)
def add_list_options(self, parser, parents=None):
galaxy_type = 'role'
if parser.metavar == 'COLLECTION_ACTION':
galaxy_type = 'collection'
list_parser = parser.add_parser('list', parents=parents,
help='Show the name and version of each {0} installed in the {0}s_path.'.format(galaxy_type))
list_parser.set_defaults(func=self.execute_list)
list_parser.add_argument(galaxy_type, help=galaxy_type.capitalize(), nargs='?', metavar=galaxy_type)
def add_search_options(self, parser, parents=None):
search_parser = parser.add_parser('search', parents=parents,
help='Search the Galaxy database by tags, platforms, author and multiple '
'keywords.')
search_parser.set_defaults(func=self.execute_search)
search_parser.add_argument('--platforms', dest='platforms', help='list of OS platforms to filter by')
search_parser.add_argument('--galaxy-tags', dest='galaxy_tags', help='list of galaxy tags to filter by')
search_parser.add_argument('--author', dest='author', help='GitHub username')
search_parser.add_argument('args', help='Search terms', metavar='searchterm', nargs='*')
def add_import_options(self, parser, parents=None):
import_parser = parser.add_parser('import', parents=parents, help='Import a role into a galaxy server')
import_parser.set_defaults(func=self.execute_import)
import_parser.add_argument('--no-wait', dest='wait', action='store_false', default=True,
help="Don't wait for import results.")
import_parser.add_argument('--branch', dest='reference',
help='The name of a branch to import. Defaults to the repository\'s default branch '
'(usually master)')
import_parser.add_argument('--role-name', dest='role_name',
help='The name the role should have, if different than the repo name')
import_parser.add_argument('--status', dest='check_status', action='store_true', default=False,
help='Check the status of the most recent import request for given github_'
'user/github_repo.')
def add_setup_options(self, parser, parents=None):
setup_parser = parser.add_parser('setup', parents=parents,
help='Manage the integration between Galaxy and the given source.')
setup_parser.set_defaults(func=self.execute_setup)
setup_parser.add_argument('--remove', dest='remove_id', default=None,
help='Remove the integration matching the provided ID value. Use --list to see '
'ID values.')
setup_parser.add_argument('--list', dest="setup_list", action='store_true', default=False,
help='List all of your integrations.')
setup_parser.add_argument('source', help='Source')
setup_parser.add_argument('github_user', help='GitHub username')
setup_parser.add_argument('github_repo', help='GitHub repository')
setup_parser.add_argument('secret', help='Secret')
def add_info_options(self, parser, parents=None):
info_parser = parser.add_parser('info', parents=parents, help='View more details about a specific role.')
info_parser.set_defaults(func=self.execute_info)
info_parser.add_argument('args', nargs='+', help='role', metavar='role_name[,version]')
def add_verify_options(self, parser, parents=None):
galaxy_type = 'collection'
verify_parser = parser.add_parser('verify', parents=parents, help='Compare checksums with the collection(s) '
'found on the server and the installed copy. This does not verify dependencies.')
verify_parser.set_defaults(func=self.execute_verify)
verify_parser.add_argument('args', metavar='{0}_name'.format(galaxy_type), nargs='*', help='The collection(s) name or '
'path/url to a tar.gz collection artifact. This is mutually exclusive with --requirements-file.')
verify_parser.add_argument('-i', '--ignore-errors', dest='ignore_errors', action='store_true', default=False,
help='Ignore errors during verification and continue with the next specified collection.')
verify_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be verified.')
def add_install_options(self, parser, parents=None):
galaxy_type = 'collection' if parser.metavar == 'COLLECTION_ACTION' else 'role'
args_kwargs = {}
if galaxy_type == 'collection':
args_kwargs['help'] = 'The collection(s) name or path/url to a tar.gz collection artifact. This is ' \
'mutually exclusive with --requirements-file.'
ignore_errors_help = 'Ignore errors during installation and continue with the next specified ' \
'collection. This will not ignore dependency conflict errors.'
else:
args_kwargs['help'] = 'Role name, URL or tar file'
ignore_errors_help = 'Ignore errors and continue with the next specified role.'
install_parser = parser.add_parser('install', parents=parents,
help='Install {0}(s) from file(s), URL(s) or Ansible '
'Galaxy'.format(galaxy_type))
install_parser.set_defaults(func=self.execute_install)
install_parser.add_argument('args', metavar='{0}_name'.format(galaxy_type), nargs='*', **args_kwargs)
install_parser.add_argument('-i', '--ignore-errors', dest='ignore_errors', action='store_true', default=False,
help=ignore_errors_help)
install_exclusive = install_parser.add_mutually_exclusive_group()
install_exclusive.add_argument('-n', '--no-deps', dest='no_deps', action='store_true', default=False,
help="Don't download {0}s listed as dependencies.".format(galaxy_type))
install_exclusive.add_argument('--force-with-deps', dest='force_with_deps', action='store_true', default=False,
help="Force overwriting an existing {0} and its "
"dependencies.".format(galaxy_type))
if galaxy_type == 'collection':
install_parser.add_argument('-p', '--collections-path', dest='collections_path',
default=self._get_default_collection_path(),
help='The path to the directory containing your collections.')
install_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be installed.')
install_parser.add_argument('--pre', dest='allow_pre_release', action='store_true',
help='Include pre-release versions. Semantic versioning pre-releases are ignored by default')
else:
install_parser.add_argument('-r', '--role-file', dest='requirements',
help='A file containing a list of roles to be installed.')
install_parser.add_argument('-g', '--keep-scm-meta', dest='keep_scm_meta', action='store_true',
default=False,
help='Use tar instead of the scm archive option when packaging the role.')
def add_build_options(self, parser, parents=None):
build_parser = parser.add_parser('build', parents=parents,
help='Build an Ansible collection artifact that can be published to Ansible '
'Galaxy.')
build_parser.set_defaults(func=self.execute_build)
build_parser.add_argument('args', metavar='collection', nargs='*', default=('.',),
help='Path to the collection(s) directory to build. This should be the directory '
'that contains the galaxy.yml file. The default is the current working '
'directory.')
build_parser.add_argument('--output-path', dest='output_path', default='./',
help='The path in which the collection is built to. The default is the current '
'working directory.')
def add_publish_options(self, parser, parents=None):
publish_parser = parser.add_parser('publish', parents=parents,
help='Publish a collection artifact to Ansible Galaxy.')
publish_parser.set_defaults(func=self.execute_publish)
publish_parser.add_argument('args', metavar='collection_path',
help='The path to the collection tarball to publish.')
publish_parser.add_argument('--no-wait', dest='wait', action='store_false', default=True,
help="Don't wait for import validation results.")
publish_parser.add_argument('--import-timeout', dest='import_timeout', type=int, default=0,
help="The time to wait for the collection import process to finish.")
def post_process_args(self, options):
options = super(GalaxyCLI, self).post_process_args(options)
display.verbosity = options.verbosity
return options
def run(self):
super(GalaxyCLI, self).run()
self.galaxy = Galaxy()
def server_config_def(section, key, required):
return {
'description': 'The %s of the %s Galaxy server' % (key, section),
'ini': [
{
'section': 'galaxy_server.%s' % section,
'key': key,
}
],
'env': [
{'name': 'ANSIBLE_GALAXY_SERVER_%s_%s' % (section.upper(), key.upper())},
],
'required': required,
}
server_def = [('url', True), ('username', False), ('password', False), ('token', False),
('auth_url', False), ('v3', False)]
validate_certs = not context.CLIARGS['ignore_certs']
galaxy_options = {'validate_certs': validate_certs}
for optional_key in ['clear_response_cache', 'no_cache']:
if optional_key in context.CLIARGS:
galaxy_options[optional_key] = context.CLIARGS[optional_key]
config_servers = []
# Need to filter out empty strings or non-truthy values as an empty server list env var is equal to [''].
server_list = [s for s in C.GALAXY_SERVER_LIST or [] if s]
for server_key in server_list:
# Config definitions are looked up dynamically based on the C.GALAXY_SERVER_LIST entry. We look up the
# section [galaxy_server.<server>] for the values url, username, password, and token.
config_dict = dict((k, server_config_def(server_key, k, req)) for k, req in server_def)
defs = AnsibleLoader(yaml.safe_dump(config_dict)).get_single_data()
C.config.initialize_plugin_configuration_definitions('galaxy_server', server_key, defs)
server_options = C.config.get_plugin_options('galaxy_server', server_key)
# auth_url is used to create the token, but not directly by GalaxyAPI, so
# it doesn't need to be passed as kwarg to GalaxyApi
auth_url = server_options.pop('auth_url', None)
token_val = server_options['token'] or NoTokenSentinel
username = server_options['username']
available_api_versions = None
v3 = server_options.pop('v3', None)
if v3:
# This allows a user to explicitly indicate the server uses the /v3 API
# This was added for testing against pulp_ansible and I'm not sure it has
# a practical purpose outside of this use case. As such, this option is not
# documented as of now
server_options['available_api_versions'] = {'v3': '/v3'}
# default case if no auth info is provided.
server_options['token'] = None
if username:
server_options['token'] = BasicAuthToken(username,
server_options['password'])
else:
if token_val:
if auth_url:
server_options['token'] = KeycloakToken(access_token=token_val,
auth_url=auth_url,
validate_certs=validate_certs)
else:
# The galaxy v1 / github / django / 'Token'
server_options['token'] = GalaxyToken(token=token_val)
server_options.update(galaxy_options)
config_servers.append(GalaxyAPI(self.galaxy, server_key, **server_options))
cmd_server = context.CLIARGS['api_server']
cmd_token = GalaxyToken(token=context.CLIARGS['api_key'])
if cmd_server:
# Cmd args take precedence over the config entry but first check if the arg was a name and use that config
# entry, otherwise create a new API entry for the server specified.
config_server = next((s for s in config_servers if s.name == cmd_server), None)
if config_server:
self.api_servers.append(config_server)
else:
self.api_servers.append(GalaxyAPI(self.galaxy, 'cmd_arg', cmd_server, token=cmd_token,
**galaxy_options))
else:
self.api_servers = config_servers
# Default to C.GALAXY_SERVER if no servers were defined
if len(self.api_servers) == 0:
self.api_servers.append(GalaxyAPI(self.galaxy, 'default', C.GALAXY_SERVER, token=cmd_token,
**galaxy_options))
context.CLIARGS['func']()
@property
def api(self):
if self._api:
return self._api
for server in self.api_servers:
try:
if u'v1' in server.available_api_versions:
self._api = server
break
except Exception:
continue
if not self._api:
self._api = self.api_servers[0]
return self._api
def _get_default_collection_path(self):
return C.COLLECTIONS_PATHS[0]
def _parse_requirements_file(self, requirements_file, allow_old_format=True):
"""
Parses an Ansible requirements.yml file and returns all the roles and/or collections defined in it. There are 2
requirements file formats:
# v1 (roles only)
- src: The source of the role, required if include is not set. Can be Galaxy role name, URL to a SCM repo or tarball.
name: Downloads the role to the specified name, defaults to Galaxy name from Galaxy or name of repo if src is a URL.
scm: If src is a URL, specify the SCM. Only git or hg are supported and defaults to git.
version: The version of the role to download. Can also be tag, commit, or branch name and defaults to master.
include: Path to additional requirements.yml files.
# v2 (roles and collections)
---
roles:
# Same as v1 format just under the roles key
collections:
- namespace.collection
- name: namespace.collection
version: version identifier, multiple identifiers are separated by ','
source: the URL or a predefined source name that relates to C.GALAXY_SERVER_LIST
type: git|file|url|galaxy
:param requirements_file: The path to the requirements file.
:param allow_old_format: Will fail if a v1 requirements file is found and this is set to False.
:return: a dict containing the roles and collections found in the requirements file.
"""
requirements = {
'roles': [],
'collections': [],
}
b_requirements_file = to_bytes(requirements_file, errors='surrogate_or_strict')
if not os.path.exists(b_requirements_file):
raise AnsibleError("The requirements file '%s' does not exist." % to_native(requirements_file))
display.vvv("Reading requirement file at '%s'" % requirements_file)
with open(b_requirements_file, 'rb') as req_obj:
try:
file_requirements = yaml.safe_load(req_obj)
except YAMLError as err:
raise AnsibleError(
"Failed to parse the requirements yml at '%s' with the following error:\n%s"
% (to_native(requirements_file), to_native(err)))
if file_requirements is None:
raise AnsibleError("No requirements found in file '%s'" % to_native(requirements_file))
def parse_role_req(requirement):
if "include" not in requirement:
role = RoleRequirement.role_yaml_parse(requirement)
display.vvv("found role %s in yaml file" % to_text(role))
if "name" not in role and "src" not in role:
raise AnsibleError("Must specify name or src for role")
return [GalaxyRole(self.galaxy, self.api, **role)]
else:
b_include_path = to_bytes(requirement["include"], errors="surrogate_or_strict")
if not os.path.isfile(b_include_path):
raise AnsibleError("Failed to find include requirements file '%s' in '%s'"
% (to_native(b_include_path), to_native(requirements_file)))
with open(b_include_path, 'rb') as f_include:
try:
return [GalaxyRole(self.galaxy, self.api, **r) for r in
(RoleRequirement.role_yaml_parse(i) for i in yaml.safe_load(f_include))]
except Exception as e:
raise AnsibleError("Unable to load data from include requirements file: %s %s"
% (to_native(requirements_file), to_native(e)))
if isinstance(file_requirements, list):
# Older format that contains only roles
if not allow_old_format:
raise AnsibleError("Expecting requirements file to be a dict with the key 'collections' that contains "
"a list of collections to install")
for role_req in file_requirements:
requirements['roles'] += parse_role_req(role_req)
else:
# Newer format with a collections and/or roles key
extra_keys = set(file_requirements.keys()).difference(set(['roles', 'collections']))
if extra_keys:
raise AnsibleError("Expecting only 'roles' and/or 'collections' as base keys in the requirements "
"file. Found: %s" % (to_native(", ".join(extra_keys))))
for role_req in file_requirements.get('roles') or []:
requirements['roles'] += parse_role_req(role_req)
for collection_req in file_requirements.get('collections') or []:
if isinstance(collection_req, dict):
req_name = collection_req.get('name', None)
if req_name is None:
raise AnsibleError("Collections requirement entry should contain the key name.")
req_type = collection_req.get('type')
if req_type not in ('file', 'galaxy', 'git', 'url', None):
raise AnsibleError("The collection requirement entry key 'type' must be one of file, galaxy, git, or url.")
req_version = collection_req.get('version', '*')
req_source = collection_req.get('source', None)
if req_source:
# Try and match up the requirement source with our list of Galaxy API servers defined in the
# config, otherwise create a server with that URL without any auth.
req_source = next(iter([a for a in self.api_servers if req_source in [a.name, a.api_server]]),
GalaxyAPI(self.galaxy,
"explicit_requirement_%s" % req_name,
req_source,
validate_certs=not context.CLIARGS['ignore_certs']))
requirements['collections'].append((req_name, req_version, req_source, req_type))
else:
requirements['collections'].append((collection_req, '*', None, None))
return requirements
@staticmethod
def exit_without_ignore(rc=1):
"""
Exits with the specified return code unless the
option --ignore-errors was specified
"""
if not context.CLIARGS['ignore_errors']:
raise AnsibleError('- you can use --ignore-errors to skip failed roles and finish processing the list.')
@staticmethod
def _display_role_info(role_info):
text = [u"", u"Role: %s" % to_text(role_info['name'])]
# Get the top-level 'description' first, falling back to galaxy_info['galaxy_info']['description'].
galaxy_info = role_info.get('galaxy_info', {})
description = role_info.get('description', galaxy_info.get('description', ''))
text.append(u"\tdescription: %s" % description)
for k in sorted(role_info.keys()):
if k in GalaxyCLI.SKIP_INFO_KEYS:
continue
if isinstance(role_info[k], dict):
text.append(u"\t%s:" % (k))
for key in sorted(role_info[k].keys()):
if key in GalaxyCLI.SKIP_INFO_KEYS:
continue
text.append(u"\t\t%s: %s" % (key, role_info[k][key]))
else:
text.append(u"\t%s: %s" % (k, role_info[k]))
# make sure we have a trailing newline returned
text.append(u"")
return u'\n'.join(text)
@staticmethod
def _resolve_path(path):
return os.path.abspath(os.path.expanduser(os.path.expandvars(path)))
@staticmethod
def _get_skeleton_galaxy_yml(template_path, inject_data):
with open(to_bytes(template_path, errors='surrogate_or_strict'), 'rb') as template_obj:
meta_template = to_text(template_obj.read(), errors='surrogate_or_strict')
galaxy_meta = get_collections_galaxy_meta_info()
required_config = []
optional_config = []
for meta_entry in galaxy_meta:
config_list = required_config if meta_entry.get('required', False) else optional_config
value = inject_data.get(meta_entry['key'], None)
if not value:
meta_type = meta_entry.get('type', 'str')
if meta_type == 'str':
value = ''
elif meta_type == 'list':
value = []
elif meta_type == 'dict':
value = {}
meta_entry['value'] = value
config_list.append(meta_entry)
link_pattern = re.compile(r"L\(([^)]+),\s+([^)]+)\)")
const_pattern = re.compile(r"C\(([^)]+)\)")
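# For example, comment_ify("See L(the docs, https://docs.ansible.com) and C(galaxy.yml)")
# renders roughly as "# See the docs <https://docs.ansible.com> and 'galaxy.yml'".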
def comment_ify(v):
if isinstance(v, list):
v = ". ".join([l.rstrip('.') for l in v])
v = link_pattern.sub(r"\1 <\2>", v)
v = const_pattern.sub(r"'\1'", v)
return textwrap.fill(v, width=117, initial_indent="# ", subsequent_indent="# ", break_on_hyphens=False)
loader = DataLoader()
templar = Templar(loader, variables={'required_config': required_config, 'optional_config': optional_config})
templar.environment.filters['comment_ify'] = comment_ify
meta_value = templar.template(meta_template)
return meta_value
def _require_one_of_collections_requirements(self, collections, requirements_file):
if collections and requirements_file:
raise AnsibleError("The positional collection_name arg and --requirements-file are mutually exclusive.")
elif not collections and not requirements_file:
raise AnsibleError("You must specify a collection name or a requirements file.")
elif requirements_file:
requirements_file = GalaxyCLI._resolve_path(requirements_file)
requirements = self._parse_requirements_file(requirements_file, allow_old_format=False)
else:
requirements = {'collections': [], 'roles': []}
for collection_input in collections:
requirement = None
if os.path.isfile(to_bytes(collection_input, errors='surrogate_or_strict')) or \
urlparse(collection_input).scheme.lower() in ['http', 'https'] or \
collection_input.startswith(('git+', 'git@')):
# Arg is a file path or URL to a collection
name = collection_input
else:
name, dummy, requirement = collection_input.partition(':')
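# e.g. 'my_ns.my_collection:>=1.0.0' -> name='my_ns.my_collection', requirement='>=1.0.0'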
requirements['collections'].append((name, requirement or '*', None, None))
return requirements
############################
# execute actions
############################
def execute_role(self):
"""
Perform the action on an Ansible Galaxy role. Must be combined with a further action like delete/install/init
as listed below.
"""
# To satisfy doc build
pass
def execute_collection(self):
"""
Perform the action on an Ansible Galaxy collection. Must be combined with a further action like init/install as
listed below.
"""
# To satisfy doc build
pass
def execute_build(self):
"""
Build an Ansible Galaxy collection artifact that can be stored in a central repository like Ansible Galaxy.
By default, this command builds from the current working directory. You can optionally pass in the
collection input path (where the ``galaxy.yml`` file is).
"""
force = context.CLIARGS['force']
output_path = GalaxyCLI._resolve_path(context.CLIARGS['output_path'])
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
if not os.path.exists(b_output_path):
os.makedirs(b_output_path)
elif os.path.isfile(b_output_path):
raise AnsibleError("- the output collection directory %s is a file - aborting" % to_native(output_path))
for collection_path in context.CLIARGS['args']:
collection_path = GalaxyCLI._resolve_path(collection_path)
build_collection(collection_path, output_path, force)
def execute_download(self):
collections = context.CLIARGS['args']
no_deps = context.CLIARGS['no_deps']
download_path = context.CLIARGS['download_path']
ignore_certs = context.CLIARGS['ignore_certs']
requirements_file = context.CLIARGS['requirements']
if requirements_file:
requirements_file = GalaxyCLI._resolve_path(requirements_file)
requirements = self._require_one_of_collections_requirements(collections, requirements_file)['collections']
download_path = GalaxyCLI._resolve_path(download_path)
b_download_path = to_bytes(download_path, errors='surrogate_or_strict')
if not os.path.exists(b_download_path):
os.makedirs(b_download_path)
download_collections(requirements, download_path, self.api_servers, (not ignore_certs), no_deps,
context.CLIARGS['allow_pre_release'])
return 0
def execute_init(self):
"""
Creates the skeleton framework of a role or collection that complies with the Galaxy metadata format.
Requires a role or collection name. The collection name must be in the format ``<namespace>.<collection>``.
"""
galaxy_type = context.CLIARGS['type']
init_path = context.CLIARGS['init_path']
force = context.CLIARGS['force']
obj_skeleton = context.CLIARGS['{0}_skeleton'.format(galaxy_type)]
obj_name = context.CLIARGS['{0}_name'.format(galaxy_type)]
inject_data = dict(
description='your {0} description'.format(galaxy_type),
ansible_plugin_list_dir=get_versioned_doclink('plugins/plugins.html'),
)
if galaxy_type == 'role':
inject_data.update(dict(
author='your name',
company='your company (optional)',
license='license (GPL-2.0-or-later, MIT, etc)',
role_name=obj_name,
role_type=context.CLIARGS['role_type'],
issue_tracker_url='http://example.com/issue/tracker',
repository_url='http://example.com/repository',
documentation_url='http://docs.example.com',
homepage_url='http://example.com',
min_ansible_version=ansible_version[:3], # x.y
dependencies=[],
))
obj_path = os.path.join(init_path, obj_name)
elif galaxy_type == 'collection':
namespace, collection_name = obj_name.split('.', 1)
inject_data.update(dict(
namespace=namespace,
collection_name=collection_name,
version='1.0.0',
readme='README.md',
authors=['your name <[email protected]>'],
license=['GPL-2.0-or-later'],
repository='http://example.com/repository',
documentation='http://docs.example.com',
homepage='http://example.com',
issues='http://example.com/issue/tracker',
build_ignore=[],
))
obj_path = os.path.join(init_path, namespace, collection_name)
b_obj_path = to_bytes(obj_path, errors='surrogate_or_strict')
if os.path.exists(b_obj_path):
if os.path.isfile(obj_path):
raise AnsibleError("- the path %s already exists, but is a file - aborting" % to_native(obj_path))
elif not force:
raise AnsibleError("- the directory %s already exists. "
"You can use --force to re-initialize this directory,\n"
"however it will reset any main.yml files that may have\n"
"been modified there already." % to_native(obj_path))
if obj_skeleton is not None:
own_skeleton = False
skeleton_ignore_expressions = C.GALAXY_ROLE_SKELETON_IGNORE
else:
own_skeleton = True
obj_skeleton = self.galaxy.default_role_skeleton_path
skeleton_ignore_expressions = ['^.*/.git_keep$']
obj_skeleton = os.path.expanduser(obj_skeleton)
skeleton_ignore_re = [re.compile(x) for x in skeleton_ignore_expressions]
if not os.path.exists(obj_skeleton):
raise AnsibleError("- the skeleton path '{0}' does not exist, cannot init {1}".format(
to_native(obj_skeleton), galaxy_type)
)
loader = DataLoader()
templar = Templar(loader, variables=inject_data)
# create role directory
if not os.path.exists(b_obj_path):
os.makedirs(b_obj_path)
for root, dirs, files in os.walk(obj_skeleton, topdown=True):
rel_root = os.path.relpath(root, obj_skeleton)
rel_dirs = rel_root.split(os.sep)
rel_root_dir = rel_dirs[0]
if galaxy_type == 'collection':
# A collection can contain templates in playbooks/*/templates and roles/*/templates
in_templates_dir = rel_root_dir in ['playbooks', 'roles'] and 'templates' in rel_dirs
else:
in_templates_dir = rel_root_dir == 'templates'
dirs = [d for d in dirs if not any(r.match(d) for r in skeleton_ignore_re)]
for f in files:
filename, ext = os.path.splitext(f)
if any(r.match(os.path.join(rel_root, f)) for r in skeleton_ignore_re):
continue
if galaxy_type == 'collection' and own_skeleton and rel_root == '.' and f == 'galaxy.yml.j2':
# Special use case for galaxy.yml.j2 in our own default collection skeleton. We build the options
# dynamically which requires special options to be set.
# The templated data's keys must match the key name but the inject data contains collection_name
# instead of name. We just make a copy and change the key back to name for this file.
template_data = inject_data.copy()
template_data['name'] = template_data.pop('collection_name')
meta_value = GalaxyCLI._get_skeleton_galaxy_yml(os.path.join(root, rel_root, f), template_data)
b_dest_file = to_bytes(os.path.join(obj_path, rel_root, filename), errors='surrogate_or_strict')
with open(b_dest_file, 'wb') as galaxy_obj:
galaxy_obj.write(to_bytes(meta_value, errors='surrogate_or_strict'))
elif ext == ".j2" and not in_templates_dir:
src_template = os.path.join(root, f)
dest_file = os.path.join(obj_path, rel_root, filename)
template_data = to_text(loader._get_file_contents(src_template)[0], errors='surrogate_or_strict')
b_rendered = to_bytes(templar.template(template_data), errors='surrogate_or_strict')
with open(dest_file, 'wb') as df:
df.write(b_rendered)
else:
f_rel_path = os.path.relpath(os.path.join(root, f), obj_skeleton)
shutil.copyfile(os.path.join(root, f), os.path.join(obj_path, f_rel_path))
for d in dirs:
b_dir_path = to_bytes(os.path.join(obj_path, rel_root, d), errors='surrogate_or_strict')
if not os.path.exists(b_dir_path):
os.makedirs(b_dir_path)
display.display("- %s %s was created successfully" % (galaxy_type.title(), obj_name))
def execute_info(self):
"""
prints out detailed information about an installed role as well as info available from the galaxy API.
"""
roles_path = context.CLIARGS['roles_path']
data = ''
for role in context.CLIARGS['args']:
role_info = {'path': roles_path}
gr = GalaxyRole(self.galaxy, self.api, role)
install_info = gr.install_info
if install_info:
if 'version' in install_info:
install_info['installed_version'] = install_info['version']
del install_info['version']
role_info.update(install_info)
if not context.CLIARGS['offline']:
remote_data = None
try:
remote_data = self.api.lookup_role_by_name(role, False)
except AnsibleError as e:
if e.http_code == 400 and 'Bad Request' in e.message:
# Role does not exist in Ansible Galaxy
data = u"- the role %s was not found" % role
break
raise AnsibleError("Unable to find info about '%s': %s" % (role, e))
if remote_data:
role_info.update(remote_data)
elif context.CLIARGS['offline'] and not gr._exists:
data = u"- the role %s was not found" % role
break
if gr.metadata:
role_info.update(gr.metadata)
req = RoleRequirement()
role_spec = req.role_yaml_parse({'role': role})
if role_spec:
role_info.update(role_spec)
data += self._display_role_info(role_info)
self.pager(data)
def execute_verify(self):
collections = context.CLIARGS['args']
search_paths = context.CLIARGS['collections_path']
ignore_certs = context.CLIARGS['ignore_certs']
ignore_errors = context.CLIARGS['ignore_errors']
requirements_file = context.CLIARGS['requirements']
requirements = self._require_one_of_collections_requirements(collections, requirements_file)['collections']
resolved_paths = [validate_collection_path(GalaxyCLI._resolve_path(path)) for path in search_paths]
verify_collections(requirements, resolved_paths, self.api_servers, (not ignore_certs), ignore_errors,
allow_pre_release=True)
return 0
def execute_install(self):
"""
Install one or more roles (``ansible-galaxy role install``), or one or more collections (``ansible-galaxy collection install``).
You can pass in a list (roles or collections) or use the file
option listed below (these are mutually exclusive). If you pass in a list, it
can be a name (which will be downloaded via the galaxy API and github), or it can be a local tar archive file.
"""
install_items = context.CLIARGS['args']
requirements_file = context.CLIARGS['requirements']
collection_path = None
if requirements_file:
requirements_file = GalaxyCLI._resolve_path(requirements_file)
two_type_warning = "The requirements file '%s' contains {0}s which will be ignored. To install these {0}s " \
"run 'ansible-galaxy {0} install -r' or to install both at the same time run " \
"'ansible-galaxy install -r' without a custom install path." % to_text(requirements_file)
# TODO: Would be nice to share the same behaviour with args and -r in collections and roles.
collection_requirements = []
role_requirements = []
if context.CLIARGS['type'] == 'collection':
collection_path = GalaxyCLI._resolve_path(context.CLIARGS['collections_path'])
requirements = self._require_one_of_collections_requirements(install_items, requirements_file)
collection_requirements = requirements['collections']
if requirements['roles']:
display.vvv(two_type_warning.format('role'))
else:
if not install_items and requirements_file is None:
raise AnsibleOptionsError("- you must specify a user/role name or a roles file")
if requirements_file:
if not (requirements_file.endswith('.yaml') or requirements_file.endswith('.yml')):
raise AnsibleError("Invalid role requirements file, it must end with a .yml or .yaml extension")
requirements = self._parse_requirements_file(requirements_file)
role_requirements = requirements['roles']
# We can only install collections and roles at the same time if the type wasn't specified and the -p
# argument was not used. If collections are present in the requirements then at least display a msg.
galaxy_args = self._raw_args
if requirements['collections'] and (not self._implicit_role or '-p' in galaxy_args or
'--roles-path' in galaxy_args):
# We only want to display a warning if 'ansible-galaxy install -r ... -p ...'. In other cases the user
# was explicit about the type and shouldn't care that collections were skipped.
display_func = display.warning if self._implicit_role else display.vvv
display_func(two_type_warning.format('collection'))
else:
collection_path = self._get_default_collection_path()
collection_requirements = requirements['collections']
else:
# roles were specified directly, so we'll just go out grab them
# (and their dependencies, unless the user doesn't want us to).
for rname in context.CLIARGS['args']:
role = RoleRequirement.role_yaml_parse(rname.strip())
role_requirements.append(GalaxyRole(self.galaxy, self.api, **role))
if not role_requirements and not collection_requirements:
display.display("Skipping install, no requirements found")
return
if role_requirements:
display.display("Starting galaxy role install process")
self._execute_install_role(role_requirements)
if collection_requirements:
display.display("Starting galaxy collection install process")
# Collections can technically be installed even when ansible-galaxy is in role mode so we need to pass in
# the install path as context.CLIARGS['collections_path'] won't be set (default is calculated above).
self._execute_install_collection(collection_requirements, collection_path)
def _execute_install_collection(self, requirements, path):
force = context.CLIARGS['force']
ignore_certs = context.CLIARGS['ignore_certs']
ignore_errors = context.CLIARGS['ignore_errors']
no_deps = context.CLIARGS['no_deps']
force_with_deps = context.CLIARGS['force_with_deps']
allow_pre_release = context.CLIARGS['allow_pre_release'] if 'allow_pre_release' in context.CLIARGS else False
collections_path = C.COLLECTIONS_PATHS
if len([p for p in collections_path if p.startswith(path)]) == 0:
display.warning("The specified collections path '%s' is not part of the configured Ansible "
"collections paths '%s'. The installed collection won't be picked up in an Ansible "
"run." % (to_text(path), to_text(":".join(collections_path))))
output_path = validate_collection_path(path)
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
if not os.path.exists(b_output_path):
os.makedirs(b_output_path)
install_collections(requirements, output_path, self.api_servers, (not ignore_certs), ignore_errors,
no_deps, force, force_with_deps, allow_pre_release=allow_pre_release)
return 0
def _execute_install_role(self, requirements):
role_file = context.CLIARGS['requirements']
no_deps = context.CLIARGS['no_deps']
force_deps = context.CLIARGS['force_with_deps']
force = context.CLIARGS['force'] or force_deps
for role in requirements:
# only process roles in roles files when names matches if given
if role_file and context.CLIARGS['args'] and role.name not in context.CLIARGS['args']:
display.vvv('Skipping role %s' % role.name)
continue
display.vvv('Processing role %s ' % role.name)
# query the galaxy API for the role data
if role.install_info is not None:
if role.install_info['version'] != role.version or force:
if force:
display.display('- changing role %s from %s to %s' %
(role.name, role.install_info['version'], role.version or "unspecified"))
role.remove()
else:
display.warning('- %s (%s) is already installed - use --force to change version to %s' %
(role.name, role.install_info['version'], role.version or "unspecified"))
continue
else:
if not force:
display.display('- %s is already installed, skipping.' % str(role))
continue
try:
installed = role.install()
except AnsibleError as e:
display.warning(u"- %s was NOT installed successfully: %s " % (role.name, to_text(e)))
self.exit_without_ignore()
continue
# install dependencies, if we want them
if not no_deps and installed:
if not role.metadata:
display.warning("Meta file %s is empty. Skipping dependencies." % role.path)
else:
role_dependencies = (role.metadata.get('dependencies') or []) + role.requirements
for dep in role_dependencies:
display.debug('Installing dep %s' % dep)
dep_req = RoleRequirement()
dep_info = dep_req.role_yaml_parse(dep)
dep_role = GalaxyRole(self.galaxy, self.api, **dep_info)
if '.' not in dep_role.name and '.' not in dep_role.src and dep_role.scm is None:
# we know we can skip this, as it's not going to
# be found on galaxy.ansible.com
continue
if dep_role.install_info is None:
if dep_role not in requirements:
display.display('- adding dependency: %s' % to_text(dep_role))
requirements.append(dep_role)
else:
display.display('- dependency %s already pending installation.' % dep_role.name)
else:
if dep_role.install_info['version'] != dep_role.version:
if force_deps:
display.display('- changing dependent role %s from %s to %s' %
(dep_role.name, dep_role.install_info['version'], dep_role.version or "unspecified"))
dep_role.remove()
requirements.append(dep_role)
else:
display.warning('- dependency %s (%s) from role %s differs from already installed version (%s), skipping' %
(to_text(dep_role), dep_role.version, role.name, dep_role.install_info['version']))
else:
if force_deps:
requirements.append(dep_role)
else:
display.display('- dependency %s is already installed, skipping.' % dep_role.name)
if not installed:
display.warning("- %s was NOT installed successfully." % role.name)
self.exit_without_ignore()
return 0
def execute_remove(self):
"""
removes the list of roles passed as arguments from the local system.
"""
if not context.CLIARGS['args']:
raise AnsibleOptionsError('- you must specify at least one role to remove.')
for role_name in context.CLIARGS['args']:
role = GalaxyRole(self.galaxy, self.api, role_name)
try:
if role.remove():
display.display('- successfully removed %s' % role_name)
else:
display.display('- %s is not installed, skipping.' % role_name)
except Exception as e:
raise AnsibleError("Failed to remove role %s: %s" % (role_name, to_native(e)))
return 0
def execute_list(self):
"""
List installed collections or roles
"""
if context.CLIARGS['type'] == 'role':
self.execute_list_role()
elif context.CLIARGS['type'] == 'collection':
self.execute_list_collection()
def execute_list_role(self):
"""
List all roles installed on the local system or a specific role
"""
path_found = False
role_found = False
warnings = []
roles_search_paths = context.CLIARGS['roles_path']
role_name = context.CLIARGS['role']
for path in roles_search_paths:
role_path = GalaxyCLI._resolve_path(path)
if os.path.isdir(path):
path_found = True
else:
warnings.append("- the configured path {0} does not exist.".format(path))
continue
if role_name:
# show the requested role, if it exists
gr = GalaxyRole(self.galaxy, self.api, role_name, path=os.path.join(role_path, role_name))
if os.path.isdir(gr.path):
role_found = True
display.display('# %s' % os.path.dirname(gr.path))
_display_role(gr)
break
warnings.append("- the role %s was not found" % role_name)
else:
if not os.path.exists(role_path):
warnings.append("- the configured path %s does not exist." % role_path)
continue
if not os.path.isdir(role_path):
warnings.append("- the configured path %s, exists, but it is not a directory." % role_path)
continue
display.display('# %s' % role_path)
path_files = os.listdir(role_path)
for path_file in path_files:
gr = GalaxyRole(self.galaxy, self.api, path_file, path=path)
if gr.metadata:
_display_role(gr)
# Do not warn if the role was found in any of the search paths
if role_found and role_name:
warnings = []
for w in warnings:
display.warning(w)
if not path_found:
raise AnsibleOptionsError("- None of the provided paths were usable. Please specify a valid path with --{0}s-path".format(context.CLIARGS['type']))
return 0
def execute_list_collection(self):
"""
List all collections installed on the local system
"""
collections_search_paths = set(context.CLIARGS['collections_path'])
collection_name = context.CLIARGS['collection']
default_collections_path = C.config.get_configuration_definition('COLLECTIONS_PATHS').get('default')
warnings = []
path_found = False
collection_found = False
for path in collections_search_paths:
collection_path = GalaxyCLI._resolve_path(path)
if not os.path.exists(path):
if path in default_collections_path:
# don't warn for missing default paths
continue
warnings.append("- the configured path {0} does not exist.".format(collection_path))
continue
if not os.path.isdir(collection_path):
warnings.append("- the configured path {0}, exists, but it is not a directory.".format(collection_path))
continue
path_found = True
if collection_name:
# list a specific collection
validate_collection_name(collection_name)
namespace, collection = collection_name.split('.')
collection_path = validate_collection_path(collection_path)
b_collection_path = to_bytes(os.path.join(collection_path, namespace, collection), errors='surrogate_or_strict')
if not os.path.exists(b_collection_path):
warnings.append("- unable to find {0} in collection paths".format(collection_name))
continue
if not os.path.isdir(collection_path):
warnings.append("- the configured path {0}, exists, but it is not a directory.".format(collection_path))
continue
collection_found = True
collection = CollectionRequirement.from_path(b_collection_path, False, fallback_metadata=True)
fqcn_width, version_width = _get_collection_widths(collection)
_display_header(collection_path, 'Collection', 'Version', fqcn_width, version_width)
_display_collection(collection, fqcn_width, version_width)
else:
# list all collections
collection_path = validate_collection_path(path)
if os.path.isdir(collection_path):
display.vvv("Searching {0} for collections".format(collection_path))
collections = find_existing_collections(collection_path, fallback_metadata=True)
else:
# There was no 'ansible_collections/' directory in the path, so there
# are no collections here.
display.vvv("No 'ansible_collections' directory found at {0}".format(collection_path))
continue
if not collections:
display.vvv("No collections found at {0}".format(collection_path))
continue
# Display header
fqcn_width, version_width = _get_collection_widths(collections)
_display_header(collection_path, 'Collection', 'Version', fqcn_width, version_width)
# Sort collections by the namespace and name
collections.sort(key=to_text)
for collection in collections:
_display_collection(collection, fqcn_width, version_width)
# Do not warn if the specific collection was found in any of the search paths
if collection_found and collection_name:
warnings = []
for w in warnings:
display.warning(w)
if not path_found:
raise AnsibleOptionsError("- None of the provided paths were usable. Please specify a valid path with --{0}s-path".format(context.CLIARGS['type']))
return 0
def execute_publish(self):
"""
Publish a collection into Ansible Galaxy. Requires the path to the collection tarball to publish.
"""
collection_path = GalaxyCLI._resolve_path(context.CLIARGS['args'])
wait = context.CLIARGS['wait']
timeout = context.CLIARGS['import_timeout']
publish_collection(collection_path, self.api, wait, timeout)
def execute_search(self):
''' searches for roles on the Ansible Galaxy server'''
page_size = 1000
search = None
if context.CLIARGS['args']:
search = '+'.join(context.CLIARGS['args'])
if not search and not context.CLIARGS['platforms'] and not context.CLIARGS['galaxy_tags'] and not context.CLIARGS['author']:
raise AnsibleError("Invalid query. At least one search term, platform, galaxy tag or author must be provided.")
response = self.api.search_roles(search, platforms=context.CLIARGS['platforms'],
tags=context.CLIARGS['galaxy_tags'], author=context.CLIARGS['author'], page_size=page_size)
if response['count'] == 0:
display.display("No roles match your search.", color=C.COLOR_ERROR)
return True
data = [u'']
if response['count'] > page_size:
data.append(u"Found %d roles matching your search. Showing first %s." % (response['count'], page_size))
else:
data.append(u"Found %d roles matching your search:" % response['count'])
max_len = []
for role in response['results']:
max_len.append(len(role['username'] + '.' + role['name']))
name_len = max(max_len)
format_str = u" %%-%ds %%s" % name_len
data.append(u'')
data.append(format_str % (u"Name", u"Description"))
data.append(format_str % (u"----", u"-----------"))
for role in response['results']:
data.append(format_str % (u'%s.%s' % (role['username'], role['name']), role['description']))
data = u'\n'.join(data)
self.pager(data)
return True
def execute_import(self):
""" used to import a role into Ansible Galaxy """
colors = {
'INFO': 'normal',
'WARNING': C.COLOR_WARN,
'ERROR': C.COLOR_ERROR,
'SUCCESS': C.COLOR_OK,
'FAILED': C.COLOR_ERROR,
}
github_user = to_text(context.CLIARGS['github_user'], errors='surrogate_or_strict')
github_repo = to_text(context.CLIARGS['github_repo'], errors='surrogate_or_strict')
if context.CLIARGS['check_status']:
task = self.api.get_import_task(github_user=github_user, github_repo=github_repo)
else:
# Submit an import request
task = self.api.create_import_task(github_user, github_repo,
reference=context.CLIARGS['reference'],
role_name=context.CLIARGS['role_name'])
if len(task) > 1:
# found multiple roles associated with github_user/github_repo
display.display("WARNING: More than one Galaxy role associated with Github repo %s/%s." % (github_user, github_repo),
color='yellow')
display.display("The following Galaxy roles are being updated:" + u'\n', color=C.COLOR_CHANGED)
for t in task:
display.display('%s.%s' % (t['summary_fields']['role']['namespace'], t['summary_fields']['role']['name']), color=C.COLOR_CHANGED)
display.display(u'\nTo properly namespace this role, remove each of the above and re-import %s/%s from scratch' % (github_user, github_repo),
color=C.COLOR_CHANGED)
return 0
# found a single role as expected
display.display("Successfully submitted import request %d" % task[0]['id'])
if not context.CLIARGS['wait']:
display.display("Role name: %s" % task[0]['summary_fields']['role']['name'])
display.display("Repo: %s/%s" % (task[0]['github_user'], task[0]['github_repo']))
if context.CLIARGS['check_status'] or context.CLIARGS['wait']:
# Get the status of the import
msg_list = []
finished = False
while not finished:
task = self.api.get_import_task(task_id=task[0]['id'])
for msg in task[0]['summary_fields']['task_messages']:
if msg['id'] not in msg_list:
display.display(msg['message_text'], color=colors[msg['message_type']])
msg_list.append(msg['id'])
if task[0]['state'] in ['SUCCESS', 'FAILED']:
finished = True
else:
time.sleep(10)
return 0
def execute_setup(self):
""" Setup an integration from Github or Travis for Ansible Galaxy roles"""
if context.CLIARGS['setup_list']:
# List existing integration secrets
secrets = self.api.list_secrets()
if len(secrets) == 0:
# None found
display.display("No integrations found.")
return 0
display.display(u'\n' + "ID Source Repo", color=C.COLOR_OK)
display.display("---------- ---------- ----------", color=C.COLOR_OK)
for secret in secrets:
display.display("%-10s %-10s %s/%s" % (secret['id'], secret['source'], secret['github_user'],
secret['github_repo']), color=C.COLOR_OK)
return 0
if context.CLIARGS['remove_id']:
# Remove a secret
self.api.remove_secret(context.CLIARGS['remove_id'])
display.display("Secret removed. Integrations using this secret will not longer work.", color=C.COLOR_OK)
return 0
source = context.CLIARGS['source']
github_user = context.CLIARGS['github_user']
github_repo = context.CLIARGS['github_repo']
secret = context.CLIARGS['secret']
resp = self.api.add_secret(source, github_user, github_repo, secret)
display.display("Added integration for %s %s/%s" % (resp['source'], resp['github_user'], resp['github_repo']))
return 0
def execute_delete(self):
""" Delete a role from Ansible Galaxy. """
github_user = context.CLIARGS['github_user']
github_repo = context.CLIARGS['github_repo']
resp = self.api.delete_role(github_user, github_repo)
if len(resp['deleted_roles']) > 1:
display.display("Deleted the following roles:")
display.display("ID User Name")
display.display("------ --------------- ----------")
for role in resp['deleted_roles']:
display.display("%-8s %-15s %s" % (role.id, role.namespace, role.name))
display.display(resp['status'])
return True
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 70,147 |
`ansible-galaxy collection list` vs site-packages
|
##### SUMMARY
`ansible-base collection list` doesn't show the packages installed by the new `ansible` package
As a workaround, you need to do:
```
COLLECTION_INSTALL=$(python -c 'import ansible, os.path ; print("%s/../ansible_collections" % os.path.dirname(ansible.__file__))')
ansible-galaxy collection list -p "$COLLECTION_INSTALL"
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-galaxy
##### ANSIBLE VERSION
```paste below
2.10 beta1
```
##### CONFIGURATION
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
1. Install Ansible 2.10.0a1
https://pypi.python.org/packages/source/a/ansible/ansible-2.10.0a1.tar.gz
2. `ansible-galaxy collection list`
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/70147
|
https://github.com/ansible/ansible/pull/70173
|
0a60e5e341f9a781c27462e047ebe0f73129f4a1
|
e7dee73774b0b436551e4993ba917eec1e03af2d
| 2020-06-18T18:36:33Z |
python
| 2020-12-10T23:59:33Z |
test/integration/targets/ansible-galaxy/runme.sh
|
#!/usr/bin/env bash
set -eux -o pipefail
ansible-playbook setup.yml "$@"
trap 'ansible-playbook ${ANSIBLE_PLAYBOOK_DIR}/cleanup.yml' EXIT
# Very simple version test
ansible-galaxy --version
# Need a relative custom roles path for testing various scenarios of -p
galaxy_relative_rolespath="my/custom/roles/path"
# Status message function (f_ to designate that it's a function)
f_ansible_galaxy_status()
{
printf "\n\n\n### Testing ansible-galaxy: %s\n" "${@}"
}
# Use to initialize a repository. Must call the post function too.
f_ansible_galaxy_create_role_repo_pre()
{
repo_name=$1
repo_dir=$2
pushd "${repo_dir}"
ansible-galaxy init "${repo_name}"
pushd "${repo_name}"
git init .
# Prep git, because it doesn't work inside a docker container without it
git config user.email "[email protected]"
git config user.name "Ansible Tester"
# f_ansible_galaxy_create_role_repo_post
}
# Call after f_ansible_galaxy_create_role_repo_pre.
f_ansible_galaxy_create_role_repo_post()
{
repo_name=$1
repo_tar=$2
# f_ansible_galaxy_create_role_repo_pre
git add .
git commit -m "local testing ansible galaxy role"
git archive \
--format=tar \
--prefix="${repo_name}/" \
master > "${repo_tar}"
popd # "${repo_name}"
popd # "${repo_dir}"
}
# Prep the local git repos with role and make a tar archive so we can test
# different things
galaxy_local_test_role="test-role"
galaxy_local_test_role_dir=$(mktemp -d)
galaxy_local_test_role_git_repo="${galaxy_local_test_role_dir}/${galaxy_local_test_role}"
galaxy_local_test_role_tar="${galaxy_local_test_role_dir}/${galaxy_local_test_role}.tar"
f_ansible_galaxy_create_role_repo_pre "${galaxy_local_test_role}" "${galaxy_local_test_role_dir}"
f_ansible_galaxy_create_role_repo_post "${galaxy_local_test_role}" "${galaxy_local_test_role_tar}"
galaxy_local_parent_role="parent-role"
galaxy_local_parent_role_dir=$(mktemp -d)
galaxy_local_parent_role_git_repo="${galaxy_local_parent_role_dir}/${galaxy_local_parent_role}"
galaxy_local_parent_role_tar="${galaxy_local_parent_role_dir}/${galaxy_local_parent_role}.tar"
# Create parent-role repository
f_ansible_galaxy_create_role_repo_pre "${galaxy_local_parent_role}" "${galaxy_local_parent_role_dir}"
cat <<EOF > meta/requirements.yml
- src: git+file:///${galaxy_local_test_role_git_repo}
EOF
f_ansible_galaxy_create_role_repo_post "${galaxy_local_parent_role}" "${galaxy_local_parent_role_tar}"
# Galaxy install test case
#
# Install local git repo
f_ansible_galaxy_status "install of local git repo"
galaxy_testdir=$(mktemp -d)
pushd "${galaxy_testdir}"
ansible-galaxy install git+file:///"${galaxy_local_test_role_git_repo}" "$@"
# Test that the role was installed to the expected directory
[[ -d "${HOME}/.ansible/roles/${galaxy_local_test_role}" ]]
popd # ${galaxy_testdir}
rm -fr "${galaxy_testdir}"
rm -fr "${HOME}/.ansible/roles/${galaxy_local_test_role}"
# Galaxy install test case
#
# Install local git repo and ensure that if a role_path is passed, it is in fact used
f_ansible_galaxy_status "install of local git repo with -p \$role_path"
galaxy_testdir=$(mktemp -d)
pushd "${galaxy_testdir}"
mkdir -p "${galaxy_relative_rolespath}"
ansible-galaxy install git+file:///"${galaxy_local_test_role_git_repo}" -p "${galaxy_relative_rolespath}" "$@"
# Test that the role was installed to the expected directory
[[ -d "${galaxy_relative_rolespath}/${galaxy_local_test_role}" ]]
popd # ${galaxy_testdir}
rm -fr "${galaxy_testdir}"
# Galaxy install test case
#
# Install local git repo with a meta/requirements.yml
f_ansible_galaxy_status "install of local git repo with meta/requirements.yml"
galaxy_testdir=$(mktemp -d)
pushd "${galaxy_testdir}"
ansible-galaxy install git+file:///"${galaxy_local_parent_role_git_repo}" "$@"
# Test that the role was installed to the expected directory
[[ -d "${HOME}/.ansible/roles/${galaxy_local_parent_role}" ]]
# Test that the dependency was also installed
[[ -d "${HOME}/.ansible/roles/${galaxy_local_test_role}" ]]
popd # ${galaxy_testdir}
rm -fr "${galaxy_testdir}"
rm -fr "${HOME}/.ansible/roles/${galaxy_local_parent_role}"
rm -fr "${HOME}/.ansible/roles/${galaxy_local_test_role}"
# Galaxy install test case
#
# Install local git repo with a meta/requirements.yml + --no-deps argument
f_ansible_galaxy_status "install of local git repo with meta/requirements.yml + --no-deps argument"
galaxy_testdir=$(mktemp -d)
pushd "${galaxy_testdir}"
ansible-galaxy install git+file:///"${galaxy_local_parent_role_git_repo}" --no-deps "$@"
# Test that the role was installed to the expected directory
[[ -d "${HOME}/.ansible/roles/${galaxy_local_parent_role}" ]]
# Test that the dependency was not installed
[[ ! -d "${HOME}/.ansible/roles/${galaxy_local_test_role}" ]]
popd # ${galaxy_testdir}
rm -fr "${galaxy_testdir}"
rm -fr "${HOME}/.ansible/roles/${galaxy_local_test_role}"
# Galaxy install test case
#
# Ensure that if both a role_file and role_path is provided, they are both
# honored
#
# Protect against regression (GitHub Issue #35217)
# https://github.com/ansible/ansible/issues/35217
f_ansible_galaxy_status \
"install of local git repo and local tarball with -p \$role_path and -r \$role_file" \
"Protect against regression (Issue #35217)"
galaxy_testdir=$(mktemp -d)
pushd "${galaxy_testdir}"
git clone "${galaxy_local_test_role_git_repo}" "${galaxy_local_test_role}"
ansible-galaxy init roles-path-bug "$@"
pushd roles-path-bug
cat <<EOF > ansible.cfg
[defaults]
roles_path = ../:../../:../roles:roles/
EOF
cat <<EOF > requirements.yml
---
- src: ${galaxy_local_test_role_tar}
name: ${galaxy_local_test_role}
EOF
ansible-galaxy install -r requirements.yml -p roles/ "$@"
popd # roles-path-bug
# Test that the role was installed to the expected directory
[[ -d "${galaxy_testdir}/roles-path-bug/roles/${galaxy_local_test_role}" ]]
popd # ${galaxy_testdir}
rm -fr "${galaxy_testdir}"
# Galaxy role list tests
#
# Basic tests to ensure listing roles works
f_ansible_galaxy_status "role list"
galaxy_testdir=$(mktemp -d)
pushd "${galaxy_testdir}"
ansible-galaxy install git+file:///"${galaxy_local_test_role_git_repo}" "$@"
ansible-galaxy role list | tee out.txt
ansible-galaxy role list test-role | tee -a out.txt
[[ $(grep -c '^- test-role' out.txt ) -eq 2 ]]
popd # ${galaxy_testdir}
# Galaxy role test case
#
# Test listing a specific role that is not in the first path in ANSIBLE_ROLES_PATH.
# https://github.com/ansible/ansible/issues/60167#issuecomment-585460706
f_ansible_galaxy_status \
"list specific role not in the first path in ANSIBLE_ROLES_PATH"
role_testdir=$(mktemp -d)
pushd "${role_testdir}"
mkdir testroles
ansible-galaxy role init --init-path ./local-roles quark
ANSIBLE_ROLES_PATH=./local-roles:${HOME}/.ansible/roles ansible-galaxy role list quark | tee out.txt
[[ $(grep -c 'not found' out.txt) -eq 0 ]]
ANSIBLE_ROLES_PATH=${HOME}/.ansible/roles:./local-roles ansible-galaxy role list quark | tee out.txt
[[ $(grep -c 'not found' out.txt) -eq 0 ]]
popd # ${role_testdir}
rm -fr "${role_testdir}"
# Galaxy role info tests
# Get info about role that is not installed
f_ansible_galaxy_status "role info"
galaxy_testdir=$(mktemp -d)
pushd "${galaxy_testdir}"
ansible-galaxy role info samdoran.fish | tee out.txt
[[ $(grep -c 'not found' out.txt ) -eq 0 ]]
[[ $(grep -c 'Role:.*samdoran\.fish' out.txt ) -eq 1 ]]
popd # ${galaxy_testdir}
f_ansible_galaxy_status \
"role info non-existant role"
role_testdir=$(mktemp -d)
pushd "${role_testdir}"
ansible-galaxy role info notaroll | tee out.txt
grep -- '- the role notaroll was not found' out.txt
f_ansible_galaxy_status \
"role info description offline"
mkdir testroles
ansible-galaxy role init testdesc --init-path ./testroles
# Only galaxy_info['description'] exists in file
sed -i -e 's#[[:space:]]\{1,\}description:.*$# description: Description in galaxy_info#' ./testroles/testdesc/meta/main.yml
ansible-galaxy role info -p ./testroles --offline testdesc | tee out.txt
grep 'description: Description in galaxy_info' out.txt
# Both top level 'description' and galaxy_info['description'] exist in file
# Use shell-fu instead of sed to prepend a line to a file because BSD
# and macOS sed don't work the same as GNU sed.
echo 'description: Top level' | \
cat - ./testroles/testdesc/meta/main.yml > tmp.yml && \
mv tmp.yml ./testroles/testdesc/meta/main.yml
ansible-galaxy role info -p ./testroles --offline testdesc | tee out.txt
grep 'description: Top level' out.txt
# Only top level 'description' exists in file
sed -i.bak '/^[[:space:]]\{1,\}description: Description in galaxy_info/d' ./testroles/testdesc/meta/main.yml
ansible-galaxy role info -p ./testroles --offline testdesc | tee out.txt
grep 'description: Top level' out.txt
# test multiple role listing
ansible-galaxy role init otherrole --init-path ./testroles
ansible-galaxy role info -p ./testroles --offline testdesc otherrole | tee out.txt
grep 'Role: testdesc' out.txt
grep 'Role: otherrole' out.txt
popd # ${role_testdir}
rm -fr "${role_testdir}"
# Properly list roles when the role name is a subset of the path, or the role
# name is the same name as the parent directory of the role. Issue #67365
#
# ./parrot/parrot
# ./parrot/arr
# ./testing-roles/test
f_ansible_galaxy_status \
"list roles where the role name is the same or a subset of the role path (#67365)"
role_testdir=$(mktemp -d)
pushd "${role_testdir}"
mkdir parrot
ansible-galaxy role init --init-path ./parrot parrot
ansible-galaxy role init --init-path ./parrot parrot-ship
ansible-galaxy role init --init-path ./parrot arr
ansible-galaxy role list -p ./parrot | tee out.txt
[[ $(grep -Ec '\- (parrot|arr)' out.txt) -eq 3 ]]
ansible-galaxy role list test-role | tee -a out.txt
popd # ${role_testdir}
rm -rf "${role_testdir}"
f_ansible_galaxy_status \
"Test role with non-ascii characters"
role_testdir=$(mktemp -d)
pushd "${role_testdir}"
mkdir nonascii
ansible-galaxy role init --init-path ./nonascii nonascii
touch nonascii/ÅÑŚÌβŁÈ.txt
tar czvf nonascii.tar.gz nonascii
ansible-galaxy role install -p ./roles nonascii.tar.gz
popd # ${role_testdir}
rm -rf "${role_testdir}"
#################################
# ansible-galaxy collection tests
#################################
# TODO: Move these to ansible-galaxy-collection
galaxy_testdir=$(mktemp -d)
pushd "${galaxy_testdir}"
## ansible-galaxy collection list tests
# Create more collections and put them in various places
f_ansible_galaxy_status \
"setting up for collection list tests"
rm -rf ansible_test/* install/*
NAMES=(zoo museum airport)
for n in "${NAMES[@]}"; do
ansible-galaxy collection init "ansible_test.$n"
ansible-galaxy collection build "ansible_test/$n"
done
ansible-galaxy collection install ansible_test-zoo-1.0.0.tar.gz
ansible-galaxy collection install ansible_test-museum-1.0.0.tar.gz -p ./install
ansible-galaxy collection install ansible_test-airport-1.0.0.tar.gz -p ./local
# Change the collection version and install to another location
sed -i -e 's#^version:.*#version: 2.5.0#' ansible_test/zoo/galaxy.yml
ansible-galaxy collection build ansible_test/zoo
ansible-galaxy collection install ansible_test-zoo-2.5.0.tar.gz -p ./local
# Test listing a collection that contains a galaxy.yml
ansible-galaxy collection init "ansible_test.development"
mv ./ansible_test/development "${galaxy_testdir}/local/ansible_collections/ansible_test/"
export ANSIBLE_COLLECTIONS_PATH=~/.ansible/collections:${galaxy_testdir}/local
f_ansible_galaxy_status \
"collection list all collections"
ansible-galaxy collection list -p ./install | tee out.txt
[[ $(grep -c ansible_test out.txt) -eq 5 ]]
f_ansible_galaxy_status \
"collection list specific collection"
ansible-galaxy collection list -p ./install ansible_test.airport | tee out.txt
[[ $(grep -c 'ansible_test\.airport' out.txt) -eq 1 ]]
f_ansible_galaxy_status \
"collection list specific collection which contains galaxy.yml"
ansible-galaxy collection list -p ./install ansible_test.development 2>&1 | tee out.txt
[[ $(grep -c 'ansible_test\.development' out.txt) -eq 1 ]]
[[ $(grep -c 'WARNING' out.txt) -eq 0 ]]
f_ansible_galaxy_status \
"collection list specific collection found in multiple places"
ansible-galaxy collection list -p ./install ansible_test.zoo | tee out.txt
[[ $(grep -c 'ansible_test\.zoo' out.txt) -eq 2 ]]
f_ansible_galaxy_status \
"collection list all with duplicate paths"
ansible-galaxy collection list -p ~/.ansible/collections | tee out.txt
[[ $(grep -c '# /root/.ansible/collections/ansible_collections' out.txt) -eq 1 ]]
f_ansible_galaxy_status \
"collection list invalid collection name"
ansible-galaxy collection list -p ./install dirty.wraughten.name "$@" 2>&1 | tee out.txt || echo "expected failure"
grep 'ERROR! Invalid collection name' out.txt
f_ansible_galaxy_status \
"collection list path not found"
ansible-galaxy collection list -p ./nope "$@" 2>&1 | tee out.txt || echo "expected failure"
grep '\[WARNING\]: - the configured path' out.txt
f_ansible_galaxy_status \
"collection list missing ansible_collections dir inside path"
mkdir emptydir
ansible-galaxy collection list -p ./emptydir "$@"
rmdir emptydir
unset ANSIBLE_COLLECTIONS_PATH
## end ansible-galaxy collection list
popd # ${galaxy_testdir}
rm -fr "${galaxy_testdir}"
rm -fr "${galaxy_local_test_role_dir}"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,234 |
Yum and dnf module comparison operators not documented
|
##### Summary
Document " <, >, <=, => " operators in name parameter of yum/dnf module. Currently the documentation does not mention that comparison operators are available, but they do work.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
yum/dnf module
##### ADDITIONAL INFORMATION
The value operators work currently in the yum/dnf modules
```
yum:
name: "pkg >= 1"
state: present
```
Yum/dnf will determine the version available and install it if it matches the comparison operator
|
https://github.com/ansible/ansible/issues/61234
|
https://github.com/ansible/ansible/pull/72763
|
e7dee73774b0b436551e4993ba917eec1e03af2d
|
0044091a055dd9cd448f7639a65b7e8cc3dacbf1
| 2019-08-23T15:36:24Z |
python
| 2020-12-11T15:31:19Z |
changelogs/fragments/61234-yum-dnf-version-comp-doc.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,234 |
Yum and dnf module comparison operators not documented
|
##### Summary
Document " <, >, <=, => " operators in name parameter of yum/dnf module. Currently the documentation does not mention that comparison operators are available, but they do work.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
yum/dnf module
##### ADDITIONAL INFORMATION
The value operators work currently in the yum/dnf modules
```
yum:
name: "pkg >= 1"
state: present
```
Yum/dnf will determine the version available and install it if it matches the comparison operator
|
https://github.com/ansible/ansible/issues/61234
|
https://github.com/ansible/ansible/pull/72763
|
e7dee73774b0b436551e4993ba917eec1e03af2d
|
0044091a055dd9cd448f7639a65b7e8cc3dacbf1
| 2019-08-23T15:36:24Z |
python
| 2020-12-11T15:31:19Z |
lib/ansible/modules/dnf.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright 2015 Cristian van Ee <cristian at cvee.org>
# Copyright 2015 Igor Gnatenko <[email protected]>
# Copyright 2018 Adam Miller <[email protected]>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: dnf
version_added: 1.9
short_description: Manages packages with the I(dnf) package manager
description:
- Installs, upgrades, removes, and lists packages and groups with the I(dnf) package manager.
options:
name:
description:
- "A package name or package specifier with version, like C(name-1.0).
When using state=latest, this can be '*' which means run: dnf -y update.
You can also pass a url or a local path to a rpm file.
To operate on several packages this can accept a comma separated string of packages or a list of packages."
required: true
aliases:
- pkg
type: list
elements: str
list:
description:
- Various (non-idempotent) commands for usage with C(/usr/bin/ansible) and I(not) playbooks. See examples.
type: str
state:
description:
- Whether to install (C(present), C(latest)), or remove (C(absent)) a package.
- Default is C(None), however in effect the default action is C(present) unless the C(autoremove) option is
enabled for this module, in which case C(absent) is inferred.
choices: ['absent', 'present', 'installed', 'removed', 'latest']
type: str
enablerepo:
description:
- I(Repoid) of repositories to enable for the install/update operation.
These repos will not persist beyond the transaction.
When specifying multiple repos, separate them with a ",".
type: list
elements: str
disablerepo:
description:
- I(Repoid) of repositories to disable for the install/update operation.
These repos will not persist beyond the transaction.
When specifying multiple repos, separate them with a ",".
type: list
elements: str
conf_file:
description:
- The remote dnf configuration file to use for the transaction.
type: str
disable_gpg_check:
description:
- Whether to disable the GPG checking of signatures of packages being
installed. Has an effect only if state is I(present) or I(latest).
- This setting affects packages installed from a repository as well as
"local" packages installed from the filesystem or a URL.
type: bool
default: 'no'
installroot:
description:
- Specifies an alternative installroot, relative to which all packages
will be installed.
version_added: "2.3"
default: "/"
type: str
releasever:
description:
- Specifies an alternative release from which all packages will be
installed.
version_added: "2.6"
type: str
autoremove:
description:
- If C(yes), removes all "leaf" packages from the system that were originally
installed as dependencies of user-installed packages but which are no longer
required by any such package. Should be used alone or when state is I(absent)
type: bool
default: "no"
version_added: "2.4"
exclude:
description:
- Package name(s) to exclude when state=present, or latest. This can be a
list or a comma separated string.
version_added: "2.7"
type: list
elements: str
skip_broken:
description:
- Skip packages with broken dependencies (depsolve) that are causing problems.
type: bool
default: "no"
version_added: "2.7"
update_cache:
description:
- Force dnf to check if cache is out of date and redownload if needed.
Has an effect only if state is I(present) or I(latest).
type: bool
default: "no"
aliases: [ expire-cache ]
version_added: "2.7"
update_only:
description:
- When using latest, only update installed packages. Do not install packages.
- Has an effect only if state is I(latest)
default: "no"
type: bool
version_added: "2.7"
security:
description:
- If set to C(yes), and C(state=latest) then only installs updates that have been marked security related.
- Note that, similar to ``dnf upgrade-minimal``, this filter applies to dependencies as well.
type: bool
default: "no"
version_added: "2.7"
bugfix:
description:
- If set to C(yes), and C(state=latest) then only installs updates that have been marked bugfix related.
- Note that, similar to ``dnf upgrade-minimal``, this filter applies to dependencies as well.
default: "no"
type: bool
version_added: "2.7"
enable_plugin:
description:
- I(Plugin) name to enable for the install/update operation.
The enabled plugin will not persist beyond the transaction.
version_added: "2.7"
type: list
elements: str
disable_plugin:
description:
- I(Plugin) name to disable for the install/update operation.
The disabled plugins will not persist beyond the transaction.
version_added: "2.7"
type: list
elements: str
disable_excludes:
description:
- Disable the excludes defined in DNF config files.
- If set to C(all), disables all excludes.
- If set to C(main), disable excludes defined in [main] in dnf.conf.
- If set to C(repoid), disable excludes defined for given repo id.
version_added: "2.7"
type: str
validate_certs:
description:
- This only applies if using a https url as the source of the rpm. e.g. for localinstall. If set to C(no), the SSL certificates will not be validated.
      - This should only be set to C(no) when used on personally controlled sites using self-signed certificates, as it avoids verifying the source site.
type: bool
default: "yes"
version_added: "2.7"
allow_downgrade:
description:
      - Specify whether the named package and version is allowed to downgrade
        a possibly already installed higher version of that package.
Note that setting allow_downgrade=True can make this module
behave in a non-idempotent way. The task could end up with a set
of packages that does not match the complete list of specified
packages to install (because dependencies between the downgraded
package and others can cause changes to the packages which were
in the earlier transaction).
type: bool
default: "no"
version_added: "2.7"
install_repoquery:
description:
      - This is effectively a no-op in DNF as it is not needed, but it is an accepted parameter for feature
parity/compatibility with the I(yum) module.
type: bool
default: "yes"
version_added: "2.7"
download_only:
description:
- Only download the packages, do not install them.
default: "no"
type: bool
version_added: "2.7"
lock_timeout:
description:
- Amount of time to wait for the dnf lockfile to be freed.
required: false
default: 30
type: int
version_added: "2.8"
install_weak_deps:
description:
- Will also install all packages linked by a weak dependency relation.
type: bool
default: "yes"
version_added: "2.8"
download_dir:
description:
- Specifies an alternate directory to store packages.
- Has an effect only if I(download_only) is specified.
type: str
version_added: "2.8"
allowerasing:
description:
- If C(yes) it allows erasing of installed packages to resolve dependencies.
required: false
type: bool
default: "no"
version_added: "2.10"
nobest:
description:
- Set best option to False, so that transactions are not limited to best candidates only.
required: false
type: bool
default: "no"
version_added: "2.11"
notes:
  - When used with a `loop:` each package will be processed individually; it is much more efficient to pass the list directly to the `name` option.
- Group removal doesn't work if the group was installed with Ansible because
upstream dnf's API doesn't properly mark groups as installed, therefore upon
removal the module is unable to detect that the group is installed
(https://bugzilla.redhat.com/show_bug.cgi?id=1620324)
requirements:
- "python >= 2.6"
- python-dnf
  - "for the autoremove option you need dnf >= 2.0.1"
author:
- Igor Gnatenko (@ignatenkobrain) <[email protected]>
- Cristian van Ee (@DJMuggs) <cristian at cvee.org>
- Berend De Schouwer (@berenddeschouwer)
- Adam Miller (@maxamillion) <[email protected]>
'''
EXAMPLES = '''
- name: Install the latest version of Apache
dnf:
name: httpd
state: latest
- name: Install the latest version of Apache and MariaDB
dnf:
name:
- httpd
- mariadb-server
state: latest
- name: Remove the Apache package
dnf:
name: httpd
state: absent
- name: Install the latest version of Apache from the testing repo
dnf:
name: httpd
enablerepo: testing
state: present
- name: Upgrade all packages
dnf:
name: "*"
state: latest
- name: Install the nginx rpm from a remote repo
dnf:
name: 'http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm'
state: present
- name: Install nginx rpm from a local file
dnf:
name: /usr/local/src/nginx-release-centos-6-0.el6.ngx.noarch.rpm
state: present
- name: Install the 'Development tools' package group
dnf:
name: '@Development tools'
state: present
- name: Autoremove unneeded packages installed as dependencies
dnf:
autoremove: yes
- name: Uninstall httpd but keep its dependencies
dnf:
name: httpd
state: absent
autoremove: no
- name: Install a modularity appstream with defined stream and profile
dnf:
name: '@postgresql:9.6/client'
state: present
- name: Install a modularity appstream with defined stream
dnf:
name: '@postgresql:9.6'
state: present
- name: Install a modularity appstream with defined profile
dnf:
name: '@postgresql/client'
state: present
'''
import os
import re
import sys
try:
import dnf
import dnf.cli
import dnf.const
import dnf.exceptions
import dnf.subject
import dnf.util
HAS_DNF = True
except ImportError:
HAS_DNF = False
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.urls import fetch_file
from ansible.module_utils.six import PY2, text_type
from distutils.version import LooseVersion
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.yumdnf import YumDnf, yumdnf_argument_spec
class DnfModule(YumDnf):
"""
DNF Ansible module back-end implementation
"""
def __init__(self, module):
# This populates instance vars for all argument spec params
super(DnfModule, self).__init__(module)
self._ensure_dnf()
self.lockfile = "/var/cache/dnf/*_lock.pid"
self.pkg_mgr_name = "dnf"
try:
self.with_modules = dnf.base.WITH_MODULES
except AttributeError:
self.with_modules = False
# DNF specific args that are not part of YumDnf
self.allowerasing = self.module.params['allowerasing']
self.nobest = self.module.params['nobest']
def is_lockfile_pid_valid(self):
# FIXME? it looks like DNF takes care of invalid lock files itself?
# https://github.com/ansible/ansible/issues/57189
return True
def _sanitize_dnf_error_msg_install(self, spec, error):
"""
For unhandled dnf.exceptions.Error scenarios, there are certain error
messages we want to filter in an install scenario. Do that here.
"""
if (
to_text("no package matched") in to_text(error) or
to_text("No match for argument:") in to_text(error)
):
return "No package {0} available.".format(spec)
return error
def _sanitize_dnf_error_msg_remove(self, spec, error):
"""
For unhandled dnf.exceptions.Error scenarios, there are certain error
messages we want to ignore in a removal scenario as known benign
failures. Do that here.
"""
if (
'no package matched' in to_native(error) or
'No match for argument:' in to_native(error)
):
return (False, "{0} is not installed".format(spec))
# Return value is tuple of:
# ("Is this actually a failure?", "Error Message")
return (True, error)
def _package_dict(self, package):
"""Return a dictionary of information for the package."""
# NOTE: This no longer contains the 'dnfstate' field because it is
# already known based on the query type.
result = {
'name': package.name,
'arch': package.arch,
'epoch': str(package.epoch),
'release': package.release,
'version': package.version,
'repo': package.repoid}
result['nevra'] = '{epoch}:{name}-{version}-{release}.{arch}'.format(
**result)
if package.installtime == 0:
result['yumstate'] = 'available'
else:
result['yumstate'] = 'installed'
return result
def _packagename_dict(self, packagename):
"""
Return a dictionary of information for a package name string or None
if the package name doesn't contain at least all NVR elements
"""
if packagename[-4:] == '.rpm':
packagename = packagename[:-4]
# This list was auto generated on a Fedora 28 system with the following one-liner
# printf '[ '; for arch in $(ls /usr/lib/rpm/platform); do printf '"%s", ' ${arch%-linux}; done; printf ']\n'
redhat_rpm_arches = [
"aarch64", "alphaev56", "alphaev5", "alphaev67", "alphaev6", "alpha",
"alphapca56", "amd64", "armv3l", "armv4b", "armv4l", "armv5tejl", "armv5tel",
"armv5tl", "armv6hl", "armv6l", "armv7hl", "armv7hnl", "armv7l", "athlon",
"geode", "i386", "i486", "i586", "i686", "ia32e", "ia64", "m68k", "mips64el",
"mips64", "mips64r6el", "mips64r6", "mipsel", "mips", "mipsr6el", "mipsr6",
"noarch", "pentium3", "pentium4", "ppc32dy4", "ppc64iseries", "ppc64le", "ppc64",
"ppc64p7", "ppc64pseries", "ppc8260", "ppc8560", "ppciseries", "ppc", "ppcpseries",
"riscv64", "s390", "s390x", "sh3", "sh4a", "sh4", "sh", "sparc64", "sparc64v",
"sparc", "sparcv8", "sparcv9", "sparcv9v", "x86_64"
]
rpm_arch_re = re.compile(r'(.*)\.(.*)')
rpm_nevr_re = re.compile(r'(\S+)-(?:(\d*):)?(.*)-(~?\w+[\w.+]*)')
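        # Illustrative: the NEVR regex splits "foo-1.0-1.fc28" into
        # name="foo", epoch=None, version="1.0", release="1.fc28", while
        # "foo-2:1.0-1.fc28" also captures epoch="2".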
try:
arch = None
rpm_arch_match = rpm_arch_re.match(packagename)
if rpm_arch_match:
nevr, arch = rpm_arch_match.groups()
if arch in redhat_rpm_arches:
packagename = nevr
rpm_nevr_match = rpm_nevr_re.match(packagename)
if rpm_nevr_match:
name, epoch, version, release = rpm_nevr_re.match(packagename).groups()
if not version or not version.split('.')[0].isdigit():
return None
else:
return None
except AttributeError as e:
self.module.fail_json(
msg='Error attempting to parse package: %s, %s' % (packagename, to_native(e)),
rc=1,
results=[]
)
if not epoch:
epoch = "0"
if ':' in name:
epoch_name = name.split(":")
epoch = epoch_name[0]
name = ''.join(epoch_name[1:])
result = {
'name': name,
'epoch': epoch,
'release': release,
'version': version,
}
return result
# Original implementation from yum.rpmUtils.miscutils (GPLv2+)
# http://yum.baseurl.org/gitweb?p=yum.git;a=blob;f=rpmUtils/miscutils.py
def _compare_evr(self, e1, v1, r1, e2, v2, r2):
# return 1: a is newer than b
# 0: a and b are the same version
# -1: b is newer than a
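        # e.g. comparing ('0', '1.0', '2') with ('0', '1.0', '10') returns -1,
        # since rpm compares release segments numerically and 10 > 2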
if e1 is None:
e1 = '0'
else:
e1 = str(e1)
v1 = str(v1)
r1 = str(r1)
if e2 is None:
e2 = '0'
else:
e2 = str(e2)
v2 = str(v2)
r2 = str(r2)
# print '%s, %s, %s vs %s, %s, %s' % (e1, v1, r1, e2, v2, r2)
rc = dnf.rpm.rpm.labelCompare((e1, v1, r1), (e2, v2, r2))
# print '%s, %s, %s vs %s, %s, %s = %s' % (e1, v1, r1, e2, v2, r2, rc)
return rc
def _ensure_dnf(self):
if not HAS_DNF:
if PY2:
package = 'python2-dnf'
else:
package = 'python3-dnf'
if self.module.check_mode:
self.module.fail_json(
msg="`{0}` is not installed, but it is required"
"for the Ansible dnf module.".format(package),
results=[],
)
rc, stdout, stderr = self.module.run_command(['dnf', 'install', '-y', package])
global dnf
try:
import dnf
import dnf.cli
import dnf.const
import dnf.exceptions
import dnf.subject
import dnf.util
except ImportError:
self.module.fail_json(
msg="Could not import the dnf python module using {0} ({1}). "
"Please install `{2}` package or ensure you have specified the "
"correct ansible_python_interpreter.".format(sys.executable, sys.version.replace('\n', ''),
package),
results=[],
cmd='dnf install -y {0}'.format(package),
rc=rc,
stdout=stdout,
stderr=stderr,
)
def _configure_base(self, base, conf_file, disable_gpg_check, installroot='/'):
"""Configure the dnf Base object."""
conf = base.conf
# Change the configuration file path if provided, this must be done before conf.read() is called
if conf_file:
# Fail if we can't read the configuration file.
if not os.access(conf_file, os.R_OK):
self.module.fail_json(
msg="cannot read configuration file", conf_file=conf_file,
results=[],
)
else:
conf.config_file_path = conf_file
# Read the configuration file
conf.read()
# Turn off debug messages in the output
conf.debuglevel = 0
# Set whether to check gpg signatures
conf.gpgcheck = not disable_gpg_check
conf.localpkg_gpgcheck = not disable_gpg_check
# Don't prompt for user confirmations
conf.assumeyes = True
# Set installroot
conf.installroot = installroot
# Load substitutions from the filesystem
conf.substitutions.update_from_etc(installroot)
        # Handle immutable/mutable datatype differences across DNF versions
        # (dnf v1/v2/v3)
        #
        # In DNF < 3.0 these are lists, and modifying them works
        # In DNF >= 3.0, < 3.6 these are lists, but modifying them doesn't work
        # In DNF >= 3.6 they have been turned into tuples, to communicate that modifying them doesn't work
#
# https://www.happyassassin.net/2018/06/27/adams-debugging-adventures-the-immutable-mutable-object/
#
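        # The pattern below (copy to a list, mutate the copy, assign it back)
        # works for all three cases above.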
# Set excludes
if self.exclude:
_excludes = list(conf.exclude)
_excludes.extend(self.exclude)
conf.exclude = _excludes
# Set disable_excludes
if self.disable_excludes:
_disable_excludes = list(conf.disable_excludes)
if self.disable_excludes not in _disable_excludes:
_disable_excludes.append(self.disable_excludes)
conf.disable_excludes = _disable_excludes
# Set releasever
if self.releasever is not None:
conf.substitutions['releasever'] = self.releasever
# Set skip_broken (in dnf this is strict=0)
if self.skip_broken:
conf.strict = 0
# Set best
if self.nobest:
conf.best = 0
if self.download_only:
conf.downloadonly = True
if self.download_dir:
conf.destdir = self.download_dir
# Default in dnf upstream is true
conf.clean_requirements_on_remove = self.autoremove
# Default in dnf (and module default) is True
conf.install_weak_deps = self.install_weak_deps
def _specify_repositories(self, base, disablerepo, enablerepo):
"""Enable and disable repositories matching the provided patterns."""
base.read_all_repos()
repos = base.repos
# Disable repositories
for repo_pattern in disablerepo:
if repo_pattern:
for repo in repos.get_matching(repo_pattern):
repo.disable()
# Enable repositories
for repo_pattern in enablerepo:
if repo_pattern:
for repo in repos.get_matching(repo_pattern):
repo.enable()
def _base(self, conf_file, disable_gpg_check, disablerepo, enablerepo, installroot):
"""Return a fully configured dnf Base object."""
base = dnf.Base()
self._configure_base(base, conf_file, disable_gpg_check, installroot)
try:
# this method has been supported in dnf-4.2.17-6 or later
# https://bugzilla.redhat.com/show_bug.cgi?id=1788212
base.setup_loggers()
except AttributeError:
pass
try:
base.init_plugins(set(self.disable_plugin), set(self.enable_plugin))
base.pre_configure_plugins()
except AttributeError:
pass # older versions of dnf didn't require this and don't have these methods
self._specify_repositories(base, disablerepo, enablerepo)
try:
base.configure_plugins()
except AttributeError:
pass # older versions of dnf didn't require this and don't have these methods
try:
if self.update_cache:
try:
base.update_cache()
except dnf.exceptions.RepoError as e:
self.module.fail_json(
msg="{0}".format(to_text(e)),
results=[],
rc=1
)
base.fill_sack(load_system_repo='auto')
except dnf.exceptions.RepoError as e:
self.module.fail_json(
msg="{0}".format(to_text(e)),
results=[],
rc=1
)
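        # When bugfix/security are requested, restrict 'latest' upgrades to
        # packages attached to advisories of the matching type.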
filters = []
if self.bugfix:
key = {'advisory_type__eq': 'bugfix'}
filters.append(base.sack.query().upgrades().filter(**key))
if self.security:
key = {'advisory_type__eq': 'security'}
filters.append(base.sack.query().upgrades().filter(**key))
if filters:
base._update_security_filters = filters
return base
def list_items(self, command):
"""List package info based on the command."""
# Rename updates to upgrades
if command == 'updates':
command = 'upgrades'
# Return the corresponding packages
if command in ['installed', 'upgrades', 'available']:
results = [
self._package_dict(package)
for package in getattr(self.base.sack.query(), command)()]
# Return the enabled repository ids
elif command in ['repos', 'repositories']:
results = [
{'repoid': repo.id, 'state': 'enabled'}
for repo in self.base.repos.iter_enabled()]
# Return any matching packages
else:
packages = dnf.subject.Subject(command).get_best_query(self.base.sack)
results = [self._package_dict(package) for package in packages]
self.module.exit_json(msg="", results=results)
def _is_installed(self, pkg):
installed = self.base.sack.query().installed()
if installed.filter(name=pkg):
return True
else:
return False
def _is_newer_version_installed(self, pkg_name):
candidate_pkg = self._packagename_dict(pkg_name)
if not candidate_pkg:
# The user didn't provide a versioned rpm, so version checking is
# not required
return False
installed = self.base.sack.query().installed()
installed_pkg = installed.filter(name=candidate_pkg['name']).run()
if installed_pkg:
installed_pkg = installed_pkg[0]
# this looks weird but one is a dict and the other is a dnf.Package
evr_cmp = self._compare_evr(
installed_pkg.epoch, installed_pkg.version, installed_pkg.release,
candidate_pkg['epoch'], candidate_pkg['version'], candidate_pkg['release'],
)
if evr_cmp == 1:
return True
else:
return False
else:
return False
def _mark_package_install(self, pkg_spec, upgrade=False):
"""Mark the package for install."""
is_newer_version_installed = self._is_newer_version_installed(pkg_spec)
is_installed = self._is_installed(pkg_spec)
try:
if is_newer_version_installed:
if self.allow_downgrade:
# dnf only does allow_downgrade, we have to handle this ourselves
# because it allows a possibility for non-idempotent transactions
# on a system's package set (pending the yum repo has many old
# NVRs indexed)
if upgrade:
if is_installed:
self.base.upgrade(pkg_spec)
else:
self.base.install(pkg_spec)
else:
self.base.install(pkg_spec)
else: # Nothing to do, report back
pass
            elif is_installed:  # A potentially older (or same) version is installed
if upgrade:
self.base.upgrade(pkg_spec)
else: # Nothing to do, report back
pass
else: # The package is not installed, simply install it
self.base.install(pkg_spec)
return {'failed': False, 'msg': '', 'failure': '', 'rc': 0}
except dnf.exceptions.MarkingError as e:
return {
'failed': True,
'msg': "No package {0} available.".format(pkg_spec),
'failure': " ".join((pkg_spec, to_native(e))),
'rc': 1,
"results": []
}
except dnf.exceptions.DepsolveError as e:
return {
'failed': True,
'msg': "Depsolve Error occured for package {0}.".format(pkg_spec),
'failure': " ".join((pkg_spec, to_native(e))),
'rc': 1,
"results": []
}
except dnf.exceptions.Error as e:
if to_text("already installed") in to_text(e):
return {'failed': False, 'msg': '', 'failure': ''}
else:
return {
'failed': True,
'msg': "Unknown Error occured for package {0}.".format(pkg_spec),
'failure': " ".join((pkg_spec, to_native(e))),
'rc': 1,
"results": []
}
def _whatprovides(self, filepath):
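        # Resolve a file path to the name of an available package providing
        # it, e.g. "/usr/bin/vi" -> "vim-minimal" (illustrative); returns
        # None implicitly when nothing provides the path.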
available = self.base.sack.query().available()
pkg_spec = available.filter(provides=filepath).run()
if pkg_spec:
return pkg_spec[0].name
def _parse_spec_group_file(self):
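        # Sort each requested name into one of four buckets: plain package
        # specs, comps group/environment specs, module (appstream) specs,
        # and local/remote rpm filenames.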
pkg_specs, grp_specs, module_specs, filenames = [], [], [], []
already_loaded_comps = False # Only load this if necessary, it's slow
for name in self.names:
if '://' in name:
name = fetch_file(self.module, name)
filenames.append(name)
elif name.endswith(".rpm"):
filenames.append(name)
elif name.startswith("@") or ('/' in name):
# like "dnf install /usr/bin/vi"
if '/' in name:
pkg_spec = self._whatprovides(name)
if pkg_spec:
pkg_specs.append(pkg_spec)
continue
if not already_loaded_comps:
self.base.read_comps()
already_loaded_comps = True
grp_env_mdl_candidate = name[1:].strip()
if self.with_modules:
mdl = self.module_base._get_modules(grp_env_mdl_candidate)
if mdl[0]:
module_specs.append(grp_env_mdl_candidate)
else:
grp_specs.append(grp_env_mdl_candidate)
else:
grp_specs.append(grp_env_mdl_candidate)
else:
pkg_specs.append(name)
return pkg_specs, grp_specs, module_specs, filenames
def _update_only(self, pkgs):
not_installed = []
for pkg in pkgs:
if self._is_installed(pkg):
try:
                    if isinstance(pkg, text_type):  # a plain package spec string
self.base.upgrade(pkg)
else:
self.base.package_upgrade(pkg)
except Exception as e:
self.module.fail_json(
msg="Error occured attempting update_only operation: {0}".format(to_native(e)),
results=[],
rc=1,
)
else:
not_installed.append(pkg)
return not_installed
def _install_remote_rpms(self, filenames):
if int(dnf.__version__.split(".")[0]) >= 2:
pkgs = list(sorted(self.base.add_remote_rpms(list(filenames)), reverse=True))
else:
pkgs = []
try:
for filename in filenames:
pkgs.append(self.base.add_remote_rpm(filename))
except IOError as e:
if to_text("Can not load RPM file") in to_text(e):
self.module.fail_json(
msg="Error occured attempting remote rpm install of package: {0}. {1}".format(filename, to_native(e)),
results=[],
rc=1,
)
if self.update_only:
self._update_only(pkgs)
else:
for pkg in pkgs:
try:
if self._is_newer_version_installed(self._package_dict(pkg)['nevra']):
if self.allow_downgrade:
self.base.package_install(pkg)
else:
self.base.package_install(pkg)
except Exception as e:
self.module.fail_json(
msg="Error occured attempting remote rpm operation: {0}".format(to_native(e)),
results=[],
rc=1,
)
def _is_module_installed(self, module_spec):
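        # A module counts as installed when its stream is enabled; if the
        # spec names a stream explicitly, that exact stream must be enabled.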
if self.with_modules:
module_spec = module_spec.strip()
module_list, nsv = self.module_base._get_modules(module_spec)
enabled_streams = self.base._moduleContainer.getEnabledStream(nsv.name)
if enabled_streams:
if nsv.stream:
if nsv.stream in enabled_streams:
return True # The provided stream was found
else:
return False # The provided stream was not found
else:
return True # No stream provided, but module found
return False # seems like a sane default
def ensure(self):
response = {
'msg': "",
'changed': False,
'results': [],
'rc': 0
}
# Accumulate failures. Package management modules install what they can
# and fail with a message about what they can't.
failure_response = {
'msg': "",
'failures': [],
'results': [],
'rc': 1
}
# Autoremove is called alone
# Jump to remove path where base.autoremove() is run
if not self.names and self.autoremove:
self.names = []
self.state = 'absent'
if self.names == ['*'] and self.state == 'latest':
try:
self.base.upgrade_all()
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occured attempting to upgrade all packages"
self.module.fail_json(**failure_response)
else:
pkg_specs, group_specs, module_specs, filenames = self._parse_spec_group_file()
pkg_specs = [p.strip() for p in pkg_specs]
filenames = [f.strip() for f in filenames]
groups = []
environments = []
for group_spec in (g.strip() for g in group_specs):
group = self.base.comps.group_by_pattern(group_spec)
if group:
groups.append(group.id)
else:
environment = self.base.comps.environment_by_pattern(group_spec)
if environment:
environments.append(environment.id)
else:
self.module.fail_json(
msg="No group {0} available.".format(group_spec),
results=[],
)
if self.state in ['installed', 'present']:
# Install files.
self._install_remote_rpms(filenames)
for filename in filenames:
response['results'].append("Installed {0}".format(filename))
# Install modules
if module_specs and self.with_modules:
for module in module_specs:
try:
if not self._is_module_installed(module):
response['results'].append("Module {0} installed.".format(module))
self.module_base.install([module])
self.module_base.enable([module])
except dnf.exceptions.MarkingErrors as e:
failure_response['failures'].append(' '.join((module, to_native(e))))
# Install groups.
for group in groups:
try:
group_pkg_count_installed = self.base.group_install(group, dnf.const.GROUP_PACKAGE_TYPES)
if group_pkg_count_installed == 0:
response['results'].append("Group {0} already installed.".format(group))
else:
response['results'].append("Group {0} installed.".format(group))
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occured attempting to install group: {0}".format(group)
self.module.fail_json(**failure_response)
except dnf.exceptions.Error as e:
# In dnf 2.0 if all the mandatory packages in a group do
# not install, an error is raised. We want to capture
# this but still install as much as possible.
failure_response['failures'].append(" ".join((group, to_native(e))))
for environment in environments:
try:
self.base.environment_install(environment, dnf.const.GROUP_PACKAGE_TYPES)
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occured attempting to install environment: {0}".format(environment)
self.module.fail_json(**failure_response)
except dnf.exceptions.Error as e:
failure_response['failures'].append(" ".join((environment, to_native(e))))
if module_specs and not self.with_modules:
# This means that the group or env wasn't found in comps
self.module.fail_json(
msg="No group {0} available.".format(module_specs[0]),
results=[],
)
# Install packages.
if self.update_only:
not_installed = self._update_only(pkg_specs)
for spec in not_installed:
response['results'].append("Packages providing %s not installed due to update_only specified" % spec)
else:
for pkg_spec in pkg_specs:
install_result = self._mark_package_install(pkg_spec)
if install_result['failed']:
if install_result['msg']:
failure_response['msg'] += install_result['msg']
failure_response['failures'].append(self._sanitize_dnf_error_msg_install(pkg_spec, install_result['failure']))
else:
if install_result['msg']:
response['results'].append(install_result['msg'])
elif self.state == 'latest':
# "latest" is same as "installed" for filenames.
self._install_remote_rpms(filenames)
for filename in filenames:
response['results'].append("Installed {0}".format(filename))
# Upgrade modules
if module_specs and self.with_modules:
for module in module_specs:
try:
if self._is_module_installed(module):
response['results'].append("Module {0} upgraded.".format(module))
self.module_base.upgrade([module])
except dnf.exceptions.MarkingErrors as e:
failure_response['failures'].append(' '.join((module, to_native(e))))
for group in groups:
try:
try:
self.base.group_upgrade(group)
response['results'].append("Group {0} upgraded.".format(group))
except dnf.exceptions.CompsError:
if not self.update_only:
# If not already installed, try to install.
group_pkg_count_installed = self.base.group_install(group, dnf.const.GROUP_PACKAGE_TYPES)
if group_pkg_count_installed == 0:
response['results'].append("Group {0} already installed.".format(group))
else:
response['results'].append("Group {0} installed.".format(group))
except dnf.exceptions.Error as e:
failure_response['failures'].append(" ".join((group, to_native(e))))
for environment in environments:
try:
try:
self.base.environment_upgrade(environment)
except dnf.exceptions.CompsError:
# If not already installed, try to install.
self.base.environment_install(environment, dnf.const.GROUP_PACKAGE_TYPES)
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occured attempting to install environment: {0}".format(environment)
except dnf.exceptions.Error as e:
failure_response['failures'].append(" ".join((environment, to_native(e))))
if self.update_only:
not_installed = self._update_only(pkg_specs)
for spec in not_installed:
response['results'].append("Packages providing %s not installed due to update_only specified" % spec)
else:
for pkg_spec in pkg_specs:
                        # best-effort mode causes the latest available package
                        # to be installed even if it was not previously installed
self.base.conf.best = True
install_result = self._mark_package_install(pkg_spec, upgrade=True)
if install_result['failed']:
if install_result['msg']:
failure_response['msg'] += install_result['msg']
failure_response['failures'].append(self._sanitize_dnf_error_msg_install(pkg_spec, install_result['failure']))
else:
if install_result['msg']:
response['results'].append(install_result['msg'])
else:
# state == absent
if filenames:
self.module.fail_json(
msg="Cannot remove paths -- please specify package name.",
results=[],
)
# Remove modules
if module_specs and self.with_modules:
for module in module_specs:
try:
if self._is_module_installed(module):
response['results'].append("Module {0} removed.".format(module))
self.module_base.remove([module])
self.module_base.disable([module])
self.module_base.reset([module])
except dnf.exceptions.MarkingErrors as e:
failure_response['failures'].append(' '.join((module, to_native(e))))
for group in groups:
try:
self.base.group_remove(group)
except dnf.exceptions.CompsError:
# Group is already uninstalled.
pass
except AttributeError:
# Group either isn't installed or wasn't marked installed at install time
# because of DNF bug
#
# This is necessary until the upstream dnf API bug is fixed where installing
# a group via the dnf API doesn't actually mark the group as installed
# https://bugzilla.redhat.com/show_bug.cgi?id=1620324
pass
for environment in environments:
try:
self.base.environment_remove(environment)
except dnf.exceptions.CompsError:
# Environment is already uninstalled.
pass
installed = self.base.sack.query().installed()
for pkg_spec in pkg_specs:
# short-circuit installed check for wildcard matching
if '*' in pkg_spec:
try:
self.base.remove(pkg_spec)
except dnf.exceptions.MarkingError as e:
is_failure, handled_remove_error = self._sanitize_dnf_error_msg_remove(pkg_spec, to_native(e))
if is_failure:
failure_response['failures'].append('{0} - {1}'.format(pkg_spec, to_native(e)))
else:
response['results'].append(handled_remove_error)
continue
installed_pkg = list(map(str, installed.filter(name=pkg_spec).run()))
if installed_pkg:
candidate_pkg = self._packagename_dict(installed_pkg[0])
installed_pkg = installed.filter(name=candidate_pkg['name']).run()
else:
candidate_pkg = self._packagename_dict(pkg_spec)
installed_pkg = installed.filter(nevra=pkg_spec).run()
if installed_pkg:
installed_pkg = installed_pkg[0]
evr_cmp = self._compare_evr(
installed_pkg.epoch, installed_pkg.version, installed_pkg.release,
candidate_pkg['epoch'], candidate_pkg['version'], candidate_pkg['release'],
)
if evr_cmp == 0:
self.base.remove(pkg_spec)
# Like the dnf CLI we want to allow recursive removal of dependent
# packages
self.allowerasing = True
if self.autoremove:
self.base.autoremove()
try:
if not self.base.resolve(allow_erasing=self.allowerasing):
if failure_response['failures']:
failure_response['msg'] = 'Failed to install some of the specified packages'
self.module.fail_json(**failure_response)
response['msg'] = "Nothing to do"
self.module.exit_json(**response)
else:
response['changed'] = True
# If packages got installed/removed, add them to the results.
# We do this early so we can use it for both check_mode and not.
if self.download_only:
install_action = 'Downloaded'
else:
install_action = 'Installed'
for package in self.base.transaction.install_set:
response['results'].append("{0}: {1}".format(install_action, package))
for package in self.base.transaction.remove_set:
response['results'].append("Removed: {0}".format(package))
if failure_response['failures']:
failure_response['msg'] = 'Failed to install some of the specified packages'
self.module.fail_json(**failure_response)
if self.module.check_mode:
response['msg'] = "Check mode: No changes made, but would have if not in check mode"
self.module.exit_json(**response)
try:
if self.download_only and self.download_dir and self.base.conf.destdir:
dnf.util.ensure_dir(self.base.conf.destdir)
self.base.repos.all().pkgdir = self.base.conf.destdir
self.base.download_packages(self.base.transaction.install_set)
except dnf.exceptions.DownloadError as e:
self.module.fail_json(
msg="Failed to download packages: {0}".format(to_text(e)),
results=[],
)
# Validate GPG. This is NOT done in dnf.Base (it's done in the
# upstream CLI subclass of dnf.Base)
if not self.disable_gpg_check:
for package in self.base.transaction.install_set:
fail = False
gpgres, gpgerr = self.base._sig_check_pkg(package)
if gpgres == 0: # validated successfully
continue
elif gpgres == 1: # validation failed, install cert?
try:
self.base._get_key_for_package(package)
except dnf.exceptions.Error as e:
fail = True
else: # fatal error
fail = True
if fail:
msg = 'Failed to validate GPG signature for {0}'.format(package)
self.module.fail_json(msg)
if self.download_only:
# No further work left to do, and the results were already updated above.
# Just return them.
self.module.exit_json(**response)
else:
self.base.do_transaction()
if failure_response['failures']:
failure_response['msg'] = 'Failed to install some of the specified packages'
                        self.module.fail_json(**failure_response)
self.module.exit_json(**response)
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occured: {0}".format(to_native(e))
self.module.fail_json(**failure_response)
except dnf.exceptions.Error as e:
if to_text("already installed") in to_text(e):
response['changed'] = False
response['results'].append("Package already installed: {0}".format(to_native(e)))
self.module.exit_json(**response)
else:
failure_response['msg'] = "Unknown Error occured: {0}".format(to_native(e))
self.module.fail_json(**failure_response)
@staticmethod
def has_dnf():
return HAS_DNF
def run(self):
"""The main function."""
# Check if autoremove is called correctly
if self.autoremove:
if LooseVersion(dnf.__version__) < LooseVersion('2.0.1'):
self.module.fail_json(
msg="Autoremove requires dnf>=2.0.1. Current dnf version is %s" % dnf.__version__,
results=[],
)
# Check if download_dir is called correctly
if self.download_dir:
if LooseVersion(dnf.__version__) < LooseVersion('2.6.2'):
self.module.fail_json(
msg="download_dir requires dnf>=2.6.2. Current dnf version is %s" % dnf.__version__,
results=[],
)
if self.update_cache and not self.names and not self.list:
self.base = self._base(
self.conf_file, self.disable_gpg_check, self.disablerepo,
self.enablerepo, self.installroot
)
self.module.exit_json(
msg="Cache updated",
changed=False,
results=[],
rc=0
)
# Set state as installed by default
# This is not set in AnsibleModule() because the following shouldn't happen
# - dnf: autoremove=yes state=installed
if self.state is None:
self.state = 'installed'
if self.list:
self.base = self._base(
self.conf_file, self.disable_gpg_check, self.disablerepo,
self.enablerepo, self.installroot
)
self.list_items(self.list)
else:
# Note: base takes a long time to run so we want to check for failure
# before running it.
if not dnf.util.am_i_root():
self.module.fail_json(
msg="This command has to be run under the root user.",
results=[],
)
self.base = self._base(
self.conf_file, self.disable_gpg_check, self.disablerepo,
self.enablerepo, self.installroot
)
if self.with_modules:
self.module_base = dnf.module.module_base.ModuleBase(self.base)
self.ensure()
def main():
# state=installed name=pkgspec
# state=removed name=pkgspec
# state=latest name=pkgspec
#
# informational commands:
# list=installed
# list=updates
# list=available
# list=repos
# list=pkgspec
# Extend yumdnf_argument_spec with dnf-specific features that will never be
# backported to yum because yum is now in "maintenance mode" upstream
yumdnf_argument_spec['argument_spec']['allowerasing'] = dict(default=False, type='bool')
yumdnf_argument_spec['argument_spec']['nobest'] = dict(default=False, type='bool')
module = AnsibleModule(
**yumdnf_argument_spec
)
module_implementation = DnfModule(module)
try:
module_implementation.run()
except dnf.exceptions.RepoError as de:
module.fail_json(
msg="Failed to synchronize repodata: {0}".format(to_native(de)),
rc=1,
results=[],
changed=False
)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,234 |
Yum and dnf module comparison operators not documented
|
##### Summary
Document " <, >, <=, => " operators in name parameter of yum/dnf module. Currently the documentation does not mention that comparison operators are available, but they do work.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
yum/dnf module
##### ADDITIONAL INFORMATION
The comparison operators currently work in the yum/dnf modules:
```
yum:
  name: "pkg >= 1"
  state: present
```
Yum/dnf will determine the version available and install it if it matches the comparison operator
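For example, the same style of spec with the dnf module (the version value here is only illustrative):
```
dnf:
  name: "pkg >= 1"
  state: present
```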
|
https://github.com/ansible/ansible/issues/61234
|
https://github.com/ansible/ansible/pull/72763
|
e7dee73774b0b436551e4993ba917eec1e03af2d
|
0044091a055dd9cd448f7639a65b7e8cc3dacbf1
| 2019-08-23T15:36:24Z |
python
| 2020-12-11T15:31:19Z |
lib/ansible/modules/yum.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Red Hat, Inc
# Written by Seth Vidal <skvidal at fedoraproject.org>
# Copyright: (c) 2014, Epic Games, Inc.
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: yum
version_added: historical
short_description: Manages packages with the I(yum) package manager
description:
  - Installs, upgrades, downgrades, removes, and lists packages and groups with the I(yum) package manager.
- This module only works on Python 2. If you require Python 3 support see the M(ansible.builtin.dnf) module.
options:
use_backend:
description:
      - This module supports C(yum) (as it always has); this is known as C(yum3)/C(YUM3)/C(yum-deprecated) by
        upstream yum developers. As of Ansible 2.7+, this module also supports C(YUM4), which is the
        "new yum", and it has a C(dnf) backend.
- By default, this module will select the backend based on the C(ansible_pkg_mgr) fact.
default: "auto"
choices: [ auto, yum, yum4, dnf ]
type: str
version_added: "2.7"
name:
description:
- A package name or package specifier with version, like C(name-1.0).
- If a previous version is specified, the task also needs to turn C(allow_downgrade) on.
See the C(allow_downgrade) documentation for caveats with downgrading packages.
- When using state=latest, this can be C('*') which means run C(yum -y update).
- You can also pass a url or a local path to a rpm file (using state=present).
To operate on several packages this can accept a comma separated string of packages or (as of 2.0) a list of packages.
aliases: [ pkg ]
type: list
elements: str
exclude:
description:
- Package name(s) to exclude when state=present, or latest
type: list
elements: str
version_added: "2.0"
list:
description:
- "Package name to run the equivalent of yum list --show-duplicates <package> against. In addition to listing packages,
use can also list the following: C(installed), C(updates), C(available) and C(repos)."
- This parameter is mutually exclusive with C(name).
type: str
state:
description:
- Whether to install (C(present) or C(installed), C(latest)), or remove (C(absent) or C(removed)) a package.
- C(present) and C(installed) will simply ensure that a desired package is installed.
- C(latest) will update the specified package if it's not of the latest available version.
- C(absent) and C(removed) will remove the specified package.
      - Default is C(None); however, in effect the default action is C(present) unless the C(autoremove) option is
        enabled for this module, in which case C(absent) is inferred.
type: str
choices: [ absent, installed, latest, present, removed ]
enablerepo:
description:
- I(Repoid) of repositories to enable for the install/update operation.
These repos will not persist beyond the transaction.
When specifying multiple repos, separate them with a C(",").
- As of Ansible 2.7, this can alternatively be a list instead of C(",")
separated string
type: list
elements: str
version_added: "0.9"
disablerepo:
description:
- I(Repoid) of repositories to disable for the install/update operation.
These repos will not persist beyond the transaction.
When specifying multiple repos, separate them with a C(",").
- As of Ansible 2.7, this can alternatively be a list instead of C(",")
separated string
type: list
elements: str
version_added: "0.9"
conf_file:
description:
- The remote yum configuration file to use for the transaction.
type: str
version_added: "0.6"
disable_gpg_check:
description:
- Whether to disable the GPG checking of signatures of packages being
installed. Has an effect only if state is I(present) or I(latest).
type: bool
default: "no"
version_added: "1.2"
skip_broken:
description:
      - Skip packages that have broken dependencies (depsolve) and are causing problems.
type: bool
default: "no"
version_added: "2.3"
update_cache:
description:
- Force yum to check if cache is out of date and redownload if needed.
Has an effect only if state is I(present) or I(latest).
type: bool
default: "no"
aliases: [ expire-cache ]
version_added: "1.9"
validate_certs:
description:
- This only applies if using a https url as the source of the rpm. e.g. for localinstall. If set to C(no), the SSL certificates will not be validated.
      - This should only be set to C(no) when used on personally controlled sites using self-signed certificates, as it avoids verifying the source site.
- Prior to 2.1 the code worked as if this was set to C(yes).
type: bool
default: "yes"
version_added: "2.1"
update_only:
description:
- When using latest, only update installed packages. Do not install packages.
- Has an effect only if state is I(latest)
default: "no"
type: bool
version_added: "2.5"
installroot:
description:
- Specifies an alternative installroot, relative to which all packages
will be installed.
default: "/"
type: str
version_added: "2.3"
security:
description:
- If set to C(yes), and C(state=latest) then only installs updates that have been marked security related.
type: bool
default: "no"
version_added: "2.4"
bugfix:
description:
- If set to C(yes), and C(state=latest) then only installs updates that have been marked bugfix related.
default: "no"
type: bool
version_added: "2.6"
allow_downgrade:
description:
      - Specify whether the named package and version is allowed to downgrade
        a possibly already installed higher version of that package.
Note that setting allow_downgrade=True can make this module
behave in a non-idempotent way. The task could end up with a set
of packages that does not match the complete list of specified
packages to install (because dependencies between the downgraded
package and others can cause changes to the packages which were
in the earlier transaction).
type: bool
default: "no"
version_added: "2.4"
enable_plugin:
description:
- I(Plugin) name to enable for the install/update operation.
The enabled plugin will not persist beyond the transaction.
type: list
elements: str
version_added: "2.5"
disable_plugin:
description:
- I(Plugin) name to disable for the install/update operation.
The disabled plugins will not persist beyond the transaction.
type: list
elements: str
version_added: "2.5"
releasever:
description:
- Specifies an alternative release from which all packages will be
installed.
type: str
version_added: "2.7"
autoremove:
description:
- If C(yes), removes all "leaf" packages from the system that were originally
installed as dependencies of user-installed packages but which are no longer
required by any such package. Should be used alone or when state is I(absent)
- "NOTE: This feature requires yum >= 3.4.3 (RHEL/CentOS 7+)"
type: bool
default: "no"
version_added: "2.7"
disable_excludes:
description:
- Disable the excludes defined in YUM config files.
- If set to C(all), disables all excludes.
- If set to C(main), disable excludes defined in [main] in yum.conf.
- If set to C(repoid), disable excludes defined for given repo id.
type: str
version_added: "2.7"
download_only:
description:
- Only download the packages, do not install them.
default: "no"
type: bool
version_added: "2.7"
lock_timeout:
description:
- Amount of time to wait for the yum lockfile to be freed.
required: false
default: 30
type: int
version_added: "2.8"
install_weak_deps:
description:
- Will also install all packages linked by a weak dependency relation.
- "NOTE: This feature requires yum >= 4 (RHEL/CentOS 8+)"
type: bool
default: "yes"
version_added: "2.8"
download_dir:
description:
- Specifies an alternate directory to store packages.
- Has an effect only if I(download_only) is specified.
type: str
version_added: "2.8"
install_repoquery:
description:
- If repoquery is not available, install yum-utils. If the system is
registered to RHN or an RHN Satellite, repoquery allows for querying
all channels assigned to the system. It is also required to use the
'list' parameter.
- "NOTE: This will run and be logged as a separate yum transation which
takes place before any other installation or removal."
- "NOTE: This will use the system's default enabled repositories without
regard for disablerepo/enablerepo given to the module."
required: false
version_added: "1.5"
default: "yes"
type: bool
notes:
  - When used with a `loop:` each package will be processed individually;
    it is much more efficient to pass the list directly to the `name` option.
- In versions prior to 1.9.2 this module installed and removed each package
given to the yum module separately. This caused problems when packages
specified by filename or url had to be installed or removed together. In
1.9.2 this was fixed so that packages are installed in one yum
transaction. However, if one of the packages adds a new yum repository
that the other packages come from (such as epel-release) then that package
needs to be installed in a separate task. This mimics yum's command line
behaviour.
- 'Yum itself has two types of groups. "Package groups" are specified in the
rpm itself while "environment groups" are specified in a separate file
(usually by the distribution). Unfortunately, this division becomes
apparent to ansible users because ansible needs to operate on the group
of packages in a single transaction and yum requires groups to be specified
in different ways when used in that way. Package groups are specified as
"@development-tools" and environment groups are "@^gnome-desktop-environment".
Use the "yum group list hidden ids" command to see which category of group the group
you want to install falls into.'
  - 'The yum module does not support clearing yum cache in an idempotent way, so it
    was decided not to implement it; the only method is to use command and call the yum
    command directly, namely "command: yum clean all"
    https://github.com/ansible/ansible/pull/31450#issuecomment-352889579'
# informational: requirements for nodes
requirements:
- yum
author:
- Ansible Core Team
- Seth Vidal (@skvidal)
- Eduard Snesarev (@verm666)
- Berend De Schouwer (@berenddeschouwer)
- Abhijeet Kasurde (@Akasurde)
- Adam Miller (@maxamillion)
'''
EXAMPLES = '''
- name: Install the latest version of Apache
yum:
name: httpd
state: latest
- name: Install a list of packages (suitable replacement for 2.11 loop deprecation warning)
yum:
name:
- nginx
- postgresql
- postgresql-server
state: present
- name: Install a list of packages with a list variable
yum:
name: "{{ packages }}"
vars:
packages:
- httpd
- httpd-tools
- name: Remove the Apache package
yum:
name: httpd
state: absent
- name: Install the latest version of Apache from the testing repo
yum:
name: httpd
enablerepo: testing
state: present
- name: Install one specific version of Apache
yum:
name: httpd-2.2.29-1.4.amzn1
state: present
- name: Upgrade all packages
yum:
name: '*'
state: latest
- name: Upgrade all packages, excluding kernel & foo related packages
yum:
name: '*'
state: latest
exclude: kernel*,foo*
- name: Install the nginx rpm from a remote repo
yum:
name: http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm
state: present
- name: Install nginx rpm from a local file
yum:
name: /usr/local/src/nginx-release-centos-6-0.el6.ngx.noarch.rpm
state: present
- name: Install the 'Development tools' package group
yum:
name: "@Development tools"
state: present
- name: Install the 'Gnome desktop' environment group
yum:
name: "@^gnome-desktop-environment"
state: present
- name: List ansible packages and register result to print with debug later
yum:
list: ansible
register: result
- name: Install package with multiple repos enabled
yum:
name: sos
enablerepo: "epel,ol7_latest"
- name: Install package with multiple repos disabled
yum:
name: sos
disablerepo: "epel,ol7_latest"
- name: Download the nginx package but do not install it
yum:
name:
- nginx
state: latest
download_only: true
'''
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.urls import fetch_url
from ansible.module_utils.yumdnf import YumDnf, yumdnf_argument_spec
import errno
import os
import re
import tempfile
try:
import rpm
HAS_RPM_PYTHON = True
except ImportError:
HAS_RPM_PYTHON = False
try:
import yum
HAS_YUM_PYTHON = True
except ImportError:
HAS_YUM_PYTHON = False
try:
from yum.misc import find_unfinished_transactions, find_ts_remaining
from rpmUtils.miscutils import splitFilename, compareEVR
transaction_helpers = True
except ImportError:
transaction_helpers = False
from contextlib import contextmanager
from ansible.module_utils.urls import fetch_file
def_qf = "%{epoch}:%{name}-%{version}-%{release}.%{arch}"
rpmbin = None
class YumModule(YumDnf):
"""
Yum Ansible module back-end implementation
"""
def __init__(self, module):
# state=installed name=pkgspec
# state=removed name=pkgspec
# state=latest name=pkgspec
#
# informational commands:
# list=installed
# list=updates
# list=available
# list=repos
# list=pkgspec
# This populates instance vars for all argument spec params
super(YumModule, self).__init__(module)
self.pkg_mgr_name = "yum"
self.lockfile = '/var/run/yum.pid'
self._yum_base = None
def _enablerepos_with_error_checking(self):
# NOTE: This seems unintuitive, but it mirrors yum's CLI behavior
if len(self.enablerepo) == 1:
try:
self.yum_base.repos.enableRepo(self.enablerepo[0])
except yum.Errors.YumBaseError as e:
if u'repository not found' in to_text(e):
self.module.fail_json(msg="Repository %s not found." % self.enablerepo[0])
else:
raise e
else:
for rid in self.enablerepo:
try:
self.yum_base.repos.enableRepo(rid)
except yum.Errors.YumBaseError as e:
if u'repository not found' in to_text(e):
self.module.warn("Repository %s not found." % rid)
else:
raise e
def is_lockfile_pid_valid(self):
try:
try:
with open(self.lockfile, 'r') as f:
oldpid = int(f.readline())
except ValueError:
# invalid data
os.unlink(self.lockfile)
return False
if oldpid == os.getpid():
# that's us?
os.unlink(self.lockfile)
return False
try:
with open("/proc/%d/stat" % oldpid, 'r') as f:
stat = f.readline()
if stat.split()[2] == 'Z':
# Zombie
os.unlink(self.lockfile)
return False
except IOError:
# either /proc is not mounted or the process is already dead
try:
# check the state of the process
os.kill(oldpid, 0)
except OSError as e:
if e.errno == errno.ESRCH:
# No such process
os.unlink(self.lockfile)
return False
self.module.fail_json(msg="Unable to check PID %s in %s: %s" % (oldpid, self.lockfile, to_native(e)))
except (IOError, OSError) as e:
# lockfile disappeared?
return False
# another copy seems to be running
return True
@property
def yum_base(self):
if self._yum_base:
return self._yum_base
else:
# Only init once
self._yum_base = yum.YumBase()
self._yum_base.preconf.debuglevel = 0
self._yum_base.preconf.errorlevel = 0
self._yum_base.preconf.plugins = True
self._yum_base.preconf.enabled_plugins = self.enable_plugin
self._yum_base.preconf.disabled_plugins = self.disable_plugin
if self.releasever:
self._yum_base.preconf.releasever = self.releasever
if self.installroot != '/':
# do not setup installroot by default, because of error
# CRITICAL:yum.cli:Config Error: Error accessing file for config file:////etc/yum.conf
# in old yum version (like in CentOS 6.6)
self._yum_base.preconf.root = self.installroot
self._yum_base.conf.installroot = self.installroot
if self.conf_file and os.path.exists(self.conf_file):
self._yum_base.preconf.fn = self.conf_file
if os.geteuid() != 0:
if hasattr(self._yum_base, 'setCacheDir'):
self._yum_base.setCacheDir()
else:
cachedir = yum.misc.getCacheDir()
self._yum_base.repos.setCacheDir(cachedir)
self._yum_base.conf.cache = 0
if self.disable_excludes:
self._yum_base.conf.disable_excludes = self.disable_excludes
            # A side effect of accessing conf is that the configuration is
# loaded and plugins are discovered
self.yum_base.conf
try:
for rid in self.disablerepo:
self.yum_base.repos.disableRepo(rid)
self._enablerepos_with_error_checking()
except Exception as e:
self.module.fail_json(msg="Failure talking to yum: %s" % to_native(e))
return self._yum_base
def po_to_envra(self, po):
if hasattr(po, 'ui_envra'):
return po.ui_envra
return '%s:%s-%s-%s.%s' % (po.epoch, po.name, po.version, po.release, po.arch)
def is_group_env_installed(self, name):
name_lower = name.lower()
if yum.__version_info__ >= (3, 4):
groups_list = self.yum_base.doGroupLists(return_evgrps=True)
else:
groups_list = self.yum_base.doGroupLists()
# list of the installed groups on the first index
groups = groups_list[0]
for group in groups:
if name_lower.endswith(group.name.lower()) or name_lower.endswith(group.groupid.lower()):
return True
if yum.__version_info__ >= (3, 4):
# list of the installed env_groups on the third index
envs = groups_list[2]
for env in envs:
if name_lower.endswith(env.name.lower()) or name_lower.endswith(env.environmentid.lower()):
return True
return False
def is_installed(self, repoq, pkgspec, qf=None, is_pkg=False):
if qf is None:
qf = "%{epoch}:%{name}-%{version}-%{release}.%{arch}\n"
if not repoq:
pkgs = []
try:
e, m, _ = self.yum_base.rpmdb.matchPackageNames([pkgspec])
pkgs = e + m
if not pkgs and not is_pkg:
pkgs.extend(self.yum_base.returnInstalledPackagesByDep(pkgspec))
except Exception as e:
self.module.fail_json(msg="Failure talking to yum: %s" % to_native(e))
return [self.po_to_envra(p) for p in pkgs]
else:
global rpmbin
if not rpmbin:
rpmbin = self.module.get_bin_path('rpm', required=True)
cmd = [rpmbin, '-q', '--qf', qf, pkgspec]
if self.installroot != '/':
cmd.extend(['--root', self.installroot])
# rpm localizes messages and we're screen scraping so make sure we use
# the C locale
lang_env = dict(LANG='C', LC_ALL='C', LC_MESSAGES='C')
rc, out, err = self.module.run_command(cmd, environ_update=lang_env)
if rc != 0 and 'is not installed' not in out:
self.module.fail_json(msg='Error from rpm: %s: %s' % (cmd, err))
if 'is not installed' in out:
out = ''
pkgs = [p for p in out.replace('(none)', '0').split('\n') if p.strip()]
if not pkgs and not is_pkg:
cmd = [rpmbin, '-q', '--qf', qf, '--whatprovides', pkgspec]
if self.installroot != '/':
cmd.extend(['--root', self.installroot])
rc2, out2, err2 = self.module.run_command(cmd, environ_update=lang_env)
else:
rc2, out2, err2 = (0, '', '')
if rc2 != 0 and 'no package provides' not in out2:
self.module.fail_json(msg='Error from rpm: %s: %s' % (cmd, err + err2))
if 'no package provides' in out2:
out2 = ''
pkgs += [p for p in out2.replace('(none)', '0').split('\n') if p.strip()]
return pkgs
return []
def is_available(self, repoq, pkgspec, qf=def_qf):
if not repoq:
pkgs = []
try:
e, m, _ = self.yum_base.pkgSack.matchPackageNames([pkgspec])
pkgs = e + m
if not pkgs:
pkgs.extend(self.yum_base.returnPackagesByDep(pkgspec))
except Exception as e:
self.module.fail_json(msg="Failure talking to yum: %s" % to_native(e))
return [self.po_to_envra(p) for p in pkgs]
else:
myrepoq = list(repoq)
r_cmd = ['--disablerepo', ','.join(self.disablerepo)]
myrepoq.extend(r_cmd)
r_cmd = ['--enablerepo', ','.join(self.enablerepo)]
myrepoq.extend(r_cmd)
if self.releasever:
                myrepoq.append('--releasever=%s' % self.releasever)
cmd = myrepoq + ["--qf", qf, pkgspec]
rc, out, err = self.module.run_command(cmd)
if rc == 0:
return [p for p in out.split('\n') if p.strip()]
else:
self.module.fail_json(msg='Error from repoquery: %s: %s' % (cmd, err))
return []
def is_update(self, repoq, pkgspec, qf=def_qf):
if not repoq:
pkgs = []
updates = []
try:
pkgs = self.yum_base.returnPackagesByDep(pkgspec) + \
self.yum_base.returnInstalledPackagesByDep(pkgspec)
if not pkgs:
e, m, _ = self.yum_base.pkgSack.matchPackageNames([pkgspec])
pkgs = e + m
updates = self.yum_base.doPackageLists(pkgnarrow='updates').updates
except Exception as e:
self.module.fail_json(msg="Failure talking to yum: %s" % to_native(e))
retpkgs = (pkg for pkg in pkgs if pkg in updates)
return set(self.po_to_envra(p) for p in retpkgs)
else:
myrepoq = list(repoq)
r_cmd = ['--disablerepo', ','.join(self.disablerepo)]
myrepoq.extend(r_cmd)
r_cmd = ['--enablerepo', ','.join(self.enablerepo)]
myrepoq.extend(r_cmd)
if self.releasever:
                myrepoq.append('--releasever=%s' % self.releasever)
cmd = myrepoq + ["--pkgnarrow=updates", "--qf", qf, pkgspec]
rc, out, err = self.module.run_command(cmd)
if rc == 0:
return set(p for p in out.split('\n') if p.strip())
else:
self.module.fail_json(msg='Error from repoquery: %s: %s' % (cmd, err))
return set()
def what_provides(self, repoq, req_spec, qf=def_qf):
if not repoq:
pkgs = []
try:
try:
pkgs = self.yum_base.returnPackagesByDep(req_spec) + \
self.yum_base.returnInstalledPackagesByDep(req_spec)
except Exception as e:
# If a repo with `repo_gpgcheck=1` is added and the repo GPG
# key was never accepted, querying this repo will throw an
# error: 'repomd.xml signature could not be verified'. In that
# situation we need to run `yum -y makecache` which will accept
# the key and try again.
if 'repomd.xml signature could not be verified' in to_native(e):
if self.releasever:
self.module.run_command(self.yum_basecmd + ['makecache'] + ['--releasever=%s' % self.releasever])
else:
self.module.run_command(self.yum_basecmd + ['makecache'])
pkgs = self.yum_base.returnPackagesByDep(req_spec) + \
self.yum_base.returnInstalledPackagesByDep(req_spec)
else:
raise
if not pkgs:
e, m, _ = self.yum_base.pkgSack.matchPackageNames([req_spec])
pkgs.extend(e)
pkgs.extend(m)
e, m, _ = self.yum_base.rpmdb.matchPackageNames([req_spec])
pkgs.extend(e)
pkgs.extend(m)
except Exception as e:
self.module.fail_json(msg="Failure talking to yum: %s" % to_native(e))
return set(self.po_to_envra(p) for p in pkgs)
else:
myrepoq = list(repoq)
r_cmd = ['--disablerepo', ','.join(self.disablerepo)]
myrepoq.extend(r_cmd)
r_cmd = ['--enablerepo', ','.join(self.enablerepo)]
myrepoq.extend(r_cmd)
if self.releasever:
                myrepoq.extend(['--releasever=%s' % self.releasever])
cmd = myrepoq + ["--qf", qf, "--whatprovides", req_spec]
rc, out, err = self.module.run_command(cmd)
cmd = myrepoq + ["--qf", qf, req_spec]
rc2, out2, err2 = self.module.run_command(cmd)
if rc == 0 and rc2 == 0:
out += out2
pkgs = set([p for p in out.split('\n') if p.strip()])
if not pkgs:
pkgs = self.is_installed(repoq, req_spec, qf=qf)
return pkgs
else:
self.module.fail_json(msg='Error from repoquery: %s: %s' % (cmd, err + err2))
return set()
def transaction_exists(self, pkglist):
"""
checks the package list to see if any packages are
involved in an incomplete transaction
"""
conflicts = []
if not transaction_helpers:
return conflicts
# first, we create a list of the package 'nvreas'
# so we can compare the pieces later more easily
pkglist_nvreas = (splitFilename(pkg) for pkg in pkglist)
# next, we build the list of packages that are
# contained within an unfinished transaction
unfinished_transactions = find_unfinished_transactions()
for trans in unfinished_transactions:
steps = find_ts_remaining(trans)
for step in steps:
# the action is install/erase/etc., but we only
# care about the package spec contained in the step
(action, step_spec) = step
(n, v, r, e, a) = splitFilename(step_spec)
# and see if that spec is in the list of packages
# requested for installation/updating
for pkg in pkglist_nvreas:
# if the name and arch match, we're going to assume
# this package is part of a pending transaction
# the label is just for display purposes
label = "%s-%s" % (n, a)
if n == pkg[0] and a == pkg[4]:
if label not in conflicts:
                            conflicts.append(label)
break
return conflicts
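    # For reference: splitFilename() breaks an NEVRA string into a
    # (name, version, release, epoch, arch) tuple, e.g.
    # splitFilename('bash-4.2.46-34.el7.x86_64') ->
    # ('bash', '4.2.46', '34.el7', '', 'x86_64')  (epoch is '' when absent)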
def local_envra(self, path):
"""return envra of a local rpm passed in"""
ts = rpm.TransactionSet()
ts.setVSFlags(rpm._RPMVSF_NOSIGNATURES)
fd = os.open(path, os.O_RDONLY)
try:
header = ts.hdrFromFdno(fd)
        except rpm.error:
return None
finally:
os.close(fd)
return '%s:%s-%s-%s.%s' % (
header[rpm.RPMTAG_EPOCH] or '0',
header[rpm.RPMTAG_NAME],
header[rpm.RPMTAG_VERSION],
header[rpm.RPMTAG_RELEASE],
header[rpm.RPMTAG_ARCH]
)
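    # e.g. local_envra('/tmp/bash-4.2.46-34.el7.x86_64.rpm') would return
    # '0:bash-4.2.46-34.el7.x86_64' (illustrative file; a missing epoch becomes '0')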
@contextmanager
def set_env_proxy(self):
# setting system proxy environment and saving old, if exists
namepass = ""
scheme = ["http", "https"]
old_proxy_env = [os.getenv("http_proxy"), os.getenv("https_proxy")]
try:
# "_none_" is a special value to disable proxy in yum.conf/*.repo
if self.yum_base.conf.proxy and self.yum_base.conf.proxy not in ("_none_",):
if self.yum_base.conf.proxy_username:
namepass = namepass + self.yum_base.conf.proxy_username
proxy_url = self.yum_base.conf.proxy
if self.yum_base.conf.proxy_password:
namepass = namepass + ":" + self.yum_base.conf.proxy_password
elif '@' in self.yum_base.conf.proxy:
namepass = self.yum_base.conf.proxy.split('@')[0].split('//')[-1]
proxy_url = self.yum_base.conf.proxy.replace("{0}@".format(namepass), "")
if namepass:
namepass = namepass + '@'
for item in scheme:
os.environ[item + "_proxy"] = re.sub(
r"(http://)",
r"\g<1>" + namepass, proxy_url
)
else:
for item in scheme:
os.environ[item + "_proxy"] = self.yum_base.conf.proxy
yield
except yum.Errors.YumBaseError:
raise
finally:
            # revert to the previous system configuration
for item in scheme:
if os.getenv("{0}_proxy".format(item)):
del os.environ["{0}_proxy".format(item)]
if old_proxy_env[0]:
os.environ["http_proxy"] = old_proxy_env[0]
if old_proxy_env[1]:
os.environ["https_proxy"] = old_proxy_env[1]
def pkg_to_dict(self, pkgstr):
if pkgstr.strip() and pkgstr.count('|') == 5:
n, e, v, r, a, repo = pkgstr.split('|')
else:
return {'error_parsing': pkgstr}
d = {
'name': n,
'arch': a,
'epoch': e,
'release': r,
'version': v,
'repo': repo,
'envra': '%s:%s-%s-%s.%s' % (e, n, v, r, a)
}
if repo == 'installed':
d['yumstate'] = 'installed'
else:
d['yumstate'] = 'available'
return d
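    # Example: pkg_to_dict('bash|0|4.2.46|34.el7|x86_64|installed') returns
    # {'name': 'bash', 'arch': 'x86_64', 'epoch': '0', 'release': '34.el7',
    #  'version': '4.2.46', 'repo': 'installed',
    #  'envra': '0:bash-4.2.46-34.el7.x86_64', 'yumstate': 'installed'}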
def repolist(self, repoq, qf="%{repoid}"):
cmd = repoq + ["--qf", qf, "-a"]
if self.releasever:
cmd.extend(['--releasever=%s' % self.releasever])
rc, out, _ = self.module.run_command(cmd)
if rc == 0:
return set(p for p in out.split('\n') if p.strip())
else:
return []
def list_stuff(self, repoquerybin, stuff):
qf = "%{name}|%{epoch}|%{version}|%{release}|%{arch}|%{repoid}"
# is_installed goes through rpm instead of repoquery so it needs a slightly different format
is_installed_qf = "%{name}|%{epoch}|%{version}|%{release}|%{arch}|installed\n"
repoq = [repoquerybin, '--show-duplicates', '--plugins', '--quiet']
if self.disablerepo:
repoq.extend(['--disablerepo', ','.join(self.disablerepo)])
if self.enablerepo:
repoq.extend(['--enablerepo', ','.join(self.enablerepo)])
if self.installroot != '/':
repoq.extend(['--installroot', self.installroot])
if self.conf_file and os.path.exists(self.conf_file):
repoq += ['-c', self.conf_file]
if stuff == 'installed':
return [self.pkg_to_dict(p) for p in sorted(self.is_installed(repoq, '-a', qf=is_installed_qf)) if p.strip()]
if stuff == 'updates':
return [self.pkg_to_dict(p) for p in sorted(self.is_update(repoq, '-a', qf=qf)) if p.strip()]
if stuff == 'available':
return [self.pkg_to_dict(p) for p in sorted(self.is_available(repoq, '-a', qf=qf)) if p.strip()]
if stuff == 'repos':
return [dict(repoid=name, state='enabled') for name in sorted(self.repolist(repoq)) if name.strip()]
return [
self.pkg_to_dict(p) for p in
sorted(self.is_installed(repoq, stuff, qf=is_installed_qf) + self.is_available(repoq, stuff, qf=qf))
if p.strip()
]
def exec_install(self, items, action, pkgs, res):
cmd = self.yum_basecmd + [action] + pkgs
if self.releasever:
cmd.extend(['--releasever=%s' % self.releasever])
if self.module.check_mode:
self.module.exit_json(changed=True, results=res['results'], changes=dict(installed=pkgs))
else:
res['changes'] = dict(installed=pkgs)
lang_env = dict(LANG='C', LC_ALL='C', LC_MESSAGES='C')
rc, out, err = self.module.run_command(cmd, environ_update=lang_env)
if rc == 1:
for spec in items:
# Fail on invalid urls:
if ('://' in spec and ('No package %s available.' % spec in out or 'Cannot open: %s. Skipping.' % spec in err)):
err = 'Package at %s could not be installed' % spec
self.module.fail_json(changed=False, msg=err, rc=rc)
res['rc'] = rc
res['results'].append(out)
res['msg'] += err
res['changed'] = True
if ('Nothing to do' in out and rc == 0) or ('does not have any packages' in err):
res['changed'] = False
if rc != 0:
res['changed'] = False
self.module.fail_json(**res)
# Fail if yum prints 'No space left on device' because that means some
# packages failed executing their post install scripts because of lack of
# free space (e.g. kernel package couldn't generate initramfs). Note that
# yum can still exit with rc=0 even if some post scripts didn't execute
# correctly.
if 'No space left on device' in (out or err):
res['changed'] = False
res['msg'] = 'No space left on device'
self.module.fail_json(**res)
# FIXME - if we did an install - go and check the rpmdb to see if it actually installed
# look for each pkg in rpmdb
# look for each pkg via obsoletes
return res
def install(self, items, repoq):
pkgs = []
downgrade_pkgs = []
res = {}
res['results'] = []
res['msg'] = ''
res['rc'] = 0
res['changed'] = False
for spec in items:
pkg = None
downgrade_candidate = False
# check if pkgspec is installed (if possible for idempotence)
if spec.endswith('.rpm') or '://' in spec:
if '://' not in spec and not os.path.exists(spec):
res['msg'] += "No RPM file matching '%s' found on system" % spec
res['results'].append("No RPM file matching '%s' found on system" % spec)
res['rc'] = 127 # Ensure the task fails in with-loop
self.module.fail_json(**res)
if '://' in spec:
with self.set_env_proxy():
package = fetch_file(self.module, spec)
if not package.endswith('.rpm'):
                        # yum requires a local file to have the .rpm extension and we
                        # cannot guarantee that from a URL (redirects, proxies, etc.)
new_package_path = '%s.rpm' % package
os.rename(package, new_package_path)
package = new_package_path
else:
package = spec
# most common case is the pkg is already installed
envra = self.local_envra(package)
if envra is None:
self.module.fail_json(msg="Failed to get nevra information from RPM package: %s" % spec)
installed_pkgs = self.is_installed(repoq, envra)
if installed_pkgs:
res['results'].append('%s providing %s is already installed' % (installed_pkgs[0], package))
continue
(name, ver, rel, epoch, arch) = splitFilename(envra)
installed_pkgs = self.is_installed(repoq, name)
                # case of two packages with the same EVR but different archs, like x86_64 and i686
if len(installed_pkgs) == 2:
(cur_name0, cur_ver0, cur_rel0, cur_epoch0, cur_arch0) = splitFilename(installed_pkgs[0])
(cur_name1, cur_ver1, cur_rel1, cur_epoch1, cur_arch1) = splitFilename(installed_pkgs[1])
cur_epoch0 = cur_epoch0 or '0'
cur_epoch1 = cur_epoch1 or '0'
compare = compareEVR((cur_epoch0, cur_ver0, cur_rel0), (cur_epoch1, cur_ver1, cur_rel1))
if compare == 0 and cur_arch0 != cur_arch1:
for installed_pkg in installed_pkgs:
if installed_pkg.endswith(arch):
installed_pkgs = [installed_pkg]
if len(installed_pkgs) == 1:
installed_pkg = installed_pkgs[0]
(cur_name, cur_ver, cur_rel, cur_epoch, cur_arch) = splitFilename(installed_pkg)
cur_epoch = cur_epoch or '0'
compare = compareEVR((cur_epoch, cur_ver, cur_rel), (epoch, ver, rel))
# compare > 0 -> higher version is installed
# compare == 0 -> exact version is installed
# compare < 0 -> lower version is installed
if compare > 0 and self.allow_downgrade:
downgrade_candidate = True
elif compare >= 0:
continue
# else: if there are more installed packages with the same name, that would mean
                # kernel, gpg-pubkey or the like, so just let yum deal with it and try to install it
pkg = package
# groups
elif spec.startswith('@'):
if self.is_group_env_installed(spec):
continue
pkg = spec
# range requires or file-requires or pkgname :(
else:
# most common case is the pkg is already installed and done
# short circuit all the bs - and search for it as a pkg in is_installed
# if you find it then we're done
if not set(['*', '?']).intersection(set(spec)):
installed_pkgs = self.is_installed(repoq, spec, is_pkg=True)
if installed_pkgs:
res['results'].append('%s providing %s is already installed' % (installed_pkgs[0], spec))
continue
# look up what pkgs provide this
pkglist = self.what_provides(repoq, spec)
if not pkglist:
res['msg'] += "No package matching '%s' found available, installed or updated" % spec
res['results'].append("No package matching '%s' found available, installed or updated" % spec)
res['rc'] = 126 # Ensure the task fails in with-loop
self.module.fail_json(**res)
# if any of the packages are involved in a transaction, fail now
# so that we don't hang on the yum operation later
conflicts = self.transaction_exists(pkglist)
if conflicts:
res['msg'] += "The following packages have pending transactions: %s" % ", ".join(conflicts)
res['rc'] = 125 # Ensure the task fails in with-loop
self.module.fail_json(**res)
# if any of them are installed
# then nothing to do
found = False
for this in pkglist:
if self.is_installed(repoq, this, is_pkg=True):
found = True
res['results'].append('%s providing %s is already installed' % (this, spec))
break
# if the version of the pkg you have installed is not in ANY repo, but there are
# other versions in the repos (both higher and lower) then the previous checks won't work.
# so we check one more time. This really only works for pkgname - not for file provides or virt provides
# but virt provides should be all caught in what_provides on its own.
# highly irritating
if not found:
if self.is_installed(repoq, spec):
found = True
res['results'].append('package providing %s is already installed' % (spec))
if found:
continue
# Downgrade - The yum install command will only install or upgrade to a spec version, it will
# not install an older version of an RPM even if specified by the install spec. So we need to
# determine if this is a downgrade, and then use the yum downgrade command to install the RPM.
if self.allow_downgrade:
for package in pkglist:
# Get the NEVRA of the requested package using pkglist instead of spec because pkglist
# contains consistently-formatted package names returned by yum, rather than user input
# that is often not parsed correctly by splitFilename().
(name, ver, rel, epoch, arch) = splitFilename(package)
# Check if any version of the requested package is installed
inst_pkgs = self.is_installed(repoq, name, is_pkg=True)
if inst_pkgs:
(cur_name, cur_ver, cur_rel, cur_epoch, cur_arch) = splitFilename(inst_pkgs[0])
compare = compareEVR((cur_epoch, cur_ver, cur_rel), (epoch, ver, rel))
if compare > 0:
downgrade_candidate = True
else:
downgrade_candidate = False
break
# If package needs to be installed/upgraded/downgraded, then pass in the spec
# we could get here if nothing provides it but that's not
# the error we're catching here
pkg = spec
if downgrade_candidate and self.allow_downgrade:
downgrade_pkgs.append(pkg)
else:
pkgs.append(pkg)
if downgrade_pkgs:
res = self.exec_install(items, 'downgrade', downgrade_pkgs, res)
if pkgs:
res = self.exec_install(items, 'install', pkgs, res)
return res
def remove(self, items, repoq):
pkgs = []
res = {}
res['results'] = []
res['msg'] = ''
res['changed'] = False
res['rc'] = 0
for pkg in items:
if pkg.startswith('@'):
installed = self.is_group_env_installed(pkg)
else:
installed = self.is_installed(repoq, pkg)
if installed:
pkgs.append(pkg)
else:
res['results'].append('%s is not installed' % pkg)
if pkgs:
if self.module.check_mode:
self.module.exit_json(changed=True, results=res['results'], changes=dict(removed=pkgs))
else:
res['changes'] = dict(removed=pkgs)
# run an actual yum transaction
if self.autoremove:
cmd = self.yum_basecmd + ["autoremove"] + pkgs
else:
cmd = self.yum_basecmd + ["remove"] + pkgs
rc, out, err = self.module.run_command(cmd)
res['rc'] = rc
res['results'].append(out)
res['msg'] = err
if rc != 0:
if self.autoremove and 'No such command' in out:
self.module.fail_json(msg='Version of YUM too old for autoremove: Requires yum 3.4.3 (RHEL/CentOS 7+)')
else:
self.module.fail_json(**res)
# compile the results into one batch. If anything is changed
# then mark changed
            # at the end - if we've ended up failed then fail out of the rest
# of the process
# at this point we check to see if the pkg is no longer present
self._yum_base = None # previous YumBase package index is now invalid
for pkg in pkgs:
if pkg.startswith('@'):
installed = self.is_group_env_installed(pkg)
else:
installed = self.is_installed(repoq, pkg, is_pkg=True)
if installed:
# Return a message so it's obvious to the user why yum failed
# and which package couldn't be removed. More details:
# https://github.com/ansible/ansible/issues/35672
res['msg'] = "Package '%s' couldn't be removed!" % pkg
self.module.fail_json(**res)
res['changed'] = True
return res
def run_check_update(self):
# run check-update to see if we have packages pending
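        # NB: `yum check-update` signals through its exit status: rc=0 means
        # nothing to update, rc=100 means updates are available, and rc=1
        # means an error occurred (latest() below branches on exactly these)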
if self.releasever:
rc, out, err = self.module.run_command(self.yum_basecmd + ['check-update'] + ['--releasever=%s' % self.releasever])
else:
rc, out, err = self.module.run_command(self.yum_basecmd + ['check-update'])
return rc, out, err
@staticmethod
def parse_check_update(check_update_output):
updates = {}
obsoletes = {}
        # remove incorrect newlines from long columns in `yum check-update` output;
        # yum's line wrapping can move the repo name onto the next line
#
# Meant to filter out sets of lines like:
# some_looooooooooooooooooooooooooooooooooooong_package_name 1:1.2.3-1.el7
# some-repo-label
#
# But it also needs to avoid catching lines like:
# Loading mirror speeds from cached hostfile
#
# ceph.x86_64 1:11.2.0-0.el7 ceph
# preprocess string and filter out empty lines so the regex below works
out = re.sub(r'\n[^\w]\W+(.*)', r' \1', check_update_output)
available_updates = out.split('\n')
# build update dictionary
for line in available_updates:
line = line.split()
# ignore irrelevant lines
# '*' in line matches lines like mirror lists:
# * base: mirror.corbina.net
            # len(line) not in (3, 6) means junk or a continuation line
            # len(line) == 6 means the line also lists the package it obsoletes
#
# FIXME: what is the '.' not in line conditional for?
if '*' in line or len(line) not in [3, 6] or '.' not in line[0]:
continue
pkg, version, repo = line[0], line[1], line[2]
name, dist = pkg.rsplit('.', 1)
updates.update({name: {'version': version, 'dist': dist, 'repo': repo}})
if len(line) == 6:
obsolete_pkg, obsolete_version, obsolete_repo = line[3], line[4], line[5]
obsolete_name, obsolete_dist = obsolete_pkg.rsplit('.', 1)
obsoletes.update({obsolete_name: {'version': obsolete_version, 'dist': obsolete_dist, 'repo': obsolete_repo}})
return updates, obsoletes
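    # Example: fed the 'ceph.x86_64 1:11.2.0-0.el7 ceph' line from the comment
    # above, parse_check_update() returns
    #   updates = {'ceph': {'version': '1:11.2.0-0.el7', 'dist': 'x86_64', 'repo': 'ceph'}}
    # and obsoletes stays empty (it is only filled from six-column lines).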
def latest(self, items, repoq):
res = {}
res['results'] = []
res['msg'] = ''
res['changed'] = False
res['rc'] = 0
pkgs = {}
pkgs['update'] = []
pkgs['install'] = []
updates = {}
obsoletes = {}
update_all = False
cmd = None
# determine if we're doing an update all
if '*' in items:
update_all = True
rc, out, err = self.run_check_update()
if rc == 0 and update_all:
res['results'].append('Nothing to do here, all packages are up to date')
return res
elif rc == 100:
updates, obsoletes = self.parse_check_update(out)
elif rc == 1:
res['msg'] = err
res['rc'] = rc
self.module.fail_json(**res)
if update_all:
cmd = self.yum_basecmd + ['update']
will_update = set(updates.keys())
will_update_from_other_package = dict()
else:
will_update = set()
will_update_from_other_package = dict()
for spec in items:
# some guess work involved with groups. update @<group> will install the group if missing
if spec.startswith('@'):
pkgs['update'].append(spec)
will_update.add(spec)
continue
# check if pkgspec is installed (if possible for idempotence)
# localpkg
if spec.endswith('.rpm') and '://' not in spec:
if not os.path.exists(spec):
res['msg'] += "No RPM file matching '%s' found on system" % spec
res['results'].append("No RPM file matching '%s' found on system" % spec)
res['rc'] = 127 # Ensure the task fails in with-loop
self.module.fail_json(**res)
# get the pkg e:name-v-r.arch
envra = self.local_envra(spec)
if envra is None:
self.module.fail_json(msg="Failed to get nevra information from RPM package: %s" % spec)
# local rpm files can't be updated
if self.is_installed(repoq, envra):
pkgs['update'].append(spec)
else:
pkgs['install'].append(spec)
continue
# URL
if '://' in spec:
# download package so that we can check if it's already installed
with self.set_env_proxy():
package = fetch_file(self.module, spec)
envra = self.local_envra(package)
if envra is None:
self.module.fail_json(msg="Failed to get nevra information from RPM package: %s" % spec)
# local rpm files can't be updated
if self.is_installed(repoq, envra):
pkgs['update'].append(spec)
else:
pkgs['install'].append(spec)
continue
# dep/pkgname - find it
if self.is_installed(repoq, spec):
pkgs['update'].append(spec)
else:
pkgs['install'].append(spec)
pkglist = self.what_provides(repoq, spec)
# FIXME..? may not be desirable to throw an exception here if a single package is missing
if not pkglist:
res['msg'] += "No package matching '%s' found available, installed or updated" % spec
res['results'].append("No package matching '%s' found available, installed or updated" % spec)
res['rc'] = 126 # Ensure the task fails in with-loop
self.module.fail_json(**res)
nothing_to_do = True
for pkg in pkglist:
if spec in pkgs['install'] and self.is_available(repoq, pkg):
nothing_to_do = False
break
# this contains the full NVR and spec could contain wildcards
# or virtual provides (like "python-*" or "smtp-daemon") while
# updates contains name only.
pkgname, _, _, _, _ = splitFilename(pkg)
if spec in pkgs['update'] and pkgname in updates:
nothing_to_do = False
will_update.add(spec)
# Massage the updates list
if spec != pkgname:
# For reporting what packages would be updated more
# succinctly
will_update_from_other_package[spec] = pkgname
break
if not self.is_installed(repoq, spec) and self.update_only:
res['results'].append("Packages providing %s not installed due to update_only specified" % spec)
continue
if nothing_to_do:
res['results'].append("All packages providing %s are up to date" % spec)
continue
# if any of the packages are involved in a transaction, fail now
# so that we don't hang on the yum operation later
conflicts = self.transaction_exists(pkglist)
if conflicts:
res['msg'] += "The following packages have pending transactions: %s" % ", ".join(conflicts)
res['results'].append("The following packages have pending transactions: %s" % ", ".join(conflicts))
res['rc'] = 128 # Ensure the task fails in with-loop
self.module.fail_json(**res)
# check_mode output
to_update = []
for w in will_update:
if w.startswith('@'):
to_update.append((w, None))
elif w not in updates:
other_pkg = will_update_from_other_package[w]
to_update.append(
(
w,
'because of (at least) %s-%s.%s from %s' % (
other_pkg,
updates[other_pkg]['version'],
updates[other_pkg]['dist'],
updates[other_pkg]['repo']
)
)
)
else:
to_update.append((w, '%s.%s from %s' % (updates[w]['version'], updates[w]['dist'], updates[w]['repo'])))
if self.update_only:
res['changes'] = dict(installed=[], updated=to_update)
else:
res['changes'] = dict(installed=pkgs['install'], updated=to_update)
if obsoletes:
res['obsoletes'] = obsoletes
# return results before we actually execute stuff
if self.module.check_mode:
if will_update or pkgs['install']:
res['changed'] = True
return res
        # only the "update all" cmd exists at this point; the other code paths
        # build their cmd from yum_basecmd, which already carries --releasever
        if cmd and self.releasever:
            cmd.extend(['--releasever=%s' % self.releasever])
# run commands
if cmd: # update all
rc, out, err = self.module.run_command(cmd)
res['changed'] = True
elif self.update_only:
if pkgs['update']:
cmd = self.yum_basecmd + ['update'] + pkgs['update']
lang_env = dict(LANG='C', LC_ALL='C', LC_MESSAGES='C')
rc, out, err = self.module.run_command(cmd, environ_update=lang_env)
out_lower = out.strip().lower()
if not out_lower.endswith("no packages marked for update") and \
not out_lower.endswith("nothing to do"):
res['changed'] = True
else:
rc, out, err = [0, '', '']
elif pkgs['install'] or will_update and not self.update_only:
cmd = self.yum_basecmd + ['install'] + pkgs['install'] + pkgs['update']
lang_env = dict(LANG='C', LC_ALL='C', LC_MESSAGES='C')
rc, out, err = self.module.run_command(cmd, environ_update=lang_env)
out_lower = out.strip().lower()
if not out_lower.endswith("no packages marked for update") and \
not out_lower.endswith("nothing to do"):
res['changed'] = True
else:
rc, out, err = [0, '', '']
res['rc'] = rc
res['msg'] += err
res['results'].append(out)
if rc:
res['failed'] = True
return res
def ensure(self, repoq):
pkgs = self.names
# autoremove was provided without `name`
if not self.names and self.autoremove:
pkgs = []
self.state = 'absent'
if self.conf_file and os.path.exists(self.conf_file):
self.yum_basecmd += ['-c', self.conf_file]
if repoq:
repoq += ['-c', self.conf_file]
if self.skip_broken:
self.yum_basecmd.extend(['--skip-broken'])
if self.disablerepo:
self.yum_basecmd.extend(['--disablerepo=%s' % ','.join(self.disablerepo)])
if self.enablerepo:
self.yum_basecmd.extend(['--enablerepo=%s' % ','.join(self.enablerepo)])
if self.enable_plugin:
self.yum_basecmd.extend(['--enableplugin', ','.join(self.enable_plugin)])
if self.disable_plugin:
self.yum_basecmd.extend(['--disableplugin', ','.join(self.disable_plugin)])
if self.exclude:
e_cmd = ['--exclude=%s' % ','.join(self.exclude)]
self.yum_basecmd.extend(e_cmd)
if self.disable_excludes:
self.yum_basecmd.extend(['--disableexcludes=%s' % self.disable_excludes])
if self.download_only:
self.yum_basecmd.extend(['--downloadonly'])
if self.download_dir:
self.yum_basecmd.extend(['--downloaddir=%s' % self.download_dir])
if self.releasever:
self.yum_basecmd.extend(['--releasever=%s' % self.releasever])
if self.installroot != '/':
            # do not set up installroot by default, because of the error
            # CRITICAL:yum.cli:Config Error: Error accessing file for config file:////etc/yum.conf
            # in old yum versions (like in CentOS 6.6)
e_cmd = ['--installroot=%s' % self.installroot]
self.yum_basecmd.extend(e_cmd)
if self.state in ('installed', 'present', 'latest'):
""" The need of this entire if conditional has to be changed
this function is the ensure function that is called
in the main section.
This conditional tends to disable/enable repo for
install present latest action, same actually
can be done for remove and absent action
As solution I would advice to cal
try: self.yum_base.repos.disableRepo(disablerepo)
and
try: self.yum_base.repos.enableRepo(enablerepo)
right before any yum_cmd is actually called regardless
of yum action.
Please note that enable/disablerepo options are general
options, this means that we can call those with any action
option. https://linux.die.net/man/8/yum
This docstring will be removed together when issue: #21619
will be solved.
This has been triggered by: #19587
"""
if self.update_cache:
self.module.run_command(self.yum_basecmd + ['clean', 'expire-cache'])
try:
current_repos = self.yum_base.repos.repos.keys()
if self.enablerepo:
try:
new_repos = self.yum_base.repos.repos.keys()
for i in new_repos:
if i not in current_repos:
rid = self.yum_base.repos.getRepo(i)
a = rid.repoXML.repoid # nopep8 - https://github.com/ansible/ansible/pull/21475#pullrequestreview-22404868
current_repos = new_repos
except yum.Errors.YumBaseError as e:
self.module.fail_json(msg="Error setting/accessing repos: %s" % to_native(e))
except yum.Errors.YumBaseError as e:
self.module.fail_json(msg="Error accessing repos: %s" % to_native(e))
if self.state == 'latest' or self.update_only:
if self.disable_gpg_check:
self.yum_basecmd.append('--nogpgcheck')
if self.security:
self.yum_basecmd.append('--security')
if self.bugfix:
self.yum_basecmd.append('--bugfix')
res = self.latest(pkgs, repoq)
elif self.state in ('installed', 'present'):
if self.disable_gpg_check:
self.yum_basecmd.append('--nogpgcheck')
res = self.install(pkgs, repoq)
elif self.state in ('removed', 'absent'):
res = self.remove(pkgs, repoq)
else:
# should be caught by AnsibleModule argument_spec
self.module.fail_json(
msg="we should never get here unless this all failed",
changed=False,
results='',
errors='unexpected state'
)
return res
@staticmethod
def has_yum():
return HAS_YUM_PYTHON
def run(self):
"""
actually execute the module code backend
"""
error_msgs = []
if not HAS_RPM_PYTHON:
error_msgs.append('The Python 2 bindings for rpm are needed for this module. If you require Python 3 support use the `dnf` Ansible module instead.')
if not HAS_YUM_PYTHON:
error_msgs.append('The Python 2 yum module is needed for this module. If you require Python 3 support use the `dnf` Ansible module instead.')
self.wait_for_lock()
if error_msgs:
self.module.fail_json(msg='. '.join(error_msgs))
# fedora will redirect yum to dnf, which has incompatibilities
# with how this module expects yum to operate. If yum-deprecated
# is available, use that instead to emulate the old behaviors.
if self.module.get_bin_path('yum-deprecated'):
yumbin = self.module.get_bin_path('yum-deprecated')
else:
yumbin = self.module.get_bin_path('yum')
# need debug level 2 to get 'Nothing to do' for groupinstall.
self.yum_basecmd = [yumbin, '-d', '2', '-y']
if self.update_cache and not self.names and not self.list:
rc, stdout, stderr = self.module.run_command(self.yum_basecmd + ['clean', 'expire-cache'])
if rc == 0:
self.module.exit_json(
changed=False,
msg="Cache updated",
rc=rc,
results=[]
)
else:
self.module.exit_json(
changed=False,
msg="Failed to update cache",
rc=rc,
results=[stderr],
)
repoquerybin = self.module.get_bin_path('repoquery', required=False)
if self.install_repoquery and not repoquerybin and not self.module.check_mode:
yum_path = self.module.get_bin_path('yum')
if yum_path:
if self.releasever:
self.module.run_command('%s -y install yum-utils --releasever %s' % (yum_path, self.releasever))
else:
self.module.run_command('%s -y install yum-utils' % yum_path)
repoquerybin = self.module.get_bin_path('repoquery', required=False)
if self.list:
if not repoquerybin:
self.module.fail_json(msg="repoquery is required to use list= with this module. Please install the yum-utils package.")
results = {'results': self.list_stuff(repoquerybin, self.list)}
else:
# If rhn-plugin is installed and no rhn-certificate is available on
# the system then users will see an error message using the yum API.
# Use repoquery in those cases.
repoquery = None
try:
yum_plugins = self.yum_base.plugins._plugins
except AttributeError:
pass
else:
if 'rhnplugin' in yum_plugins:
if repoquerybin:
repoquery = [repoquerybin, '--show-duplicates', '--plugins', '--quiet']
if self.installroot != '/':
repoquery.extend(['--installroot', self.installroot])
if self.disable_excludes:
# repoquery does not support --disableexcludes,
# so make a temp copy of yum.conf and get rid of the 'exclude=' line there
try:
with open('/etc/yum.conf', 'r') as f:
content = f.readlines()
tmp_conf_file = tempfile.NamedTemporaryFile(dir=self.module.tmpdir, delete=False)
self.module.add_cleanup_file(tmp_conf_file.name)
tmp_conf_file.writelines([c for c in content if not c.startswith("exclude=")])
tmp_conf_file.close()
except Exception as e:
self.module.fail_json(msg="Failure setting up repoquery: %s" % to_native(e))
repoquery.extend(['-c', tmp_conf_file.name])
results = self.ensure(repoquery)
if repoquery:
results['msg'] = '%s %s' % (
results.get('msg', ''),
'Warning: Due to potential bad behaviour with rhnplugin and certificates, used slower repoquery calls instead of Yum API.'
)
self.module.exit_json(**results)
def main():
# state=installed name=pkgspec
# state=removed name=pkgspec
# state=latest name=pkgspec
#
# informational commands:
# list=installed
# list=updates
# list=available
# list=repos
# list=pkgspec
yumdnf_argument_spec['argument_spec']['use_backend'] = dict(default='auto', choices=['auto', 'yum', 'yum4', 'dnf'])
module = AnsibleModule(
**yumdnf_argument_spec
)
module_implementation = YumModule(module)
module_implementation.run()
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,628 |
ansible-galaxy installs collection to path ansible-playbook doesn't use if collections_path ends with 'ansible_collections'
|
##### SUMMARY
<!--- Explain the problem briefly below -->
I have a collection that contains two roles. I have installed the collection from a private GitHub repo using `ansible-galaxy` via a requirements.yml file. I have a playbook that calls the collection, the two roles from the collection and a third party role.
I have tried multiple ways to refer to the collection roles in the playbook but each time it errors `ERROR! the role 'X' was not found in <search paths>`
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ansible-playbook
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.3
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.8 (default, Apr 16 2020, 01:36:27) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
COLLECTIONS_PATHS(/etc/ansible/ansible.cfg) = ['/home/ansible_collections']
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = ['/home/ansible_roles']
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
CentOS 8
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
I have installed my collection `mynamespace.my_collection` which simply contains two roles `mynamespace.my_role1` and `mynamespace.my_role2`. This is listed in the `requirements.yml` file detailed below and is pulled from our private GitHub repo. The requirements are installed using:
```
ansible-galaxy collection install -r /home/ansible/requirements.yml --force
ansible-galaxy role install -r /home/ansible/requirements.yml --force
```
The playbook is run using `ansible-playbook play.yml`. I have tried defining the collection roles in the playbook in all ways I can think of including:
* `mynamespace.my_collection.my_role1`
* `mynamespace.my_role1`
* `my_role1`
<!--- Paste example playbooks or commands between quotes below -->
`play.yml`
```yaml
---
- hosts: all
collections:
- mynamespace.my_collection
roles:
- mynamespace.my_collection.my_role1
- mynamespace.my_collection.my_role2
- geerlingguy.repo-remi
```
`requirements.yml`
```yaml
---
collections:
- name: [email protected]:mynamespace/my_collection.git
roles:
- name: geerlingguy.repo-remi
version: "2.0.1"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
According to [this link](https://docs.ansible.com/ansible/latest/dev_guide/migrating_roles.html#comparing-standalone-roles-to-collection-roles) the FQCN for the role should be valid and the playbook should complete.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
`ansible-playbook` errors saying that the role could not be found. I did post this over at [StackOverflow](https://stackoverflow.com/q/64836917/14638922). I absolutely expected someone to call me out on something I'd done wrong at this point but that hasn't happened yet. My understanding (I've only been using Ansible for a week or two) is that this should work.
<!--- Paste verbatim command output between quotes -->
```paste below
ansible-playbook 2.10.3
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
executable location = /usr/local/bin/ansible-playbook
python version = 3.6.8 (default, Apr 16 2020, 01:36:27) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
ERROR! the role 'mynamespace.my_collection.my_role1' was not found in mynamespace.my_collection:ansible.legacy:/home/ansible/roles:/home/ansible_roles:/home/ansible
The error appears to be in '/home/ansible/play.yml': line 42, column 7, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
roles:
- mynamespace.my_collection.my_role1
^ here
```
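A minimal workaround sketch, assuming `ansible-galaxy` appends an `ansible_collections` directory to the configured install path (its documented default) and that the loader does the same join when searching (see `_collection_finder.py` below): point `collections_paths` at the parent directory so both tools agree on `/home/ansible_collections`:
```paste below
COLLECTIONS_PATHS(/etc/ansible/ansible.cfg) = ['/home']
```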
|
https://github.com/ansible/ansible/issues/72628
|
https://github.com/ansible/ansible/pull/72648
|
5157a92139b04fef32d38498815084a27adcd758
|
d22804c4fbb85010c4589836cd59284c2cf11f9e
| 2020-11-15T12:57:32Z |
python
| 2020-12-15T00:30:13Z |
changelogs/fragments/colleciton_flex_ac_dir_paths.yml
| |
lib/ansible/collections/list.py
|
# (c) 2019 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
from collections import defaultdict
from ansible.errors import AnsibleError
from ansible.collections import is_collection_path
from ansible.module_utils._text import to_bytes
from ansible.utils.collection_loader import AnsibleCollectionConfig
from ansible.utils.display import Display
display = Display()
def list_valid_collection_paths(search_paths=None, warn=False):
"""
Filter out non existing or invalid search_paths for collections
:param search_paths: list of text-string paths, if none load default config
:param warn: display warning if search_path does not exist
:return: subset of original list
"""
if search_paths is None:
search_paths = []
search_paths.extend(AnsibleCollectionConfig.collection_paths)
for path in search_paths:
b_path = to_bytes(path)
if not os.path.exists(b_path):
# warn for missing, but not if default
if warn:
display.warning("The configured collection path {0} does not exist.".format(path))
continue
if not os.path.isdir(b_path):
if warn:
display.warning("The configured collection path {0}, exists, but it is not a directory.".format(path))
continue
yield path
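# Usage sketch (paths are illustrative):
#   list(list_valid_collection_paths(['/home/ansible_collections', '/missing'], warn=True))
#   -> ['/home/ansible_collections']   # '/missing' is skipped with a warning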
def list_collection_dirs(search_paths=None, coll_filter=None):
"""
Return paths for the specific collections found in passed or configured search paths
:param search_paths: list of text-string paths, if none load default config
:param coll_filter: limit collections to just the specific namespace or collection, if None all are returned
:return: list of collection directory paths
"""
collection = None
namespace = None
if coll_filter is not None:
if '.' in coll_filter:
try:
(namespace, collection) = coll_filter.split('.')
except ValueError:
raise AnsibleError("Invalid collection pattern supplied: %s" % coll_filter)
else:
namespace = coll_filter
collections = defaultdict(dict)
for path in list_valid_collection_paths(search_paths):
b_path = to_bytes(path)
if os.path.isdir(b_path):
b_coll_root = to_bytes(os.path.join(path, 'ansible_collections'))
if os.path.exists(b_coll_root) and os.path.isdir(b_coll_root):
if namespace is None:
namespaces = os.listdir(b_coll_root)
else:
namespaces = [namespace]
for ns in namespaces:
b_namespace_dir = os.path.join(b_coll_root, to_bytes(ns))
if os.path.isdir(b_namespace_dir):
if collection is None:
colls = os.listdir(b_namespace_dir)
else:
colls = [collection]
for mycoll in colls:
# skip dupe collections as they will be masked in execution
if mycoll not in collections[ns]:
b_coll = to_bytes(mycoll)
b_coll_dir = os.path.join(b_namespace_dir, b_coll)
if is_collection_path(b_coll_dir):
collections[ns][mycoll] = b_coll_dir
yield b_coll_dir
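# Usage sketch (illustrative names; note the extra 'ansible_collections' segment
# this function appends to each configured path):
#   for b_dir in list_collection_dirs(search_paths=['/home'], coll_filter='mynamespace.my_collection'):
#       ...  # yields b'/home/ansible_collections/mynamespace/my_collection' if valid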
|
lib/ansible/utils/collection_loader/_collection_finder.py
|
# (c) 2019 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import os.path
import pkgutil
import re
import sys
# DO NOT add new non-stdlib import deps here, this loader is used by external tools (eg ansible-test import sanity)
# that only allow stdlib and module_utils
from ansible.module_utils.common.text.converters import to_native, to_text, to_bytes
from ansible.module_utils.six import string_types, PY3
from ._collection_config import AnsibleCollectionConfig
from contextlib import contextmanager
from types import ModuleType
try:
from importlib import import_module
except ImportError:
def import_module(name):
__import__(name)
return sys.modules[name]
try:
from importlib import reload as reload_module
except ImportError:
# 2.7 has a global reload function instead...
reload_module = reload # pylint:disable=undefined-variable
# NB: this supports import sanity test providing a different impl
try:
from ._collection_meta import _meta_yml_to_dict
except ImportError:
_meta_yml_to_dict = None
if not hasattr(__builtins__, 'ModuleNotFoundError'):
# this was introduced in Python 3.6
ModuleNotFoundError = ImportError
PB_EXTENSIONS = ('.yml', '.yaml')
class _AnsibleCollectionFinder:
def __init__(self, paths=None, scan_sys_paths=True):
# TODO: accept metadata loader override
self._ansible_pkg_path = to_native(os.path.dirname(to_bytes(sys.modules['ansible'].__file__)))
if isinstance(paths, string_types):
paths = [paths]
elif paths is None:
paths = []
# expand any placeholders in configured paths
paths = [os.path.expanduser(to_native(p, errors='surrogate_or_strict')) for p in paths]
if scan_sys_paths:
# append all sys.path entries with an ansible_collections package
for path in sys.path:
if (
path not in paths and
os.path.isdir(to_bytes(
os.path.join(path, 'ansible_collections'),
errors='surrogate_or_strict',
))
):
paths.append(path)
self._n_configured_paths = paths
self._n_cached_collection_paths = None
self._n_cached_collection_qualified_paths = None
self._n_playbook_paths = []
@classmethod
def _remove(cls):
for mps in sys.meta_path:
if isinstance(mps, _AnsibleCollectionFinder):
sys.meta_path.remove(mps)
# remove any path hooks that look like ours
for ph in sys.path_hooks:
if hasattr(ph, '__self__') and isinstance(ph.__self__, _AnsibleCollectionFinder):
sys.path_hooks.remove(ph)
# zap any cached path importer cache entries that might refer to us
sys.path_importer_cache.clear()
AnsibleCollectionConfig._collection_finder = None
# validate via the public property that we really killed it
if AnsibleCollectionConfig.collection_finder is not None:
raise AssertionError('_AnsibleCollectionFinder remove did not reset AnsibleCollectionConfig.collection_finder')
def _install(self):
self._remove()
sys.meta_path.insert(0, self)
sys.path_hooks.insert(0, self._ansible_collection_path_hook)
AnsibleCollectionConfig.collection_finder = self
def _ansible_collection_path_hook(self, path):
path = to_native(path)
interesting_paths = self._n_cached_collection_qualified_paths
if not interesting_paths:
interesting_paths = [os.path.join(p, 'ansible_collections') for p in
self._n_collection_paths]
interesting_paths.insert(0, self._ansible_pkg_path)
self._n_cached_collection_qualified_paths = interesting_paths
if any(path.startswith(p) for p in interesting_paths):
return _AnsiblePathHookFinder(self, path)
raise ImportError('not interested')
@property
def _n_collection_paths(self):
paths = self._n_cached_collection_paths
if not paths:
self._n_cached_collection_paths = paths = self._n_playbook_paths + self._n_configured_paths
return paths
def set_playbook_paths(self, playbook_paths):
if isinstance(playbook_paths, string_types):
playbook_paths = [playbook_paths]
# track visited paths; we have to preserve the dir order as-passed in case there are duplicate collections (first one wins)
added_paths = set()
# de-dupe
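        # (ordered de-dupe trick: set.add() returns None, so the expression below
        #  records each path in added_paths while keeping only its first occurrence)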
self._n_playbook_paths = [os.path.join(to_native(p), 'collections') for p in playbook_paths if not (p in added_paths or added_paths.add(p))]
self._n_cached_collection_paths = None
# HACK: playbook CLI sets this relatively late, so we've already loaded some packages whose paths might depend on this. Fix those up.
# NB: this should NOT be used for late additions; ideally we'd fix the playbook dir setup earlier in Ansible init
# to prevent this from occurring
for pkg in ['ansible_collections', 'ansible_collections.ansible']:
self._reload_hack(pkg)
def _reload_hack(self, fullname):
m = sys.modules.get(fullname)
if not m:
return
reload_module(m)
def find_module(self, fullname, path=None):
# Figure out what's being asked for, and delegate to a special-purpose loader
split_name = fullname.split('.')
toplevel_pkg = split_name[0]
module_to_find = split_name[-1]
part_count = len(split_name)
if toplevel_pkg not in ['ansible', 'ansible_collections']:
# not interested in anything other than ansible_collections (and limited cases under ansible)
return None
# sanity check what we're getting from import, canonicalize path values
if part_count == 1:
if path:
raise ValueError('path should not be specified for top-level packages (trying to find {0})'.format(fullname))
else:
# seed the path to the configured collection roots
path = self._n_collection_paths
if part_count > 1 and path is None:
raise ValueError('path must be specified for subpackages (trying to find {0})'.format(fullname))
# NB: actual "find"ing is delegated to the constructors on the various loaders; they'll ImportError if not found
try:
if toplevel_pkg == 'ansible':
# something under the ansible package, delegate to our internal loader in case of redirections
return _AnsibleInternalRedirectLoader(fullname=fullname, path_list=path)
if part_count == 1:
return _AnsibleCollectionRootPkgLoader(fullname=fullname, path_list=path)
if part_count == 2: # ns pkg eg, ansible_collections, ansible_collections.somens
return _AnsibleCollectionNSPkgLoader(fullname=fullname, path_list=path)
elif part_count == 3: # collection pkg eg, ansible_collections.somens.somecoll
return _AnsibleCollectionPkgLoader(fullname=fullname, path_list=path)
# anything below the collection
return _AnsibleCollectionLoader(fullname=fullname, path_list=path)
except ImportError:
# TODO: log attempt to load context
return None
# Implements a path_hook finder for iter_modules (since it's only path based). This finder does not need to actually
# function as a finder in most cases, since our meta_path finder is consulted first for *almost* everything, except
# pkgutil.iter_modules, and under py2, pkgutil.get_data if the parent package passed has not been loaded yet.
class _AnsiblePathHookFinder:
def __init__(self, collection_finder, pathctx):
# when called from a path_hook, find_module doesn't usually get the path arg, so this provides our context
self._pathctx = to_native(pathctx)
self._collection_finder = collection_finder
if PY3:
# cache the native FileFinder (take advantage of its filesystem cache for future find/load requests)
self._file_finder = None
# class init is fun: this method has a self arg that won't get used
def _get_filefinder_path_hook(self=None):
_file_finder_hook = None
if PY3:
# try to find the FileFinder hook to call for fallback path-based imports in Py3
_file_finder_hook = [ph for ph in sys.path_hooks if 'FileFinder' in repr(ph)]
if len(_file_finder_hook) != 1:
raise Exception('need exactly one FileFinder import hook (found {0})'.format(len(_file_finder_hook)))
_file_finder_hook = _file_finder_hook[0]
return _file_finder_hook
_filefinder_path_hook = _get_filefinder_path_hook()
def find_module(self, fullname, path=None):
# we ignore the passed in path here- use what we got from the path hook init
split_name = fullname.split('.')
toplevel_pkg = split_name[0]
if toplevel_pkg == 'ansible_collections':
# collections content? delegate to the collection finder
return self._collection_finder.find_module(fullname, path=[self._pathctx])
else:
# Something else; we'd normally restrict this to `ansible` descendant modules so that any weird loader
# behavior that arbitrary Python modules have can be serviced by those loaders. In some dev/test
# scenarios (eg a venv under a collection) our path_hook signs us up to load non-Ansible things, and
# it's too late by the time we've reached this point, but also too expensive for the path_hook to figure
# out what we *shouldn't* be loading with the limited info it has. So we'll just delegate to the
# normal path-based loader as best we can to service it. This also allows us to take advantage of Python's
# built-in FS caching and byte-compilation for most things.
if PY3:
# create or consult our cached file finder for this path
if not self._file_finder:
try:
self._file_finder = _AnsiblePathHookFinder._filefinder_path_hook(self._pathctx)
except ImportError:
# FUTURE: log at a high logging level? This is normal for things like python36.zip on the path, but
# might not be in some other situation...
return None
spec = self._file_finder.find_spec(fullname)
if not spec:
return None
return spec.loader
else:
# call py2's internal loader
return pkgutil.ImpImporter(self._pathctx).find_module(fullname)
def iter_modules(self, prefix):
# NB: this currently represents only what's on disk, and does not handle package redirection
return _iter_modules_impl([self._pathctx], prefix)
def __repr__(self):
return "{0}(path='{1}')".format(self.__class__.__name__, self._pathctx)
class _AnsibleCollectionPkgLoaderBase:
_allows_package_code = False
def __init__(self, fullname, path_list=None):
self._fullname = fullname
self._redirect_module = None
self._split_name = fullname.split('.')
self._rpart_name = fullname.rpartition('.')
self._parent_package_name = self._rpart_name[0] # eg ansible_collections for ansible_collections.somens, '' for toplevel
self._package_to_load = self._rpart_name[2] # eg somens for ansible_collections.somens
self._source_code_path = None
self._decoded_source = None
self._compiled_code = None
self._validate_args()
self._candidate_paths = self._get_candidate_paths([to_native(p) for p in path_list])
self._subpackage_search_paths = self._get_subpackage_search_paths(self._candidate_paths)
self._validate_final()
# allow subclasses to validate args and sniff split values before we start digging around
def _validate_args(self):
if self._split_name[0] != 'ansible_collections':
raise ImportError('this loader can only load packages from the ansible_collections package, not {0}'.format(self._fullname))
# allow subclasses to customize candidate path filtering
def _get_candidate_paths(self, path_list):
return [os.path.join(p, self._package_to_load) for p in path_list]
# allow subclasses to customize finding paths
def _get_subpackage_search_paths(self, candidate_paths):
# filter candidate paths for existence (NB: silently ignoring package init code and same-named modules)
return [p for p in candidate_paths if os.path.isdir(to_bytes(p))]
# allow subclasses to customize state validation/manipulation before we return the loader instance
def _validate_final(self):
return
@staticmethod
@contextmanager
def _new_or_existing_module(name, **kwargs):
# handle all-or-nothing sys.modules creation/use-existing/delete-on-exception-if-created behavior
created_module = False
module = sys.modules.get(name)
try:
if not module:
module = ModuleType(name)
created_module = True
sys.modules[name] = module
# always override the values passed, except name (allow reference aliasing)
for attr, value in kwargs.items():
setattr(module, attr, value)
yield module
except Exception:
if created_module:
if sys.modules.get(name):
sys.modules.pop(name)
raise
# basic module/package location support
# NB: this does not support distributed packages!
@staticmethod
def _module_file_from_path(leaf_name, path):
has_code = True
package_path = os.path.join(to_native(path), to_native(leaf_name))
module_path = None
# if the submodule is a package, assemble valid submodule paths, but stop looking for a module
if os.path.isdir(to_bytes(package_path)):
# is there a package init?
module_path = os.path.join(package_path, '__init__.py')
if not os.path.isfile(to_bytes(module_path)):
module_path = os.path.join(package_path, '__synthetic__')
has_code = False
else:
module_path = package_path + '.py'
package_path = None
if not os.path.isfile(to_bytes(module_path)):
raise ImportError('{0} not found at {1}'.format(leaf_name, path))
return module_path, has_code, package_path
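# Illustrative return values (assuming the paths exist on disk; names hypothetical):
#   plain module:                    ('/root/mymod.py', True, None)
#   package with __init__.py:        ('/root/mypkg/__init__.py', True, '/root/mypkg')
#   implicit package (no __init__):  ('/root/mypkg/__synthetic__', False, '/root/mypkg')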
def load_module(self, fullname):
# short-circuit redirect; we've already imported the redirected module, so just alias it and return it
if self._redirect_module:
sys.modules[self._fullname] = self._redirect_module
return self._redirect_module
# we're actually loading a module/package
module_attrs = dict(
__loader__=self,
__file__=self.get_filename(fullname),
__package__=self._parent_package_name # sane default for non-packages
)
# eg, I am a package
if self._subpackage_search_paths is not None: # empty is legal
module_attrs['__path__'] = self._subpackage_search_paths
module_attrs['__package__'] = fullname # per PEP366
with self._new_or_existing_module(fullname, **module_attrs) as module:
# execute the module's code in its namespace
code_obj = self.get_code(fullname)
if code_obj is not None: # things like NS packages that can't have code on disk will return None
exec(code_obj, module.__dict__)
return module
def is_package(self, fullname):
if fullname != self._fullname:
raise ValueError('this loader cannot answer is_package for {0}, only {1}'.format(fullname, self._fullname))
return self._subpackage_search_paths is not None
def get_source(self, fullname):
if self._decoded_source:
return self._decoded_source
if fullname != self._fullname:
raise ValueError('this loader cannot load source for {0}, only {1}'.format(fullname, self._fullname))
if not self._source_code_path:
return None
# FIXME: what do we want encoding/newline requirements to be?
self._decoded_source = self.get_data(self._source_code_path)
return self._decoded_source
def get_data(self, path):
if not path:
raise ValueError('a path must be specified')
# TODO: ensure we're being asked for a path below something we own
# TODO: try to handle redirects internally?
if not path[0] == '/':
# relative to current package, search package paths if possible (this may not be necessary)
# candidate_paths = [os.path.join(ssp, path) for ssp in self._subpackage_search_paths]
raise ValueError('relative resource paths not supported')
else:
candidate_paths = [path]
for p in candidate_paths:
b_path = to_bytes(p)
if os.path.isfile(b_path):
with open(b_path, 'rb') as fd:
return fd.read()
# HACK: if caller asks for __init__.py and the parent dir exists, return empty string (this keeps consistency
# with "collection subpackages don't require __init__.py" working everywhere with get_data)
elif b_path.endswith(b'__init__.py') and os.path.isdir(os.path.dirname(b_path)):
return ''
return None
def _synthetic_filename(self, fullname):
return '<ansible_synthetic_collection_package>'
def get_filename(self, fullname):
if fullname != self._fullname:
raise ValueError('this loader cannot find files for {0}, only {1}'.format(fullname, self._fullname))
filename = self._source_code_path
if not filename and self.is_package(fullname):
if len(self._subpackage_search_paths) == 1:
filename = os.path.join(self._subpackage_search_paths[0], '__synthetic__')
else:
filename = self._synthetic_filename(fullname)
return filename
def get_code(self, fullname):
if self._compiled_code:
return self._compiled_code
# this may or may not be an actual filename, but it's the value we'll use for __file__
filename = self.get_filename(fullname)
if not filename:
filename = '<string>'
source_code = self.get_source(fullname)
# for things like synthetic modules that really have no source on disk, don't return a code object at all
# vs things like an empty package init (which has an empty string source on disk)
if source_code is None:
return None
self._compiled_code = compile(source=source_code, filename=filename, mode='exec', flags=0, dont_inherit=True)
return self._compiled_code
def iter_modules(self, prefix):
return _iter_modules_impl(self._subpackage_search_paths, prefix)
def __repr__(self):
return '{0}(path={1})'.format(self.__class__.__name__, self._subpackage_search_paths or self._source_code_path)
class _AnsibleCollectionRootPkgLoader(_AnsibleCollectionPkgLoaderBase):
def _validate_args(self):
super(_AnsibleCollectionRootPkgLoader, self)._validate_args()
if len(self._split_name) != 1:
raise ImportError('this loader can only load the ansible_collections toplevel package, not {0}'.format(self._fullname))
# Implements Ansible's custom namespace package support.
# The ansible_collections package and one level down (collections namespaces) are Python namespace packages
# that search across all configured collection roots. The collection package (two levels down) is the first one found
# on the configured collection root path, and Python namespace package aggregation is not allowed at or below
# the collection. Implements implicit package (package dir) support for both Py2/3. Package init code is ignored
# by this loader.
class _AnsibleCollectionNSPkgLoader(_AnsibleCollectionPkgLoaderBase):
def _validate_args(self):
super(_AnsibleCollectionNSPkgLoader, self)._validate_args()
if len(self._split_name) != 2:
raise ImportError('this loader can only load collections namespace packages, not {0}'.format(self._fullname))
def _validate_final(self):
# special-case the `ansible` namespace, since `ansible.builtin` is magical
if not self._subpackage_search_paths and self._package_to_load != 'ansible':
raise ImportError('no {0} found in {1}'.format(self._package_to_load, self._candidate_paths))
# handles locating the actual collection package and associated metadata
class _AnsibleCollectionPkgLoader(_AnsibleCollectionPkgLoaderBase):
def _validate_args(self):
super(_AnsibleCollectionPkgLoader, self)._validate_args()
if len(self._split_name) != 3:
raise ImportError('this loader can only load collection packages, not {0}'.format(self._fullname))
def _validate_final(self):
if self._split_name[1:3] == ['ansible', 'builtin']:
# we don't want to allow this one to have on-disk search capability
self._subpackage_search_paths = []
elif not self._subpackage_search_paths:
raise ImportError('no {0} found in {1}'.format(self._package_to_load, self._candidate_paths))
else:
# only search within the first collection we found
self._subpackage_search_paths = [self._subpackage_search_paths[0]]
def load_module(self, fullname):
if not _meta_yml_to_dict:
raise ValueError('ansible.utils.collection_loader._meta_yml_to_dict is not set')
module = super(_AnsibleCollectionPkgLoader, self).load_module(fullname)
module._collection_meta = {}
# TODO: load collection metadata, cache in __loader__ state
collection_name = '.'.join(self._split_name[1:3])
if collection_name == 'ansible.builtin':
# ansible.builtin is a synthetic collection, get its routing config from the Ansible distro
ansible_pkg_path = os.path.dirname(import_module('ansible').__file__)
metadata_path = os.path.join(ansible_pkg_path, 'config/ansible_builtin_runtime.yml')
with open(to_bytes(metadata_path), 'rb') as fd:
raw_routing = fd.read()
else:
b_routing_meta_path = to_bytes(os.path.join(module.__path__[0], 'meta/runtime.yml'))
if os.path.isfile(b_routing_meta_path):
with open(b_routing_meta_path, 'rb') as fd:
raw_routing = fd.read()
else:
raw_routing = ''
try:
if raw_routing:
routing_dict = _meta_yml_to_dict(raw_routing, (collection_name, 'runtime.yml'))
module._collection_meta = self._canonicalize_meta(routing_dict)
except Exception as ex:
raise ValueError('error parsing collection metadata: {0}'.format(to_native(ex)))
AnsibleCollectionConfig.on_collection_load.fire(collection_name=collection_name, collection_path=os.path.dirname(module.__file__))
return module
def _canonicalize_meta(self, meta_dict):
# TODO: rewrite import keys and all redirect targets that start with .. (current namespace) and . (current collection)
# OR we could do it all on the fly?
# if not meta_dict:
# return {}
#
# ns_name = '.'.join(self._split_name[0:2])
# collection_name = '.'.join(self._split_name[0:3])
#
# #
# for routing_type, routing_type_dict in iteritems(meta_dict.get('plugin_routing', {})):
# for plugin_key, plugin_dict in iteritems(routing_type_dict):
# redirect = plugin_dict.get('redirect', '')
# if redirect.startswith('..'):
# redirect = redirect[2:]
action_groups = meta_dict.pop('action_groups', {})
meta_dict['action_groups'] = {}
for group_name in action_groups:
for action_name in action_groups[group_name]:
if action_name in meta_dict['action_groups']:
meta_dict['action_groups'][action_name].append(group_name)
else:
meta_dict['action_groups'][action_name] = [group_name]
return meta_dict
# loads everything under a collection, including handling redirections defined by the collection
class _AnsibleCollectionLoader(_AnsibleCollectionPkgLoaderBase):
# HACK: stash this in a better place
_redirected_package_map = {}
_allows_package_code = True
def _validate_args(self):
super(_AnsibleCollectionLoader, self)._validate_args()
if len(self._split_name) < 4:
raise ValueError('this loader is only for sub-collection modules/packages, not {0}'.format(self._fullname))
def _get_candidate_paths(self, path_list):
if len(path_list) != 1 and self._split_name[1:3] != ['ansible', 'builtin']:
raise ValueError('this loader requires exactly one path to search')
return path_list
def _get_subpackage_search_paths(self, candidate_paths):
collection_name = '.'.join(self._split_name[1:3])
collection_meta = _get_collection_metadata(collection_name)
# check for explicit redirection, as well as ancestor package-level redirection (only load the actual code once!)
redirect = None
explicit_redirect = False
routing_entry = _nested_dict_get(collection_meta, ['import_redirection', self._fullname])
if routing_entry:
redirect = routing_entry.get('redirect')
if redirect:
explicit_redirect = True
else:
redirect = _get_ancestor_redirect(self._redirected_package_map, self._fullname)
# NB: package level redirection requires hooking all future imports beneath the redirected source package
# in order to ensure sanity on future relative imports. We always import everything under its "real" name,
# then add a sys.modules entry with the redirected name using the same module instance. If we naively imported
# the source for each redirection, most submodules would import OK, but we'd have N runtime copies of the module
# (one for each name), and relative imports that ascend above the redirected package would break (since they'd
# see the redirected ancestor package contents instead of the package where they actually live).
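# Illustrative: if metadata redirects 'ansible_collections.ns.coll.plugins.module_utils.old'
# to '...module_utils.new', importing the old name loads 'new' exactly once, then adds a
# sys.modules alias under the old name pointing at the same module object.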
if redirect:
# FIXME: wrap this so we can be explicit about a failed redirection
self._redirect_module = import_module(redirect)
if explicit_redirect and hasattr(self._redirect_module, '__path__') and self._redirect_module.__path__:
# if the import target looks like a package, store its name so we can rewrite future descendant loads
self._redirected_package_map[self._fullname] = redirect
# if we redirected, don't do any further custom package logic
return None
# we're not doing a redirect- try to find what we need to actually load a module/package
# this will raise ImportError if we can't find the requested module/package at all
if not candidate_paths:
# no place to look, just raise ImportError
raise ImportError('package has no paths')
found_path, has_code, package_path = self._module_file_from_path(self._package_to_load, candidate_paths[0])
# still here? we found something to load...
if has_code:
self._source_code_path = found_path
if package_path:
return [package_path] # always needs to be a list
return None
# This loader only answers for intercepted Ansible Python modules. Normal imports will fail here and be picked up later
# by our path_hook importer (which proxies the built-in import mechanisms, allowing normal caching etc to occur)
class _AnsibleInternalRedirectLoader:
def __init__(self, fullname, path_list):
self._redirect = None
split_name = fullname.split('.')
toplevel_pkg = split_name[0]
module_to_load = split_name[-1]
if toplevel_pkg != 'ansible':
raise ImportError('not interested')
builtin_meta = _get_collection_metadata('ansible.builtin')
routing_entry = _nested_dict_get(builtin_meta, ['import_redirection', fullname])
if routing_entry:
self._redirect = routing_entry.get('redirect')
if not self._redirect:
raise ImportError('not redirected, go ask path_hook')
def load_module(self, fullname):
# since we're delegating to other loaders, this should only be called for internal redirects where we answered
# find_module with this loader, in which case we'll just directly import the redirection target, insert it into
# sys.modules under the name it was requested by, and return the original module.
# should never see this
if not self._redirect:
raise ValueError('no redirect found for {0}'.format(fullname))
# FIXME: smuggle redirection context, provide warning/error that we tried and failed to redirect
mod = import_module(self._redirect)
sys.modules[fullname] = mod
return mod
class AnsibleCollectionRef:
# FUTURE: introspect plugin loaders to get these dynamically?
VALID_REF_TYPES = frozenset(to_text(r) for r in ['action', 'become', 'cache', 'callback', 'cliconf', 'connection',
'doc_fragments', 'filter', 'httpapi', 'inventory', 'lookup',
'module_utils', 'modules', 'netconf', 'role', 'shell', 'strategy',
'terminal', 'test', 'vars', 'playbook'])
# FIXME: tighten this up to match Python identifier reqs, etc
VALID_COLLECTION_NAME_RE = re.compile(to_text(r'^(\w+)\.(\w+)$'))
VALID_SUBDIRS_RE = re.compile(to_text(r'^\w+(\.\w+)*$'))
VALID_FQCR_RE = re.compile(to_text(r'^\w+\.\w+\.\w+(\.\w+)*$')) # can have 0-N included subdirs as well
def __init__(self, collection_name, subdirs, resource, ref_type):
"""
Create an AnsibleCollectionRef from components
:param collection_name: a collection name of the form 'namespace.collectionname'
:param subdirs: optional subdir segments to be appended below the plugin type (eg, 'subdir1.subdir2')
:param resource: the name of the resource being referenced (eg, 'mymodule', 'someaction', 'a_role')
:param ref_type: the type of the reference, eg 'module', 'role', 'doc_fragment'
"""
collection_name = to_text(collection_name, errors='strict')
if subdirs is not None:
subdirs = to_text(subdirs, errors='strict')
resource = to_text(resource, errors='strict')
ref_type = to_text(ref_type, errors='strict')
if not self.is_valid_collection_name(collection_name):
raise ValueError('invalid collection name (must be of the form namespace.collection): {0}'.format(to_native(collection_name)))
if ref_type not in self.VALID_REF_TYPES:
raise ValueError('invalid collection ref_type: {0}'.format(ref_type))
self.collection = collection_name
if subdirs:
if not re.match(self.VALID_SUBDIRS_RE, subdirs):
raise ValueError('invalid subdirs entry: {0} (must be empty/None or of the form subdir1.subdir2)'.format(to_native(subdirs)))
self.subdirs = subdirs
else:
self.subdirs = u''
self.resource = resource
self.ref_type = ref_type
package_components = [u'ansible_collections', self.collection]
fqcr_components = [self.collection]
self.n_python_collection_package_name = to_native('.'.join(package_components))
if self.ref_type == u'role':
package_components.append(u'roles')
elif self.ref_type == u'playbook':
package_components.append(u'playbooks')
else:
# we assume it's a plugin
package_components += [u'plugins', self.ref_type]
if self.subdirs:
package_components.append(self.subdirs)
fqcr_components.append(self.subdirs)
if self.ref_type in (u'role', u'playbook'):
# playbooks and roles are their own resource
package_components.append(self.resource)
fqcr_components.append(self.resource)
self.n_python_package_name = to_native('.'.join(package_components))
self._fqcr = u'.'.join(fqcr_components)
def __repr__(self):
return 'AnsibleCollectionRef(collection={0!r}, subdirs={1!r}, resource={2!r})'.format(self.collection, self.subdirs, self.resource)
@property
def fqcr(self):
return self._fqcr
@staticmethod
def from_fqcr(ref, ref_type):
"""
Parse a string as a fully-qualified collection reference, raises ValueError if invalid
:param ref: collection reference to parse (a valid ref is of the form 'ns.coll.resource' or 'ns.coll.subdir1.subdir2.resource')
:param ref_type: the type of the reference, eg 'module', 'role', 'doc_fragment'
:return: a populated AnsibleCollectionRef object
"""
# assuming the fq_name is of the form (ns).(coll).(optional_subdir_N).(resource_name),
# we split the resource name off the right, split ns and coll off the left, and we're left with any optional
# subdirs that need to be added back below the plugin-specific subdir we'll add. So:
# ns.coll.resource -> ansible_collections.ns.coll.plugins.(plugintype).resource
# ns.coll.subdir1.resource -> ansible_collections.ns.coll.plugins.subdir1.(plugintype).resource
# ns.coll.rolename -> ansible_collections.ns.coll.roles.rolename
if not AnsibleCollectionRef.is_valid_fqcr(ref):
raise ValueError('{0} is not a valid collection reference'.format(to_native(ref)))
ref = to_text(ref, errors='strict')
ref_type = to_text(ref_type, errors='strict')
ext = ''
if ref_type == u'playbook' and ref.endswith(PB_EXTENSIONS):
resource_splitname = ref.rsplit(u'.', 2)
package_remnant = resource_splitname[0]
resource = resource_splitname[1]
ext = '.' + resource_splitname[2]
else:
resource_splitname = ref.rsplit(u'.', 1)
package_remnant = resource_splitname[0]
resource = resource_splitname[1]
# split the left two components of the collection package name off, anything remaining is plugin-type
# specific subdirs to be added back on below the plugin type
package_splitname = package_remnant.split(u'.', 2)
if len(package_splitname) == 3:
subdirs = package_splitname[2]
else:
subdirs = u''
collection_name = u'.'.join(package_splitname[0:2])
return AnsibleCollectionRef(collection_name, subdirs, resource + ext, ref_type)
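# Illustrative (hypothetical names): from_fqcr('ns.coll.foo.yml', 'playbook') keeps the
# extension on the resource (resource='foo.yml', subdirs=''), while
# from_fqcr('ns.coll.subdir1.mymodule', 'modules') yields subdirs='subdir1', resource='mymodule'.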
@staticmethod
def try_parse_fqcr(ref, ref_type):
"""
Attempt to parse a string as a fully-qualified collection reference, returning None on failure (instead of raising an error)
:param ref: collection reference to parse (a valid ref is of the form 'ns.coll.resource' or 'ns.coll.subdir1.subdir2.resource')
:param ref_type: the type of the reference, eg 'module', 'role', 'doc_fragment'
:return: a populated AnsibleCollectionRef object on successful parsing, else None
"""
try:
return AnsibleCollectionRef.from_fqcr(ref, ref_type)
except ValueError:
pass
@staticmethod
def legacy_plugin_dir_to_plugin_type(legacy_plugin_dir_name):
"""
Utility method to convert from a PluginLoader dir name to a plugin ref_type
:param legacy_plugin_dir_name: PluginLoader dir name (eg, 'action_plugins', 'library')
:return: the corresponding plugin ref_type (eg, 'action', 'role')
"""
legacy_plugin_dir_name = to_text(legacy_plugin_dir_name)
plugin_type = legacy_plugin_dir_name.replace(u'_plugins', u'')
if plugin_type == u'library':
plugin_type = u'modules'
if plugin_type not in AnsibleCollectionRef.VALID_REF_TYPES:
raise ValueError('{0} cannot be mapped to a valid collection ref type'.format(to_native(legacy_plugin_dir_name)))
return plugin_type
@staticmethod
def is_valid_fqcr(ref, ref_type=None):
"""
Validates whether a string is a well-formed fully-qualified collection reference (does not look up the collection itself)
:param ref: candidate collection reference to validate (a valid ref is of the form 'ns.coll.resource' or 'ns.coll.subdir1.subdir2.resource')
:param ref_type: optional reference type to enable deeper validation, eg 'module', 'role', 'doc_fragment'
:return: True if the collection ref passed is well-formed, False otherwise
"""
ref = to_text(ref)
if not ref_type:
return bool(re.match(AnsibleCollectionRef.VALID_FQCR_RE, ref))
return bool(AnsibleCollectionRef.try_parse_fqcr(ref, ref_type))
@staticmethod
def is_valid_collection_name(collection_name):
"""
Validates whether the given string is a well-formed collection name (does not look up the collection itself)
:param collection_name: candidate collection name to validate (a valid name is of the form 'ns.collname')
:return: True if the collection name passed is well-formed, False otherwise
"""
collection_name = to_text(collection_name)
return bool(re.match(AnsibleCollectionRef.VALID_COLLECTION_NAME_RE, collection_name))
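# Usage sketch (illustrative; names are hypothetical):
#   AnsibleCollectionRef.from_fqcr('testns.testcoll.my_role', 'role').n_python_package_name
#     -> 'ansible_collections.testns.testcoll.roles.my_role'
#   AnsibleCollectionRef.from_fqcr('testns.testcoll.mymodule', 'modules').n_python_package_name
#     -> 'ansible_collections.testns.testcoll.plugins.modules'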
def _get_collection_playbook_path(playbook):
acr = AnsibleCollectionRef.try_parse_fqcr(playbook, u'playbook')
if acr:
try:
# get_collection_path
pkg = import_module(acr.n_python_collection_package_name)
except (IOError, ModuleNotFoundError) as e:
# leaving e as debug target, even though not used in normal code
pkg = None
if pkg:
cpath = os.path.join(sys.modules[acr.n_python_collection_package_name].__file__.replace('__synthetic__', 'playbooks'))
path = os.path.join(cpath, to_native(acr.resource))
if os.path.exists(to_bytes(path)):
return acr.resource, path, acr.collection
elif not acr.resource.endswith(PB_EXTENSIONS):
for ext in PB_EXTENSIONS:
path = os.path.join(cpath, to_native(acr.resource + ext))
if os.path.exists(to_bytes(path)):
return acr.resource, path, acr.collection
return None
def _get_collection_role_path(role_name, collection_list=None):
return _get_collection_resource_path(role_name, u'role', collection_list)
def _get_collection_resource_path(name, ref_type, collection_list=None):
if ref_type == u'playbook':
# playbooks are handled a bit differently due to extension variance and the lack of a collection_list
return _get_collection_playbook_path(name)
acr = AnsibleCollectionRef.try_parse_fqcr(name, ref_type)
if acr:
# looks like a valid qualified collection ref; skip the collection_list
collection_list = [acr.collection]
subdirs = acr.subdirs
resource = acr.resource
elif not collection_list:
return None # not a FQ and no collection search list spec'd, nothing to do
else:
resource = name # treat as unqualified, loop through the collection search list to try and resolve
subdirs = ''
for collection_name in collection_list:
try:
acr = AnsibleCollectionRef(collection_name=collection_name, subdirs=subdirs, resource=resource, ref_type=ref_type)
# FIXME: error handling/logging; need to catch any import failures and move along
pkg = import_module(acr.n_python_package_name)
if pkg is not None:
# the package is now loaded, get the collection's package and ask where it lives
path = os.path.dirname(to_bytes(sys.modules[acr.n_python_package_name].__file__, errors='surrogate_or_strict'))
return resource, to_text(path, errors='surrogate_or_strict'), collection_name
except (IOError, ModuleNotFoundError) as e:
continue
except Exception as ex:
# FIXME: pick out typical import errors first, then error logging
continue
return None
def _get_collection_name_from_path(path):
"""
Return the containing collection name for a given path, or None if the path is not below a configured collection, or
the collection cannot be loaded (eg, the collection is masked by another of the same name higher in the configured
collection roots).
:param path: path to evaluate for collection containment
:return: collection name or None
"""
# ensure we compare full paths since pkg path will be abspath
path = to_native(os.path.abspath(to_bytes(path)))
path_parts = path.split('/')
if path_parts.count('ansible_collections') != 1:
return None
ac_pos = path_parts.index('ansible_collections')
# make sure it's followed by at least a namespace and collection name
if len(path_parts) < ac_pos + 3:
return None
candidate_collection_name = '.'.join(path_parts[ac_pos + 1:ac_pos + 3])
try:
# we've got a name for it, now see if the path prefix matches what the loader sees
imported_pkg_path = to_native(os.path.dirname(to_bytes(import_module('ansible_collections.' + candidate_collection_name).__file__)))
except ImportError:
return None
# reassemble the original path prefix up to the collection name; it should match what we just imported. If it
# doesn't, this is probably a collection root that's not configured.
original_path_prefix = os.path.join('/', *path_parts[0:ac_pos + 3])
imported_pkg_path = to_native(os.path.abspath(to_bytes(imported_pkg_path)))
if original_path_prefix != imported_pkg_path:
return None
return candidate_collection_name
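# Illustrative: with a configured root '/cr', a path like
# '/cr/ansible_collections/testns/testcoll/plugins/modules/m.py' resolves to 'testns.testcoll',
# but returns None if the importable testns.testcoll actually lives under a different root.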
def _get_import_redirect(collection_meta_dict, fullname):
if not collection_meta_dict:
return None
return _nested_dict_get(collection_meta_dict, ['import_redirection', fullname, 'redirect'])
def _get_ancestor_redirect(redirected_package_map, fullname):
# walk the requested module's ancestor packages to see if any have been previously redirected
cur_pkg = fullname
while cur_pkg:
cur_pkg = cur_pkg.rpartition('.')[0]
ancestor_redirect = redirected_package_map.get(cur_pkg)
if ancestor_redirect:
# rewrite the prefix on fullname so we import the target first, then alias it
redirect = ancestor_redirect + fullname[len(cur_pkg):]
return redirect
return None
def _nested_dict_get(root_dict, key_list):
cur_value = root_dict
for key in key_list:
cur_value = cur_value.get(key)
if not cur_value:
return None
return cur_value
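# Illustrative: _nested_dict_get({'a': {'b': 1}}, ['a', 'b']) -> 1; any missing or
# falsey intermediate value short-circuits to None.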
def _iter_modules_impl(paths, prefix=''):
# NB: this currently only iterates what's on disk; redirected modules are not considered
if not prefix:
prefix = ''
else:
prefix = to_native(prefix)
# yield (module_loader, name, ispkg) for each module/pkg under path
# TODO: implement ignore/silent catch for unreadable?
for b_path in map(to_bytes, paths):
if not os.path.isdir(b_path):
continue
for b_basename in sorted(os.listdir(b_path)):
b_candidate_module_path = os.path.join(b_path, b_basename)
if os.path.isdir(b_candidate_module_path):
# exclude things that obviously aren't Python package dirs
# FIXME: this dir is adjustable in py3.8+, check for it
if b'.' in b_basename or b_basename == b'__pycache__':
continue
# TODO: proper string handling?
yield prefix + to_native(b_basename), True
else:
# FIXME: match builtin ordering for package/dir/file, support compiled?
if b_basename.endswith(b'.py') and b_basename != b'__init__.py':
yield prefix + to_native(os.path.splitext(b_basename)[0]), False
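# Illustrative: for a dir containing package dir 'bar/' and file 'foo.py',
# _iter_modules_impl([path], 'pfx.') yields ('pfx.bar', True), then ('pfx.foo', False).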
def _get_collection_metadata(collection_name):
collection_name = to_native(collection_name)
if not collection_name or not isinstance(collection_name, string_types) or len(collection_name.split('.')) != 2:
raise ValueError('collection_name must be a non-empty string of the form namespace.collection')
try:
collection_pkg = import_module('ansible_collections.' + collection_name)
except ImportError:
raise ValueError('unable to locate collection {0}'.format(collection_name))
_collection_meta = getattr(collection_pkg, '_collection_meta', None)
if _collection_meta is None:
raise ValueError('collection metadata was not loaded for collection {0}'.format(collection_name))
return _collection_meta
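# Minimal end-to-end sketch (illustrative; the path is hypothetical):
#   from ansible.utils.collection_loader._collection_finder import _AnsibleCollectionFinder
#   finder = _AnsibleCollectionFinder(paths=['/home/user/.ansible/collections'])
#   finder._install()  # hooks sys.meta_path and sys.path_hooks
#   import ansible_collections.testns.testcoll  # now served by the loaders above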
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,628 |
ansible-galaxy installs collection to path ansible-playbook doesn't use if collections_path ends with 'ansible_collections'
|
##### SUMMARY
<!--- Explain the problem briefly below -->
I have a collection that contains two roles. I have installed the collection from a private GitHub repo using `ansible-galaxy` via a requirements.yml file. I have a playbook that calls the collection, the two roles from the collection and a third party role.
I have tried multiple ways to refer to the collection roles in the playbook, but each time it errors with `ERROR! the role 'X' was not found in <search paths>`
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ansible-playbook
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.3
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.8 (default, Apr 16 2020, 01:36:27) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
COLLECTIONS_PATHS(/etc/ansible/ansible.cfg) = ['/home/ansible_collections']
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = ['/home/ansible_roles']
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
CentOS 8
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
I have installed my collection `mynamespace.my_collection` which simply contains two roles `mynamespace.my_role1` and `mynamespace.my_role2`. This is listed in the `requirements.yml` file detailed below and is pulled from our private GitHub repo. The requirements are installed using:
```
ansible-galaxy collection install -r /home/ansible/requirements.yml --force
ansible-galaxy role install -r /home/ansible/requirements.yml --force
```
The playbook is run using `ansible-playbook play.yml`. I have tried defining the collection roles in the playbook in all ways I can think of including:
* `mynamespace.my_collection.my_role1`
* `mynamespace.my_role1`
* `my_role1`
<!--- Paste example playbooks or commands between quotes below -->
`play.yml`
```yaml
---
- hosts: all
collections:
- mynamespace.my_collection
roles:
- mynamespace.my_collection.my_role1
- mynamespace.my_collection.my_role2
- geerlingguy.repo-remi
```
`requirements.yml`
```yaml
---
collections:
- name: [email protected]:mynamespace/my_collection.git
roles:
- name: geerlingguy.repo-remi
version: "2.0.1"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
According to [this link](https://docs.ansible.com/ansible/latest/dev_guide/migrating_roles.html#comparing-standalone-roles-to-collection-roles) the FQCN for the role should be valid and the playbook should complete.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
`ansible-playbook` errors saying that the role could not be found. I did post this over at [StackOverflow](https://stackoverflow.com/q/64836917/14638922). I absolutely expected someone to call me out on something I'd done wrong at this point but that hasn't happened yet. My understanding (I've only been using Ansible for a week or two) is that this should work.
<!--- Paste verbatim command output between quotes -->
```paste below
ansible-playbook 2.10.3
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
executable location = /usr/local/bin/ansible-playbook
python version = 3.6.8 (default, Apr 16 2020, 01:36:27) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
ERROR! the role 'mynamespace.my_collection.my_role1' was not found in mynamespace.my_collection:ansible.legacy:/home/ansible/roles:/home/ansible_roles:/home/ansible
The error appears to be in '/home/ansible/play.yml': line 42, column 7, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
roles:
- mynamespace.my_collection.my_role1
^ here
```
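For reference, the failing role lookup ultimately goes through `AnsibleCollectionRef` in the collection loader; a minimal sketch of that resolution step (hypothetical names, assuming this tree's collection loader is importable):
```python
# Sketch: how a fully-qualified role name maps to a Python package name.
from ansible.utils.collection_loader import AnsibleCollectionRef

acr = AnsibleCollectionRef.try_parse_fqcr('mynamespace.my_collection.my_role1', 'role')
print(acr.n_python_package_name)
# -> ansible_collections.mynamespace.my_collection.roles.my_role1
# The role is found only if this package is importable from a configured
# collections root, i.e. <root>/ansible_collections/mynamespace/my_collection/
# exists on disk; a collections_path that itself ends in 'ansible_collections'
# likely breaks that layout (see the issue title).
```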
|
https://github.com/ansible/ansible/issues/72628
|
https://github.com/ansible/ansible/pull/72648
|
5157a92139b04fef32d38498815084a27adcd758
|
d22804c4fbb85010c4589836cd59284c2cf11f9e
| 2020-11-15T12:57:32Z |
python
| 2020-12-15T00:30:13Z |
test/integration/targets/collections/runme.sh
|
#!/usr/bin/env bash
set -eux
export ANSIBLE_COLLECTIONS_PATH=$PWD/collection_root_user:$PWD/collection_root_sys
export ANSIBLE_GATHERING=explicit
export ANSIBLE_GATHER_SUBSET=minimal
export ANSIBLE_HOST_PATTERN_MISMATCH=error
export ANSIBLE_COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH=0
# FUTURE: just use INVENTORY_PATH as-is once ansible-test sets the right dir
ipath=../../$(basename "${INVENTORY_PATH:-../../inventory}")
export INVENTORY_PATH="$ipath"
echo "--- validating callbacks"
# validate FQ callbacks in ansible-playbook
ANSIBLE_CALLBACKS_ENABLED=testns.testcoll.usercallback ansible-playbook noop.yml | grep "usercallback says ok"
# use adhoc for the rest of these tests, must force it to load other callbacks
export ANSIBLE_LOAD_CALLBACK_PLUGINS=1
# validate redirected callback
ANSIBLE_CALLBACKS_ENABLED=formerly_core_callback ansible localhost -m debug 2>&1 | grep -- "usercallback says ok"
## validate missing redirected callback
ANSIBLE_CALLBACKS_ENABLED=formerly_core_missing_callback ansible localhost -m debug 2>&1 | grep -- "Skipping callback plugin 'formerly_core_missing_callback'"
## validate redirected + removed callback (fatal)
ANSIBLE_CALLBACKS_ENABLED=formerly_core_removed_callback ansible localhost -m debug 2>&1 | grep -- "testns.testcoll.removedcallback has been removed"
# validate avoiding duplicate loading of callback, even if using diff names
[ "$(ANSIBLE_CALLBACKS_ENABLED=testns.testcoll.usercallback,formerly_core_callback ansible localhost -m debug 2>&1 | grep -c 'usercallback says ok')" = "1" ]
# ensure non existing callback does not crash ansible
ANSIBLE_CALLBACKS_ENABLED=charlie.gomez.notme ansible localhost -m debug 2>&1 | grep -- "Skipping callback plugin 'charlie.gomez.notme'"
unset ANSIBLE_LOAD_CALLBACK_PLUGINS
# adhoc normally shouldn't load non-default plugins- let's be sure
output=$(ANSIBLE_CALLBACKS_ENABLED=testns.testcoll.usercallback ansible localhost -m debug)
if [[ "${output}" =~ "usercallback says ok" ]]; then echo fail; exit 1; fi
echo "--- validating docs"
# test documentation
ansible-doc testns.testcoll.testmodule -vvv | grep -- "- normal_doc_frag"
# same with symlink
ln -s "${PWD}/testcoll2" ./collection_root_sys/ansible_collections/testns/testcoll2
ansible-doc testns.testcoll2.testmodule2 -vvv | grep "Test module"
# now test we can list with symlink
ansible-doc -l -vvv| grep "testns.testcoll2.testmodule2"
echo "testing bad doc_fragments (expected ERROR message follows)"
# test documentation failure
ansible-doc testns.testcoll.testmodule_bad_docfrags -vvv 2>&1 | grep -- "unknown doc_fragment"
echo "--- validating default collection"
# test adhoc default collection resolution (use unqualified collection module with playbook dir under its collection)
echo "testing adhoc default collection support with explicit playbook dir"
ANSIBLE_PLAYBOOK_DIR=./collection_root_user/ansible_collections/testns/testcoll ansible localhost -m testmodule
# we need multiple plays, and conditional import_playbook is noisy and causes problems, so choose here which one to use...
if [[ ${INVENTORY_PATH} == *.winrm ]]; then
export TEST_PLAYBOOK=windows.yml
else
export TEST_PLAYBOOK=posix.yml
echo "testing default collection support"
ansible-playbook -i "${INVENTORY_PATH}" collection_root_user/ansible_collections/testns/testcoll/playbooks/default_collection_playbook.yml "$@"
fi
echo "--- validating collections support in playbooks/roles"
# run test playbooks
ansible-playbook -i "${INVENTORY_PATH}" -v "${TEST_PLAYBOOK}" "$@"
if [[ ${INVENTORY_PATH} != *.winrm ]]; then
ansible-playbook -i "${INVENTORY_PATH}" -v invocation_tests.yml "$@"
fi
echo "--- validating bypass_host_loop with collection search"
ansible-playbook -i host1,host2, -v test_bypass_host_loop.yml "$@"
echo "--- validating inventory"
# test collection inventories
ansible-playbook inventory_test.yml -i a.statichost.yml -i redirected.statichost.yml "$@"
if [[ ${INVENTORY_PATH} != *.winrm ]]; then
# base invocation tests
ansible-playbook -i "${INVENTORY_PATH}" -v invocation_tests.yml "$@"
# run playbook from collection, test default again, but with FQCN
ansible-playbook -i "${INVENTORY_PATH}" testns.testcoll.default_collection_playbook.yml "$@"
# run playbook from collection, test default again, but with FQCN and no extension
ansible-playbook -i "${INVENTORY_PATH}" testns.testcoll.default_collection_playbook "$@"
# run playbook that imports from collection
ansible-playbook -i "${INVENTORY_PATH}" import_collection_pb.yml "$@"
fi
# test collection inventories
ansible-playbook inventory_test.yml -i a.statichost.yml -i redirected.statichost.yml "$@"
# test adjacent with --playbook-dir
export ANSIBLE_COLLECTIONS_PATH=''
ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED=1 ansible-inventory --list --export --playbook-dir=. -v "$@"
# use an inventory source with caching enabled
ansible-playbook -i a.statichost.yml -i ./cache.statichost.yml -v check_populated_inventory.yml
# Check that the inventory source with caching enabled was stored
if [[ "$(find ./inventory_cache -type f ! -path "./inventory_cache/.keep" | wc -l)" -ne "1" ]]; then
echo "Failed to find the expected single cache"
exit 1
fi
CACHEFILE="$(find ./inventory_cache -type f ! -path './inventory_cache/.keep')"
if [[ $CACHEFILE != ./inventory_cache/prefix_* ]]; then
echo "Unexpected cache file"
exit 1
fi
# Check the cache for the expected hosts
if [[ "$(grep -wc "cache_host_a" "$CACHEFILE")" -ne "1" ]]; then
echo "Failed to cache host as expected"
exit 1
fi
if [[ "$(grep -wc "dynamic_host_a" "$CACHEFILE")" -ne "0" ]]; then
echo "Cached an incorrect source"
exit 1
fi
./vars_plugin_tests.sh
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,628 |
ansible-galaxy installs collection to path ansible-playbook doesn't use if collections_path ends with 'ansible_collections'
|
##### SUMMARY
<!--- Explain the problem briefly below -->
I have a collection that contains two roles. I have installed the collection from a private GitHub repo using `ansible-galaxy` via a requirements.yml file. I have a playbook that calls the collection, the two roles from the collection and a third party role.
I have tried multiple ways to refer to the collection roles in the playbook, but each time it errors with `ERROR! the role 'X' was not found in <search paths>`
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ansible-playbook
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.3
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.8 (default, Apr 16 2020, 01:36:27) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
COLLECTIONS_PATHS(/etc/ansible/ansible.cfg) = ['/home/ansible_collections']
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = ['/home/ansible_roles']
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
CentOS 8
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
I have installed my collection `mynamespace.my_collection` which simply contains two roles `mynamespace.my_role1` and `mynamespace.my_role2`. This is listed in the `requirements.yml` file detailed below and is pulled from our private GitHub repo. The requirements are installed using:
```
ansible-galaxy collection install -r /home/ansible/requirements.yml --force
ansible-galaxy role install -r /home/ansible/requirements.yml --force
```
The playbook is run using `ansible-playbook play.yml`. I have tried defining the collection roles in the playbook in all ways I can think of including:
* `mynamespace.my_collection.my_role1`
* `mynamespace.my_role1`
* `my_role1`
<!--- Paste example playbooks or commands between quotes below -->
`play.yml`
```yaml
---
- hosts: all
collections:
- mynamespace.my_collection
roles:
- mynamespace.my_collection.my_role1
- mynamespace.my_collection.my_role2
- geerlingguy.repo-remi
```
`requirements.yml`
```yaml
---
collections:
- name: [email protected]:mynamespace/my_collection.git
roles:
- name: geerlingguy.repo-remi
version: "2.0.1"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
According to [this link](https://docs.ansible.com/ansible/latest/dev_guide/migrating_roles.html#comparing-standalone-roles-to-collection-roles) the FQCN for the role should be valid and the playbook should complete.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
`ansible-playbook` errors saying that the role could not be found. I did post this over at [StackOverflow](https://stackoverflow.com/q/64836917/14638922). I absolutely expected someone to call me out on something I'd done wrong at this point but that hasn't happened yet. My understanding (I've only been using Ansible for a week or two) is that this should work.
<!--- Paste verbatim command output between quotes -->
```paste below
ansible-playbook 2.10.3
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
executable location = /usr/local/bin/ansible-playbook
python version = 3.6.8 (default, Apr 16 2020, 01:36:27) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
ERROR! the role 'mynamespace.my_collection.my_role1' was not found in mynamespace.my_collection:ansible.legacy:/home/ansible/roles:/home/ansible_roles:/home/ansible
The error appears to be in '/home/ansible/play.yml': line 42, column 7, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
roles:
- mynamespace.my_collection.my_role1
^ here
```
|
https://github.com/ansible/ansible/issues/72628
|
https://github.com/ansible/ansible/pull/72648
|
5157a92139b04fef32d38498815084a27adcd758
|
d22804c4fbb85010c4589836cd59284c2cf11f9e
| 2020-11-15T12:57:32Z |
python
| 2020-12-15T00:30:13Z |
test/units/utils/collection_loader/test_collection_loader.py
|
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import pkgutil
import pytest
import re
import sys
from ansible.module_utils.six import PY3, string_types
from ansible.module_utils.compat.importlib import import_module
from ansible.utils.collection_loader import AnsibleCollectionConfig, AnsibleCollectionRef
from ansible.utils.collection_loader._collection_finder import (
_AnsibleCollectionFinder, _AnsibleCollectionLoader, _AnsibleCollectionNSPkgLoader, _AnsibleCollectionPkgLoader,
_AnsibleCollectionPkgLoaderBase, _AnsibleCollectionRootPkgLoader, _AnsiblePathHookFinder,
_get_collection_name_from_path, _get_collection_role_path, _get_collection_metadata, _iter_modules_impl
)
from ansible.utils.collection_loader._collection_config import _EventSource
from units.compat.mock import MagicMock, NonCallableMagicMock, patch
# fixture to ensure we always clean up the import stuff when we're done
@pytest.fixture(autouse=True, scope='function')
def teardown(*args, **kwargs):
yield
reset_collections_loader_state()
# BEGIN STANDALONE TESTS - these exercise behaviors of the individual components without the import machinery
def test_finder_setup():
# ensure scalar path is listified
f = _AnsibleCollectionFinder(paths='/bogus/bogus')
assert isinstance(f._n_collection_paths, list)
# ensure sys.path paths that have an ansible_collections dir are added to the end of the collections paths
with patch.object(sys, 'path', ['/bogus', default_test_collection_paths[1], '/morebogus', default_test_collection_paths[0]]):
f = _AnsibleCollectionFinder(paths=['/explicit', '/other'])
assert f._n_collection_paths == ['/explicit', '/other', default_test_collection_paths[1], default_test_collection_paths[0]]
configured_paths = ['/bogus']
playbook_paths = ['/playbookdir']
f = _AnsibleCollectionFinder(paths=configured_paths)
assert f._n_collection_paths == configured_paths
f.set_playbook_paths(playbook_paths)
assert f._n_collection_paths == extend_paths(playbook_paths, 'collections') + configured_paths
# ensure scalar playbook_paths gets listified
f.set_playbook_paths(playbook_paths[0])
assert f._n_collection_paths == extend_paths(playbook_paths, 'collections') + configured_paths
def test_finder_not_interested():
f = get_default_finder()
assert f.find_module('nothanks') is None
assert f.find_module('nothanks.sub', path=['/bogus/dir']) is None
def test_finder_ns():
# ensure we can still load ansible_collections and ansible_collections.ansible when they don't exist on disk
f = _AnsibleCollectionFinder(paths=['/bogus/bogus'])
loader = f.find_module('ansible_collections')
assert isinstance(loader, _AnsibleCollectionRootPkgLoader)
loader = f.find_module('ansible_collections.ansible', path=['/bogus/bogus'])
assert isinstance(loader, _AnsibleCollectionNSPkgLoader)
f = get_default_finder()
loader = f.find_module('ansible_collections')
assert isinstance(loader, _AnsibleCollectionRootPkgLoader)
# path is not allowed for top-level
with pytest.raises(ValueError):
f.find_module('ansible_collections', path=['whatever'])
# path is required for subpackages
with pytest.raises(ValueError):
f.find_module('ansible_collections.whatever', path=None)
paths = [os.path.join(p, 'ansible_collections/nonexistns') for p in default_test_collection_paths]
# test missing
loader = f.find_module('ansible_collections.nonexistns', paths)
assert loader is None
# keep these up top to make sure the loader install/remove are working, since we rely on them heavily in the tests
def test_loader_remove():
fake_mp = [MagicMock(), _AnsibleCollectionFinder(), MagicMock(), _AnsibleCollectionFinder()]
fake_ph = [MagicMock().m1, MagicMock().m2, _AnsibleCollectionFinder()._ansible_collection_path_hook, NonCallableMagicMock]
# must nest until 2.6 compilation is totally donezo
with patch.object(sys, 'meta_path', fake_mp):
with patch.object(sys, 'path_hooks', fake_ph):
_AnsibleCollectionFinder()._remove()
assert len(sys.meta_path) == 2
# no AnsibleCollectionFinders on the meta path after remove is called
assert all((not isinstance(mpf, _AnsibleCollectionFinder) for mpf in sys.meta_path))
assert len(sys.path_hooks) == 3
# none of the remaining path hooks should point at an AnsibleCollectionFinder
assert all((not isinstance(ph.__self__, _AnsibleCollectionFinder) for ph in sys.path_hooks if hasattr(ph, '__self__')))
assert AnsibleCollectionConfig.collection_finder is None
def test_loader_install():
fake_mp = [MagicMock(), _AnsibleCollectionFinder(), MagicMock(), _AnsibleCollectionFinder()]
fake_ph = [MagicMock().m1, MagicMock().m2, _AnsibleCollectionFinder()._ansible_collection_path_hook, NonCallableMagicMock]
# must nest until 2.6 compilation is totally donezo
with patch.object(sys, 'meta_path', fake_mp):
with patch.object(sys, 'path_hooks', fake_ph):
f = _AnsibleCollectionFinder()
f._install()
assert len(sys.meta_path) == 3 # should have removed the existing ACFs and installed a new one
assert sys.meta_path[0] is f # at the front
# the rest of the meta_path should not be AnsibleCollectionFinders
assert all((not isinstance(mpf, _AnsibleCollectionFinder) for mpf in sys.meta_path[1:]))
assert len(sys.path_hooks) == 4 # should have removed the existing ACF path hooks and installed a new one
# the first path hook should be ours, make sure it's pointing at the right instance
assert hasattr(sys.path_hooks[0], '__self__') and sys.path_hooks[0].__self__ is f
# the rest of the path_hooks should not point at an AnsibleCollectionFinder
assert all((not isinstance(ph.__self__, _AnsibleCollectionFinder) for ph in sys.path_hooks[1:] if hasattr(ph, '__self__')))
assert AnsibleCollectionConfig.collection_finder is f
with pytest.raises(ValueError):
AnsibleCollectionConfig.collection_finder = f
def test_finder_coll():
f = get_default_finder()
tests = [
{'name': 'ansible_collections.testns.testcoll', 'test_paths': [default_test_collection_paths]},
{'name': 'ansible_collections.ansible.builtin', 'test_paths': [['/bogus'], default_test_collection_paths]},
]
# ensure finder works for legit paths and bogus paths
for test_dict in tests:
# splat the dict values to our locals
globals().update(test_dict)
parent_pkg = name.rpartition('.')[0]
for paths in test_paths:
paths = [os.path.join(p, parent_pkg.replace('.', '/')) for p in paths]
loader = f.find_module(name, path=paths)
assert isinstance(loader, _AnsibleCollectionPkgLoader)
def test_root_loader_not_interested():
with pytest.raises(ImportError):
_AnsibleCollectionRootPkgLoader('not_ansible_collections_toplevel', path_list=[])
with pytest.raises(ImportError):
_AnsibleCollectionRootPkgLoader('ansible_collections.somens', path_list=['/bogus'])
def test_root_loader():
name = 'ansible_collections'
# ensure this works even when ansible_collections doesn't exist on disk
for paths in [], default_test_collection_paths:
if name in sys.modules:
del sys.modules[name]
loader = _AnsibleCollectionRootPkgLoader(name, paths)
assert repr(loader).startswith('_AnsibleCollectionRootPkgLoader(path=')
module = loader.load_module(name)
assert module.__name__ == name
assert module.__path__ == [p for p in extend_paths(paths, name) if os.path.isdir(p)]
# even if the dir exists somewhere, this loader doesn't support get_data, so make __file__ a non-file
assert module.__file__ == '<ansible_synthetic_collection_package>'
assert module.__package__ == name
assert sys.modules.get(name) == module
def test_nspkg_loader_not_interested():
with pytest.raises(ImportError):
_AnsibleCollectionNSPkgLoader('not_ansible_collections_toplevel.something', path_list=[])
with pytest.raises(ImportError):
_AnsibleCollectionNSPkgLoader('ansible_collections.somens.somecoll', path_list=[])
def test_nspkg_loader_load_module():
# ensure the loader behaves on the toplevel and ansible packages for both legit and missing/bogus paths
for name in ['ansible_collections.ansible', 'ansible_collections.testns']:
parent_pkg = name.partition('.')[0]
module_to_load = name.rpartition('.')[2]
paths = extend_paths(default_test_collection_paths, parent_pkg)
existing_child_paths = [p for p in extend_paths(paths, module_to_load) if os.path.exists(p)]
if name in sys.modules:
del sys.modules[name]
loader = _AnsibleCollectionNSPkgLoader(name, path_list=paths)
assert repr(loader).startswith('_AnsibleCollectionNSPkgLoader(path=')
module = loader.load_module(name)
assert module.__name__ == name
assert isinstance(module.__loader__, _AnsibleCollectionNSPkgLoader)
assert module.__path__ == existing_child_paths
assert module.__package__ == name
assert module.__file__ == '<ansible_synthetic_collection_package>'
assert sys.modules.get(name) == module
def test_collpkg_loader_not_interested():
with pytest.raises(ImportError):
_AnsibleCollectionPkgLoader('not_ansible_collections', path_list=[])
with pytest.raises(ImportError):
_AnsibleCollectionPkgLoader('ansible_collections.ns', path_list=['/bogus/bogus'])
def test_collpkg_loader_load_module():
reset_collections_loader_state()
with patch('ansible.utils.collection_loader.AnsibleCollectionConfig') as p:
for name in ['ansible_collections.ansible.builtin', 'ansible_collections.testns.testcoll']:
parent_pkg = name.rpartition('.')[0]
module_to_load = name.rpartition('.')[2]
paths = extend_paths(default_test_collection_paths, parent_pkg)
existing_child_paths = [p for p in extend_paths(paths, module_to_load) if os.path.exists(p)]
is_builtin = 'ansible.builtin' in name
if name in sys.modules:
del sys.modules[name]
loader = _AnsibleCollectionPkgLoader(name, path_list=paths)
assert repr(loader).startswith('_AnsibleCollectionPkgLoader(path=')
module = loader.load_module(name)
assert module.__name__ == name
assert isinstance(module.__loader__, _AnsibleCollectionPkgLoader)
if is_builtin:
assert module.__path__ == []
else:
assert module.__path__ == [existing_child_paths[0]]
assert module.__package__ == name
if is_builtin:
assert module.__file__ == '<ansible_synthetic_collection_package>'
else:
assert module.__file__.endswith('__synthetic__') and os.path.isdir(os.path.dirname(module.__file__))
assert sys.modules.get(name) == module
assert hasattr(module, '_collection_meta') and isinstance(module._collection_meta, dict)
# FIXME: validate _collection_meta contents match what's on disk (or not)
# if the module has metadata, try loading it with busted metadata
if module._collection_meta:
_collection_finder = import_module('ansible.utils.collection_loader._collection_finder')
with patch.object(_collection_finder, '_meta_yml_to_dict', side_effect=Exception('bang')):
with pytest.raises(Exception) as ex:
_AnsibleCollectionPkgLoader(name, path_list=paths).load_module(name)
assert 'error parsing collection metadata' in str(ex.value)
def test_coll_loader():
with patch('ansible.utils.collection_loader.AnsibleCollectionConfig'):
with pytest.raises(ValueError):
# not a collection
_AnsibleCollectionLoader('ansible_collections')
with pytest.raises(ValueError):
# bogus paths
_AnsibleCollectionLoader('ansible_collections.testns.testcoll', path_list=[])
# FIXME: more
def test_path_hook_setup():
with patch.object(sys, 'path_hooks', []):
found_hook = None
pathhook_exc = None
try:
found_hook = _AnsiblePathHookFinder._get_filefinder_path_hook()
except Exception as phe:
pathhook_exc = phe
if PY3:
assert str(pathhook_exc) == 'need exactly one FileFinder import hook (found 0)'
else:
assert found_hook is None
assert repr(_AnsiblePathHookFinder(object(), '/bogus/path')) == "_AnsiblePathHookFinder(path='/bogus/path')"
def test_path_hook_importerror():
# ensure that AnsiblePathHookFinder.find_module swallows ImportError from path hook delegation on Py3, eg if the delegated
# path hook gets passed a file on sys.path (python36.zip)
reset_collections_loader_state()
path_to_a_file = os.path.join(default_test_collection_paths[0], 'ansible_collections/testns/testcoll/plugins/action/my_action.py')
# it's a bug if the following pops an ImportError...
assert _AnsiblePathHookFinder(_AnsibleCollectionFinder(), path_to_a_file).find_module('foo.bar.my_action') is None
def test_new_or_existing_module():
module_name = 'blar.test.module'
pkg_name = module_name.rpartition('.')[0]
# create new module case
nuke_module_prefix(module_name)
with _AnsibleCollectionPkgLoaderBase._new_or_existing_module(module_name, __package__=pkg_name) as new_module:
# the module we just created should now exist in sys.modules
assert sys.modules.get(module_name) is new_module
assert new_module.__name__ == module_name
# the module should stick since we didn't raise an exception in the contextmgr
assert sys.modules.get(module_name) is new_module
# reuse existing module case
with _AnsibleCollectionPkgLoaderBase._new_or_existing_module(module_name, __attr1__=42, blar='yo') as existing_module:
assert sys.modules.get(module_name) is new_module # should be the same module we created earlier
assert hasattr(existing_module, '__package__') and existing_module.__package__ == pkg_name
assert hasattr(existing_module, '__attr1__') and existing_module.__attr1__ == 42
assert hasattr(existing_module, 'blar') and existing_module.blar == 'yo'
# exception during update existing shouldn't zap existing module from sys.modules
with pytest.raises(ValueError) as ve:
with _AnsibleCollectionPkgLoaderBase._new_or_existing_module(module_name) as existing_module:
err_to_raise = ValueError('bang')
raise err_to_raise
# make sure we got our error
assert ve.value is err_to_raise
# and that the module still exists
assert sys.modules.get(module_name) is existing_module
# test module removal after exception during creation
nuke_module_prefix(module_name)
with pytest.raises(ValueError) as ve:
with _AnsibleCollectionPkgLoaderBase._new_or_existing_module(module_name) as new_module:
err_to_raise = ValueError('bang')
raise err_to_raise
# make sure we got our error
assert ve.value is err_to_raise
# and that the module was removed
assert sys.modules.get(module_name) is None
def test_iter_modules_impl():
modules_trailer = 'ansible_collections/testns/testcoll/plugins'
modules_pkg_prefix = modules_trailer.replace('/', '.') + '.'
modules_path = os.path.join(default_test_collection_paths[0], modules_trailer)
modules = list(_iter_modules_impl([modules_path], modules_pkg_prefix))
assert modules
assert set([('ansible_collections.testns.testcoll.plugins.action', True),
('ansible_collections.testns.testcoll.plugins.module_utils', True),
('ansible_collections.testns.testcoll.plugins.modules', True)]) == set(modules)
modules_trailer = 'ansible_collections/testns/testcoll/plugins/modules'
modules_pkg_prefix = modules_trailer.replace('/', '.') + '.'
modules_path = os.path.join(default_test_collection_paths[0], modules_trailer)
modules = list(_iter_modules_impl([modules_path], modules_pkg_prefix))
assert modules
assert len(modules) == 1
assert modules[0][0] == 'ansible_collections.testns.testcoll.plugins.modules.amodule' # name
assert modules[0][1] is False # is_pkg
# FIXME: more
# BEGIN IN-CIRCUIT TESTS - these exercise behaviors of the loader when wired up to the import machinery
def test_import_from_collection(monkeypatch):
collection_root = os.path.join(os.path.dirname(__file__), 'fixtures', 'collections')
collection_path = os.path.join(collection_root, 'ansible_collections/testns/testcoll/plugins/module_utils/my_util.py')
# THIS IS UNSTABLE UNDER A DEBUGGER
# the trace we're expecting to be generated when running the code below:
# answer = question()
expected_trace_log = [
(collection_path, 5, 'call'),
(collection_path, 6, 'line'),
(collection_path, 6, 'return'),
]
# define the collection root before any ansible code has been loaded
# otherwise config will have already been loaded and changing the environment will have no effect
monkeypatch.setenv('ANSIBLE_COLLECTIONS_PATH', collection_root)
finder = _AnsibleCollectionFinder(paths=[collection_root])
reset_collections_loader_state(finder)
from ansible_collections.testns.testcoll.plugins.module_utils.my_util import question
original_trace_function = sys.gettrace()
trace_log = []
if original_trace_function:
# enable tracing while preserving the existing trace function (coverage)
def my_trace_function(frame, event, arg):
trace_log.append((frame.f_code.co_filename, frame.f_lineno, event))
# the original trace function expects to have itself set as the trace function
sys.settrace(original_trace_function)
# call the original trace function
original_trace_function(frame, event, arg)
# restore our trace function
sys.settrace(my_trace_function)
return my_trace_function
else:
# no existing trace function, so our trace function is much simpler
def my_trace_function(frame, event, arg):
trace_log.append((frame.f_code.co_filename, frame.f_lineno, event))
return my_trace_function
sys.settrace(my_trace_function)
try:
# run a minimal amount of code while the trace is running
# adding more code here, including use of a context manager, will add more to our trace
answer = question()
finally:
sys.settrace(original_trace_function)
# make sure 'import ... as ...' works on builtin synthetic collections
# the following import is not supported (it tries to find module_utils in ansible.plugins)
# import ansible_collections.ansible.builtin.plugins.module_utils as c1
import ansible_collections.ansible.builtin.plugins.action as c2
import ansible_collections.ansible.builtin.plugins as c3
import ansible_collections.ansible.builtin as c4
import ansible_collections.ansible as c5
import ansible_collections as c6
# make sure 'import ...' works on builtin synthetic collections
import ansible_collections.ansible.builtin.plugins.module_utils
import ansible_collections.ansible.builtin.plugins.action
assert ansible_collections.ansible.builtin.plugins.action == c3.action == c2
import ansible_collections.ansible.builtin.plugins
assert ansible_collections.ansible.builtin.plugins == c4.plugins == c3
import ansible_collections.ansible.builtin
assert ansible_collections.ansible.builtin == c5.builtin == c4
import ansible_collections.ansible
assert ansible_collections.ansible == c6.ansible == c5
import ansible_collections
assert ansible_collections == c6
# make sure 'from ... import ...' works on builtin synthetic collections
from ansible_collections.ansible import builtin
from ansible_collections.ansible.builtin import plugins
assert builtin.plugins == plugins
from ansible_collections.ansible.builtin.plugins import action
from ansible_collections.ansible.builtin.plugins.action import command
assert action.command == command
from ansible_collections.ansible.builtin.plugins.module_utils import basic
from ansible_collections.ansible.builtin.plugins.module_utils.basic import AnsibleModule
assert basic.AnsibleModule == AnsibleModule
# make sure relative imports work from collections code
# these require __package__ to be set correctly
import ansible_collections.testns.testcoll.plugins.module_utils.my_other_util
import ansible_collections.testns.testcoll.plugins.action.my_action
# verify that code loaded from a collection does not inherit __future__ statements from the collection loader
if sys.version_info[0] == 2:
# if the collection code inherits the division future feature from the collection loader this will fail
assert answer == 1
else:
assert answer == 1.5
# verify that the filename and line number reported by the trace is correct
# this makes sure that collection loading preserves file paths and line numbers
assert trace_log == expected_trace_log
def test_eventsource():
es = _EventSource()
# fire when empty should succeed
es.fire(42)
handler1 = MagicMock()
handler2 = MagicMock()
es += handler1
es.fire(99, my_kwarg='blah')
handler1.assert_called_with(99, my_kwarg='blah')
es += handler2
es.fire(123, foo='bar')
handler1.assert_called_with(123, foo='bar')
handler2.assert_called_with(123, foo='bar')
es -= handler2
handler1.reset_mock()
handler2.reset_mock()
es.fire(123, foo='bar')
handler1.assert_called_with(123, foo='bar')
handler2.assert_not_called()
es -= handler1
handler1.reset_mock()
es.fire('blah', kwarg=None)
handler1.assert_not_called()
handler2.assert_not_called()
es -= handler1 # should succeed silently
handler_bang = MagicMock(side_effect=Exception('bang'))
es += handler_bang
with pytest.raises(Exception) as ex:
es.fire(123)
assert 'bang' in str(ex.value)
handler_bang.assert_called_with(123)
with pytest.raises(ValueError):
es += 42
def test_on_collection_load():
finder = get_default_finder()
reset_collections_loader_state(finder)
load_handler = MagicMock()
AnsibleCollectionConfig.on_collection_load += load_handler
m = import_module('ansible_collections.testns.testcoll')
load_handler.assert_called_once_with(collection_name='testns.testcoll', collection_path=os.path.dirname(m.__file__))
_meta = _get_collection_metadata('testns.testcoll')
assert _meta
# FIXME: compare to disk
finder = get_default_finder()
reset_collections_loader_state(finder)
AnsibleCollectionConfig.on_collection_load += MagicMock(side_effect=Exception('bang'))
with pytest.raises(Exception) as ex:
import_module('ansible_collections.testns.testcoll')
assert 'bang' in str(ex.value)
def test_default_collection_config():
finder = get_default_finder()
reset_collections_loader_state(finder)
assert AnsibleCollectionConfig.default_collection is None
AnsibleCollectionConfig.default_collection = 'foo.bar'
assert AnsibleCollectionConfig.default_collection == 'foo.bar'
def test_default_collection_detection():
finder = get_default_finder()
reset_collections_loader_state(finder)
# we're clearly not under a collection path
assert _get_collection_name_from_path('/') is None
# something that looks like a collection path but isn't importable by our finder
assert _get_collection_name_from_path('/foo/ansible_collections/bogusns/boguscoll/bar') is None
# legit, at the top of the collection
live_collection_path = os.path.join(os.path.dirname(__file__), 'fixtures/collections/ansible_collections/testns/testcoll')
assert _get_collection_name_from_path(live_collection_path) == 'testns.testcoll'
# legit, deeper inside the collection
live_collection_deep_path = os.path.join(live_collection_path, 'plugins/modules')
assert _get_collection_name_from_path(live_collection_deep_path) == 'testns.testcoll'
# this one should be hidden by the real testns.testcoll, so should not resolve
masked_collection_path = os.path.join(os.path.dirname(__file__), 'fixtures/collections_masked/ansible_collections/testns/testcoll')
assert _get_collection_name_from_path(masked_collection_path) is None
@pytest.mark.parametrize(
'role_name,collection_list,expected_collection_name,expected_path_suffix',
[
('some_role', ['testns.testcoll', 'ansible.bogus'], 'testns.testcoll', 'testns/testcoll/roles/some_role'),
('testns.testcoll.some_role', ['ansible.bogus', 'testns.testcoll'], 'testns.testcoll', 'testns/testcoll/roles/some_role'),
('testns.testcoll.some_role', [], 'testns.testcoll', 'testns/testcoll/roles/some_role'),
('testns.testcoll.some_role', None, 'testns.testcoll', 'testns/testcoll/roles/some_role'),
('some_role', [], None, None),
('some_role', None, None, None),
])
def test_collection_role_name_location(role_name, collection_list, expected_collection_name, expected_path_suffix):
finder = get_default_finder()
reset_collections_loader_state(finder)
expected_path = None
if expected_path_suffix:
expected_path = os.path.join(os.path.dirname(__file__), 'fixtures/collections/ansible_collections', expected_path_suffix)
found = _get_collection_role_path(role_name, collection_list)
if found:
assert found[0] == role_name.rpartition('.')[2]
assert found[1] == expected_path
assert found[2] == expected_collection_name
else:
assert expected_collection_name is None and expected_path_suffix is None
def test_bogus_imports():
finder = get_default_finder()
reset_collections_loader_state(finder)
# ensure ImportError on known-bogus imports
bogus_imports = ['bogus_toplevel', 'ansible_collections.bogusns', 'ansible_collections.testns.boguscoll',
'ansible_collections.testns.testcoll.bogussub', 'ansible_collections.ansible.builtin.bogussub']
for bogus_import in bogus_imports:
with pytest.raises(ImportError):
import_module(bogus_import)
def test_empty_vs_no_code():
finder = get_default_finder()
reset_collections_loader_state(finder)
from ansible_collections.testns import testcoll # synthetic package with no code on disk
from ansible_collections.testns.testcoll.plugins import module_utils # real package with empty code file
# ensure synthetic packages have no code object at all (prevent bogus coverage entries)
assert testcoll.__loader__.get_source(testcoll.__name__) is None
assert testcoll.__loader__.get_code(testcoll.__name__) is None
# ensure empty package inits do have a code object
assert module_utils.__loader__.get_source(module_utils.__name__) == b''
assert module_utils.__loader__.get_code(module_utils.__name__) is not None
def test_finder_playbook_paths():
finder = get_default_finder()
reset_collections_loader_state(finder)
import ansible_collections
import ansible_collections.ansible
import ansible_collections.testns
# ensure the package modules look like we expect
assert hasattr(ansible_collections, '__path__') and len(ansible_collections.__path__) > 0
assert hasattr(ansible_collections.ansible, '__path__') and len(ansible_collections.ansible.__path__) > 0
assert hasattr(ansible_collections.testns, '__path__') and len(ansible_collections.testns.__path__) > 0
# these shouldn't be visible yet, since we haven't added the playbook dir
with pytest.raises(ImportError):
import ansible_collections.ansible.playbook_adj_other
with pytest.raises(ImportError):
import ansible_collections.testns.playbook_adj_other
assert AnsibleCollectionConfig.playbook_paths == []
playbook_path_fixture_dir = os.path.join(os.path.dirname(__file__), 'fixtures/playbook_path')
# configure the playbook paths
AnsibleCollectionConfig.playbook_paths = [playbook_path_fixture_dir]
# playbook paths go to the front of the line
assert AnsibleCollectionConfig.collection_paths[0] == os.path.join(playbook_path_fixture_dir, 'collections')
# playbook paths should be updated on the existing root ansible_collections path, as well as on the 'ansible' namespace (but no others!)
assert ansible_collections.__path__[0] == os.path.join(playbook_path_fixture_dir, 'collections/ansible_collections')
assert ansible_collections.ansible.__path__[0] == os.path.join(playbook_path_fixture_dir, 'collections/ansible_collections/ansible')
assert all('playbook_path' not in p for p in ansible_collections.testns.__path__)
# should succeed since we fixed up the package path
import ansible_collections.ansible.playbook_adj_other
# should succeed since we didn't import freshns before hacking in the path
import ansible_collections.freshns.playbook_adj_other
# should fail since we've already imported something from this path and didn't fix up its package path
with pytest.raises(ImportError):
import ansible_collections.testns.playbook_adj_other
def test_toplevel_iter_modules():
finder = get_default_finder()
reset_collections_loader_state(finder)
modules = list(pkgutil.iter_modules(default_test_collection_paths, ''))
assert len(modules) == 1
assert modules[0][1] == 'ansible_collections'
def test_iter_modules_namespaces():
finder = get_default_finder()
reset_collections_loader_state(finder)
paths = extend_paths(default_test_collection_paths, 'ansible_collections')
modules = list(pkgutil.iter_modules(paths, 'ansible_collections.'))
assert len(modules) == 2
assert all(m[2] is True for m in modules)
assert all(isinstance(m[0], _AnsiblePathHookFinder) for m in modules)
assert set(['ansible_collections.testns', 'ansible_collections.ansible']) == set(m[1] for m in modules)
def test_collection_get_data():
finder = get_default_finder()
reset_collections_loader_state(finder)
# something that's there
d = pkgutil.get_data('ansible_collections.testns.testcoll', 'plugins/action/my_action.py')
assert b'hello from my_action.py' in d
# something that's not there
d = pkgutil.get_data('ansible_collections.testns.testcoll', 'bogus/bogus')
assert d is None
with pytest.raises(ValueError):
plugins_pkg = import_module('ansible_collections.ansible.builtin')
assert not os.path.exists(os.path.dirname(plugins_pkg.__file__))
d = pkgutil.get_data('ansible_collections.ansible.builtin', 'plugins/connection/local.py')
@pytest.mark.parametrize(
'ref,ref_type,expected_collection,expected_subdirs,expected_resource,expected_python_pkg_name',
[
('ns.coll.myaction', 'action', 'ns.coll', '', 'myaction', 'ansible_collections.ns.coll.plugins.action'),
('ns.coll.subdir1.subdir2.myaction', 'action', 'ns.coll', 'subdir1.subdir2', 'myaction', 'ansible_collections.ns.coll.plugins.action.subdir1.subdir2'),
('ns.coll.myrole', 'role', 'ns.coll', '', 'myrole', 'ansible_collections.ns.coll.roles.myrole'),
('ns.coll.subdir1.subdir2.myrole', 'role', 'ns.coll', 'subdir1.subdir2', 'myrole', 'ansible_collections.ns.coll.roles.subdir1.subdir2.myrole'),
])
def test_fqcr_parsing_valid(ref, ref_type, expected_collection,
expected_subdirs, expected_resource, expected_python_pkg_name):
assert AnsibleCollectionRef.is_valid_fqcr(ref, ref_type)
r = AnsibleCollectionRef.from_fqcr(ref, ref_type)
assert r.collection == expected_collection
assert r.subdirs == expected_subdirs
assert r.resource == expected_resource
assert r.n_python_package_name == expected_python_pkg_name
r = AnsibleCollectionRef.try_parse_fqcr(ref, ref_type)
assert r.collection == expected_collection
assert r.subdirs == expected_subdirs
assert r.resource == expected_resource
assert r.n_python_package_name == expected_python_pkg_name
@pytest.mark.parametrize(
'ref,ref_type,expected_error_type,expected_error_expression',
[
('no_dots_at_all_action', 'action', ValueError, 'is not a valid collection reference'),
('no_nscoll.myaction', 'action', ValueError, 'is not a valid collection reference'),
('ns.coll.myaction', 'bogus', ValueError, 'invalid collection ref_type'),
])
def test_fqcr_parsing_invalid(ref, ref_type, expected_error_type, expected_error_expression):
assert not AnsibleCollectionRef.is_valid_fqcr(ref, ref_type)
with pytest.raises(expected_error_type) as curerr:
AnsibleCollectionRef.from_fqcr(ref, ref_type)
assert re.search(expected_error_expression, str(curerr.value))
r = AnsibleCollectionRef.try_parse_fqcr(ref, ref_type)
assert r is None
@pytest.mark.parametrize(
'name,subdirs,resource,ref_type,python_pkg_name',
[
('ns.coll', None, 'res', 'doc_fragments', 'ansible_collections.ns.coll.plugins.doc_fragments'),
('ns.coll', 'subdir1', 'res', 'doc_fragments', 'ansible_collections.ns.coll.plugins.doc_fragments.subdir1'),
('ns.coll', 'subdir1.subdir2', 'res', 'action', 'ansible_collections.ns.coll.plugins.action.subdir1.subdir2'),
])
def test_collectionref_components_valid(name, subdirs, resource, ref_type, python_pkg_name):
x = AnsibleCollectionRef(name, subdirs, resource, ref_type)
assert x.collection == name
if subdirs:
assert x.subdirs == subdirs
else:
assert x.subdirs == ''
assert x.resource == resource
assert x.ref_type == ref_type
assert x.n_python_package_name == python_pkg_name
@pytest.mark.parametrize(
'dirname,expected_result',
[
('become_plugins', 'become'),
('cache_plugins', 'cache'),
('connection_plugins', 'connection'),
('library', 'modules'),
('filter_plugins', 'filter'),
('bogus_plugins', ValueError),
(None, ValueError)
]
)
def test_legacy_plugin_dir_to_plugin_type(dirname, expected_result):
if isinstance(expected_result, string_types):
assert AnsibleCollectionRef.legacy_plugin_dir_to_plugin_type(dirname) == expected_result
else:
with pytest.raises(expected_result):
AnsibleCollectionRef.legacy_plugin_dir_to_plugin_type(dirname)
@pytest.mark.parametrize(
'name,subdirs,resource,ref_type,expected_error_type,expected_error_expression',
[
('bad_ns', '', 'resource', 'action', ValueError, 'invalid collection name'),
('ns.coll.', '', 'resource', 'action', ValueError, 'invalid collection name'),
('ns.coll', 'badsubdir#', 'resource', 'action', ValueError, 'invalid subdirs entry'),
('ns.coll', 'badsubdir.', 'resource', 'action', ValueError, 'invalid subdirs entry'),
('ns.coll', '.badsubdir', 'resource', 'action', ValueError, 'invalid subdirs entry'),
('ns.coll', '', 'resource', 'bogus', ValueError, 'invalid collection ref_type'),
])
def test_collectionref_components_invalid(name, subdirs, resource, ref_type, expected_error_type, expected_error_expression):
with pytest.raises(expected_error_type) as curerr:
AnsibleCollectionRef(name, subdirs, resource, ref_type)
assert re.search(expected_error_expression, str(curerr.value))
# BEGIN TEST SUPPORT
default_test_collection_paths = [
os.path.join(os.path.dirname(__file__), 'fixtures', 'collections'),
os.path.join(os.path.dirname(__file__), 'fixtures', 'collections_masked'),
'/bogus/bogussub'
]
def get_default_finder():
return _AnsibleCollectionFinder(paths=default_test_collection_paths)
def extend_paths(path_list, suffix):
suffix = suffix.replace('.', '/')
return [os.path.join(p, suffix) for p in path_list]
def nuke_module_prefix(prefix):
for module_to_nuke in [m for m in sys.modules if m.startswith(prefix)]:
sys.modules.pop(module_to_nuke)
def reset_collections_loader_state(metapath_finder=None):
_AnsibleCollectionFinder._remove()
nuke_module_prefix('ansible_collections')
nuke_module_prefix('ansible.modules')
nuke_module_prefix('ansible.plugins')
# FIXME: better to move this someplace else that gets cleaned up automatically?
_AnsibleCollectionLoader._redirected_package_map = {}
AnsibleCollectionConfig._default_collection = None
AnsibleCollectionConfig._on_collection_load = _EventSource()
if metapath_finder:
metapath_finder._install()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,230 |
Storing labeled passwords in files
|
##### SUMMARY
I tried storing labeled passwords in a single file as described in: https://docs.ansible.com/ansible/latest/user_guide/vault.html#storing-passwords-in-files
However, it does not seem to work. It still expects a single password per file.
I did add `vault_id_match = True` to my ansible.cfg but without any effect.
Is there something I missed? The documentation seems to be new, was the feature not yet implemented?
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
ansible/docs/docsite/rst/user_guide/vault.rst
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.10.1
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_VAULT_IDENTITY_LIST(/home/user/project/ansible.cfg) = ['all@passwords']
DEFAULT_VAULT_ID_MATCH(/home/user/project/ansible.cfg) = True
DEFAULT_VAULT_PASSWORD_FILE(/home/user/project/ansible.cfg) = /home/user/project/passwords
```
##### OS / ENVIRONMENT
Arch - Manjaro
##### ADDITIONAL INFORMATION
<!--- Describe how this improves the documentation, e.g. before/after situation or screenshots -->
|
https://github.com/ansible/ansible/issues/72230
|
https://github.com/ansible/ansible/pull/72938
|
0ba96d2be8b512db85b62dc2c6d4c33b77a2f1f0
|
8450858651f2c50153c38ba9273684a2ec2d7335
| 2020-10-15T21:50:57Z |
python
| 2020-12-15T15:06:10Z |
docs/docsite/rst/user_guide/vault.rst
|
.. _vault:
*************************************
Encrypting content with Ansible Vault
*************************************
Ansible Vault encrypts variables and files so you can protect sensitive content such as passwords or keys rather than leaving it visible as plaintext in playbooks or roles. To use Ansible Vault you need one or more passwords to encrypt and decrypt content. If you store your vault passwords in a third-party tool such as a secret manager, you need a script to access them. Use the passwords with the :ref:`ansible-vault` command-line tool to create and view encrypted variables, create encrypted files, encrypt existing files, or edit, re-key, or decrypt files. You can then place encrypted content under source control and share it more safely.
.. warning::
* Encryption with Ansible Vault ONLY protects 'data at rest'. Once the content is decrypted ('data in use'), play and plugin authors are responsible for avoiding any secret disclosure, see :ref:`no_log <keep_secret_data>` for details on hiding output and :ref:`vault_securing_editor` for security considerations on editors you use with Ansible Vault.
You can use encrypted variables and files in ad-hoc commands and playbooks by supplying the passwords you used to encrypt them. You can modify your ``ansible.cfg`` file to specify the location of a password file or to always prompt for the password.
.. contents::
:local:
Managing vault passwords
========================
Managing your encrypted content is easier if you develop a strategy for managing your vault passwords. A vault password can be any string you choose. There is no special command to create a vault password. However, you need to keep track of your vault passwords. Each time you encrypt a variable or file with Ansible Vault, you must provide a password. When you use an encrypted variable or file in a command or playbook, you must provide the same password that was used to encrypt it. To develop a strategy for managing vault passwords, start with two questions:
* Do you want to encrypt all your content with the same password, or use different passwords for different needs?
* Where do you want to store your password or passwords?
Choosing between a single password and multiple passwords
---------------------------------------------------------
If you have a small team or few sensitive values, you can use a single password for everything you encrypt with Ansible Vault. Store your vault password securely in a file or a secret manager as described below.
If you have a larger team or many sensitive values, you can use multiple passwords. For example, you can use different passwords for different users or different levels of access. Depending on your needs, you might want a different password for each encrypted file, for each directory, or for each environment. For example, you might have a playbook that includes two vars files, one for the dev environment and one for the production environment, encrypted with two different passwords. When you run the playbook, select the correct vault password for the environment you are targeting, using a vault ID.
.. _vault_ids:
Managing multiple passwords with vault IDs
------------------------------------------
If you use multiple vault passwords, you can differentiate one password from another with vault IDs. You use the vault ID in three ways:
* Pass it with :option:`--vault-id <ansible-playbook --vault-id>` to the :ref:`ansible-vault` command when you create encrypted content
* Include it wherever you store the password for that vault ID (see :ref:`storing_vault_passwords`)
* Pass it with :option:`--vault-id <ansible-playbook --vault-id>` to the :ref:`ansible-playbook` command when you run a playbook that uses content you encrypted with that vault ID
When you pass a vault ID as an option to the :ref:`ansible-vault` command, you add a label (a hint or nickname) to the encrypted content. This label documents which password you used to encrypt it. The encrypted variable or file includes the vault ID label in plain text in the header. The vault ID is the last element before the encrypted content. For example::
my_encrypted_var: !vault |
$ANSIBLE_VAULT;1.2;AES256;dev
30613233633461343837653833666333643061636561303338373661313838333565653635353162
3263363434623733343538653462613064333634333464660a663633623939393439316636633863
61636237636537333938306331383339353265363239643939666639386530626330633337633833
6664656334373166630a363736393262666465663432613932613036303963343263623137386239
6330
In addition to the label, you must provide a source for the related password. The source can be a prompt, a file, or a script, depending on how you are storing your vault passwords. The pattern looks like this:
.. code-block:: bash
--vault-id label@source
If your playbook uses multiple encrypted variables or files that you encrypted with different passwords, you must pass the vault IDs when you run that playbook. You can use :option:`--vault-id <ansible-playbook --vault-id>` by itself, with :option:`--vault-password-file <ansible-playbook --vault-password-file>`, or with :option:`--ask-vault-pass <ansible-playbook --ask-vault-pass>`. The pattern is the same as when you create encrypted content: include the label and the source for the matching password.
See below for examples of encrypting content with vault IDs and using content encrypted with vault IDs. The :option:`--vault-id <ansible-playbook --vault-id>` option works with any Ansible command that interacts with vaults, including :ref:`ansible-vault`, :ref:`ansible-playbook`, and so on.
Limitations of vault IDs
^^^^^^^^^^^^^^^^^^^^^^^^
Ansible does not enforce using the same password every time you use a particular vault ID label. You can encrypt different variables or files with the same vault ID label but different passwords. This usually happens when you type the password at a prompt and make a mistake. It is possible to use different passwords with the same vault ID label on purpose. For example, you could use each label as a reference to a class of passwords, rather than a single password. In this scenario, you must always know which specific password or file to use in context. However, you are more likely to encrypt two files with the same vault ID label and different passwords by mistake. If you encrypt two files with the same label but different passwords by accident, you can :ref:`rekey <rekeying_files>` one file to fix the issue.
Enforcing vault ID matching
^^^^^^^^^^^^^^^^^^^^^^^^^^^
By default the vault ID label is only a hint to remind you which password you used to encrypt a variable or file. Ansible does not check that the vault ID in the header of the encrypted content matches the vault ID you provide when you use the content. Ansible decrypts all files and variables called by your command or playbook that are encrypted with the password you provide. To check the encrypted content and decrypt it only when the vault ID it contains matches the one you provide with ``--vault-id``, set the config option :ref:`DEFAULT_VAULT_ID_MATCH`. When you set :ref:`DEFAULT_VAULT_ID_MATCH`, each password is only used to decrypt data that was encrypted with the same label. This is efficient, predictable, and can reduce errors when different values are encrypted with different passwords.
.. note::
Even with the :ref:`DEFAULT_VAULT_ID_MATCH` setting enabled, Ansible does not enforce using the same password every time you use a particular vault ID label.
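For example, to enable this check from ``ansible.cfg`` (the :ref:`DEFAULT_VAULT_ID_MATCH` option corresponds to the ``vault_id_match`` key in the ``[defaults]`` section):

.. code-block:: text

   [defaults]
   vault_id_match = True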
.. _storing_vault_passwords:
Storing and accessing vault passwords
-------------------------------------
You can memorize your vault password, or manually copy vault passwords from any source and paste them at a command-line prompt, but most users store them securely and access them as needed from within Ansible. You have two options for storing vault passwords that work from within Ansible: in files, or in a third-party tool such as the system keyring or a secret manager. If you store your passwords in a third-party tool, you need a vault password client script to retrieve them from within Ansible.
Storing passwords in files
^^^^^^^^^^^^^^^^^^^^^^^^^^
To store a vault password in a file, enter the password as a string on a single line in the file. Make sure the permissions on the file are appropriate. Do not add password files to source control. If you have multiple passwords, you can store them all in a single file, as long as they all have vault IDs. For each password, create a separate line and enter the vault ID, a space, then the password as a string. For example:
.. code-block:: text
dev my_dev_pass
test my_test_pass
prod my_prod_pass
.. _vault_password_client_scripts:
Storing passwords in third-party tools with vault password client scripts
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can store your vault passwords on the system keyring, in a database, or in a secret manager and retrieve them from within Ansible using a vault password client script. Enter the password as a string on a single line. If your password has a vault ID, store it in a way that works with your password storage tool.
To create a vault password client script:
* Create a file with a name ending in ``-client.py``
* Make the file executable
* Within the script itself:
* Print the passwords to standard output
* Accept a ``--vault-id`` option
* If the script prompts for data (for example, a database password), send the prompts to standard error
When you run a playbook that uses vault passwords stored in a third-party tool, specify the script as the source within the ``--vault-id`` flag. For example:
.. code-block:: bash
ansible-playbook --vault-id dev@contrib/vault/vault-keyring-client.py
Ansible executes the client script with a ``--vault-id`` option so the script knows which vault ID label you specified. For example, a script loading passwords from a secret manager can use the vault ID label to pick either the 'dev' or 'prod' password. The example command above results in the following execution of the client script:
.. code-block:: bash
contrib/vault/vault-keyring-client.py --vault-id dev
For an example of a client script that loads passwords from the system keyring, see :file:`contrib/vault/vault-keyring-client.py`.
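The following is a minimal sketch of such a client script, reusing the ``my-vault-password-client.py`` name from the examples below. The hard-coded ``PASSWORDS`` mapping is an assumption for illustration; a real script would query the system keyring or a secret manager instead.

.. code-block:: python

   #!/usr/bin/env python
   # my-vault-password-client.py: minimal sketch of a vault password client script
   import argparse
   import sys

   # placeholder store for illustration only; a real script would query a secret manager
   PASSWORDS = {'dev': 'my_dev_pass', 'prod': 'my_prod_pass'}


   def main():
       parser = argparse.ArgumentParser()
       # Ansible invokes the script as: my-vault-password-client.py --vault-id <label>
       parser.add_argument('--vault-id', dest='vault_id', default=None)
       args = parser.parse_args()
       password = PASSWORDS.get(args.vault_id)
       if password is None:
           # prompts and diagnostics go to standard error; only the password goes to standard output
           sys.stderr.write('no password stored for vault ID %r\n' % args.vault_id)
           return 1
       sys.stdout.write('%s\n' % password)
       return 0


   if __name__ == '__main__':
       sys.exit(main())

Remember to make the script executable, for example with ``chmod +x my-vault-password-client.py``.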
Encrypting content with Ansible Vault
=====================================
Once you have a strategy for managing and storing vault passwords, you can start encrypting content. You can encrypt two types of content with Ansible Vault: variables and files. Encrypted content always includes the ``!vault`` tag, which tells Ansible and YAML that the content needs to be decrypted, and a ``|`` character, which allows multi-line strings. Encrypted content created with ``--vault-id`` also contains the vault ID label. For more details about the encryption process and the format of content encrypted with Ansible Vault, see :ref:`vault_format`. This table shows the main differences between encrypted variables and encrypted files:
.. table::
:class: documentation-table
====================== ================================= ====================================
.. Encrypted variables Encrypted files
====================== ================================= ====================================
How much is encrypted? Variables within a plaintext file The entire file
When is it decrypted? On demand, only when needed Whenever loaded or referenced [#f1]_
What can be encrypted? Only variables Any structured data file
====================== ================================= ====================================
.. [#f1] Ansible cannot know if it needs content from an encrypted file unless it decrypts the file, so it decrypts all encrypted files referenced in your playbooks and roles.
.. _encrypting_variables:
.. _single_encrypted_variable:
Encrypting individual variables with Ansible Vault
--------------------------------------------------
You can encrypt single values inside a YAML file using the :ref:`ansible-vault encrypt_string <ansible_vault_encrypt_string>` command. For one way to keep your vaulted variables safely visible, see :ref:`tip_for_variables_and_vaults`.
Advantages and disadvantages of encrypting variables
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
With variable-level encryption, your files are still easily legible. You can mix plaintext and encrypted variables, even inline in a play or role. However, password rotation is not as simple as with file-level encryption. You cannot :ref:`rekey <rekeying_files>` encrypted variables. Also, variable-level encryption only works on variables. If you want to encrypt tasks or other content, you must encrypt the entire file.
.. _encrypt_string_for_use_in_yaml:
Creating encrypted variables
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The :ref:`ansible-vault encrypt_string <ansible_vault_encrypt_string>` command encrypts and formats any string you type (or copy or generate) into a format that can be included in a playbook, role, or variables file. To create a basic encrypted variable, pass three options to the :ref:`ansible-vault encrypt_string <ansible_vault_encrypt_string>` command:
* a source for the vault password (prompt, file, or script, with or without a vault ID)
* the string to encrypt
* the string name (the name of the variable)
The pattern looks like this:
.. code-block:: bash
ansible-vault encrypt_string <password_source> '<string_to_encrypt>' --name '<string_name_of_variable>'
For example, to encrypt the string 'foobar' using the only password stored in 'a_password_file' and name the variable 'the_secret':
.. code-block:: bash
ansible-vault encrypt_string --vault-password-file a_password_file 'foobar' --name 'the_secret'
The command above creates this content::
the_secret: !vault |
$ANSIBLE_VAULT;1.1;AES256
62313365396662343061393464336163383764373764613633653634306231386433626436623361
6134333665353966363534333632666535333761666131620a663537646436643839616531643561
63396265333966386166373632626539326166353965363262633030333630313338646335303630
3438626666666137650a353638643435666633633964366338633066623234616432373231333331
6564
To encrypt the string 'foooodev', add the vault ID label 'dev' with the 'dev' vault password stored in 'a_password_file', and call the encrypted variable 'the_dev_secret':
.. code-block:: bash
ansible-vault encrypt_string --vault-id dev@a_password_file 'foooodev' --name 'the_dev_secret'
The command above creates this content::
the_dev_secret: !vault |
$ANSIBLE_VAULT;1.2;AES256;dev
30613233633461343837653833666333643061636561303338373661313838333565653635353162
3263363434623733343538653462613064333634333464660a663633623939393439316636633863
61636237636537333938306331383339353265363239643939666639386530626330633337633833
6664656334373166630a363736393262666465663432613932613036303963343263623137386239
6330
To encrypt the string 'letmein' read from stdin, add the vault ID 'test' using the 'test' vault password stored in 'a_password_file', and name the variable 'test_db_password':
.. code-block:: bash
echo -n 'letmein' | ansible-vault encrypt_string --vault-id test@a_password_file --stdin-name 'test_db_password'
.. warning::
Typing secret content directly at the command line (without a prompt) leaves the secret string in your shell history. Do not do this outside of testing.
The command above creates this output::
Reading plaintext input from stdin. (ctrl-d to end input, twice if your content does not already have a new line)
test_db_password: !vault |
$ANSIBLE_VAULT;1.2;AES256;test
61323931353866666336306139373937316366366138656131323863373866376666353364373761
3539633234313836346435323766306164626134376564330a373530313635343535343133316133
36643666306434616266376434363239346433643238336464643566386135356334303736353136
6565633133366366360a326566323363363936613664616364623437336130623133343530333739
3039
To be prompted for a string to encrypt, encrypt it with the 'dev' vault password from 'a_password_file', name the variable 'new_user_password' and give it the vault ID label 'dev':
.. code-block:: bash
ansible-vault encrypt_string --vault-id dev@a_password_file --stdin-name 'new_user_password'
The command above triggers this prompt:
.. code-block:: text
Reading plaintext input from stdin. (ctrl-d to end input, twice if your content does not already have a new line)
Type the string to encrypt (for example, 'hunter2'), hit ctrl-d, and wait.
.. warning::
Do not press ``Enter`` after supplying the string to encrypt. That will add a newline to the encrypted value.
The sequence above creates this output::
new_user_password: !vault |
$ANSIBLE_VAULT;1.2;AES256;dev
37636561366636643464376336303466613062633537323632306566653533383833366462366662
6565353063303065303831323539656138653863353230620a653638643639333133306331336365
62373737623337616130386137373461306535383538373162316263386165376131623631323434
3866363862363335620a376466656164383032633338306162326639643635663936623939666238
3161
You can add the output from any of the examples above to any playbook, variables file, or role for future use. Encrypted variables are larger than plain-text variables, but they protect your sensitive content while leaving the rest of the playbook, variables file, or role in plain text so you can easily read it.
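For example, a variables file might mix plaintext values with the encrypted variable created above. The plaintext variable below is illustrative, and the ciphertext is elided:

.. code-block:: yaml

   # vars.yml: plaintext and encrypted variables can live side by side
   app_user: deploy
   new_user_password: !vault |
             $ANSIBLE_VAULT;1.2;AES256;dev
             37636561366636643464376336303466613062633537323632306566653533383833366462366662
             ...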
Viewing encrypted variables
^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can view the original value of an encrypted variable using the debug module. You must pass the password that was used to encrypt the variable. For example, if you stored the variable created by the last example above in a file called 'vars.yml', you could view the unencrypted value of that variable like this:
.. code-block:: console
ansible localhost -m ansible.builtin.debug -a var="new_user_password" -e "@vars.yml" --vault-id dev@a_password_file
localhost | SUCCESS => {
"new_user_password": "hunter2"
}
Encrypting files with Ansible Vault
-----------------------------------
Ansible Vault can encrypt any structured data file used by Ansible, including:
* group variables files from inventory
* host variables files from inventory
* variables files passed to ansible-playbook with ``-e @file.yml`` or ``-e @file.json``
* variables files loaded by ``include_vars`` or ``vars_files``
* variables files in roles
* defaults files in roles
* tasks files
* handlers files
* binary files or other arbitrary files
The full file is encrypted in the vault.
.. note::
Ansible Vault uses an editor to create or modify encrypted files. See :ref:`vault_securing_editor` for some guidance on securing the editor.
Advantages and disadvantages of encrypting files
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File-level encryption is easy to use. Password rotation for encrypted files is straightforward with the :ref:`rekey <rekeying_files>` command. Encrypting files can hide not only sensitive values, but the names of the variables you use. However, with file-level encryption the contents of files are no longer easy to access and read. This may be a problem with encrypted tasks files. When encrypting a variables file, see :ref:`tip_for_variables_and_vaults` for one way to keep references to these variables in a non-encrypted file. Ansible always decrypts the entire encrypted file whenever it is loaded or referenced, because Ansible cannot know if it needs the content unless it decrypts it.
.. _creating_files:
Creating encrypted files
^^^^^^^^^^^^^^^^^^^^^^^^
To create a new encrypted data file called 'foo.yml' with the 'test' vault password from 'multi_password_file':
.. code-block:: bash
ansible-vault create --vault-id test@multi_password_file foo.yml
The tool launches an editor (whatever editor you have defined with ``$EDITOR``; the default is ``vi``). Add the content. When you close the editor session, the file is saved as encrypted data. The file header reflects the vault ID used to create it:
.. code-block:: text
$ANSIBLE_VAULT;1.2;AES256;test
To create a new encrypted data file with the vault ID 'my_new_password' assigned to it and be prompted for the password:
.. code-block:: bash
ansible-vault create --vault-id my_new_password@prompt foo.yml
Again, add content to the file in the editor and save. Be sure to store the new password you created at the prompt, so you can find it when you want to decrypt that file.
.. _encrypting_files:
Encrypting existing files
^^^^^^^^^^^^^^^^^^^^^^^^^
To encrypt an existing file, use the :ref:`ansible-vault encrypt <ansible_vault_encrypt>` command. This command can operate on multiple files at once. For example:
.. code-block:: bash
ansible-vault encrypt foo.yml bar.yml baz.yml
To encrypt existing files with the 'project' ID and be prompted for the password:
.. code-block:: bash
ansible-vault encrypt --vault-id project@prompt foo.yml bar.yml baz.yml
.. _viewing_files:
Viewing encrypted files
^^^^^^^^^^^^^^^^^^^^^^^
To view the contents of an encrypted file without editing it, you can use the :ref:`ansible-vault view <ansible_vault_view>` command:
.. code-block:: bash
ansible-vault view foo.yml bar.yml baz.yml
.. _editing_encrypted_files:
Editing encrypted files
^^^^^^^^^^^^^^^^^^^^^^^
To edit an encrypted file in place, use the :ref:`ansible-vault edit <ansible_vault_edit>` command. This command decrypts the file to a temporary file, allows you to edit the content, then saves and re-encrypts the content and removes the temporary file when you close the editor. For example:
.. code-block:: bash
ansible-vault edit foo.yml
To edit a file encrypted with the ``vault2`` password file and assigned the vault ID ``pass2``:
.. code-block:: bash
ansible-vault edit --vault-id pass2@vault2 foo.yml
.. _rekeying_files:
Changing the password and/or vault ID on encrypted files
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To change the password on an encrypted file or files, use the :ref:`rekey <ansible_vault_rekey>` command:
.. code-block:: bash
ansible-vault rekey foo.yml bar.yml baz.yml
This command can rekey multiple data files at once and will ask for the original password and also the new password. To set a different ID for the rekeyed files, pass the new ID to ``--new-vault-id``. For example, to rekey a list of files encrypted with the 'preprod1' vault ID from the 'ppold' file to the 'preprod2' vault ID and be prompted for the new password:
.. code-block:: bash
ansible-vault rekey --vault-id preprod1@ppold --new-vault-id preprod2@prompt foo.yml bar.yml baz.yml
.. _decrypting_files:
Decrypting encrypted files
^^^^^^^^^^^^^^^^^^^^^^^^^^
If you have an encrypted file that you no longer want to keep encrypted, you can permanently decrypt it by running the :ref:`ansible-vault decrypt <ansible_vault_decrypt>` command. This command will save the file unencrypted to the disk, so be sure you do not want to :ref:`edit <ansible_vault_edit>` it instead.
.. code-block:: bash
ansible-vault decrypt foo.yml bar.yml baz.yml
.. _vault_securing_editor:
Steps to secure your editor
^^^^^^^^^^^^^^^^^^^^^^^^^^^
Ansible Vault relies on your configured editor, which can be a source of disclosures. Most editors have ways to prevent loss of data, but these normally rely on extra plain text files that can have a clear text copy of your secrets. Consult your editor documentation to configure the editor to avoid disclosing secure data. The following sections provide some guidance on common editors but should not be taken as a complete guide to securing your editor.
vim
...
You can set the following ``vim`` options in command mode to avoid cases of disclosure. There may be more settings you need to modify to ensure security, especially when using plugins, so consult the ``vim`` documentation.
1. Disable swapfiles that act like an autosave in case of crash or interruption.
.. code-block:: text
set noswapfile
2. Disable creation of backup files.
.. code-block:: text
set nobackup
set nowritebackup
3. Disable the viminfo file from copying data from your current session.
.. code-block:: text
set viminfo=
4. Disable copying to the system clipboard.
.. code-block:: text
set clipboard=
You can optionally add these settings in ``.vimrc`` for all files, or just specific paths or extensions. See the ``vim`` manual for details.
Emacs
......
You can set the following Emacs options to avoid cases of disclosure. There may be more settings you need to modify to ensure security, especially when using plugins, so consult the Emacs documentation.
1. Do not copy data to the system clipboard.
.. code-block:: text
(setq x-select-enable-clipboard nil)
2. Disable creation of backup files.
.. code-block:: text
(setq make-backup-files nil)
3. Disable autosave files.
.. code-block:: text
(setq auto-save-default nil)
.. _playbooks_vault:
.. _providing_vault_passwords:
Using encrypted variables and files
===================================
When you run a task or playbook that uses encrypted variables or files, you must provide the passwords to decrypt the variables or files. You can do this at the command line or by configuring a default password source as described below.
Passing a single password
-------------------------
If all the encrypted variables and files your task or playbook needs use a single password, you can use the :option:`--ask-vault-pass <ansible-playbook --ask-vault-pass>` or :option:`--vault-password-file <ansible-playbook --vault-password-file>` CLI options.
To prompt for the password:
.. code-block:: bash
ansible-playbook --ask-vault-pass site.yml
To retrieve the password from the :file:`/path/to/my/vault-password-file` file:
.. code-block:: bash
ansible-playbook --vault-password-file /path/to/my/vault-password-file site.yml
To get the password from the vault password client script :file:`my-vault-password-client.py`:
.. code-block:: bash
ansible-playbook --vault-password-file my-vault-password-client.py
.. _specifying_vault_ids:
Passing vault IDs
-----------------
You can also use the :option:`--vault-id <ansible-playbook --vault-id>` option to pass a single password with its vault label. This approach is clearer when multiple vaults are used within a single inventory.
To prompt for the password for the 'dev' vault ID:
.. code-block:: bash
ansible-playbook --vault-id dev@prompt site.yml
To retrieve the password for the 'dev' vault ID from the :file:`dev-password` file:
.. code-block:: bash
ansible-playbook --vault-id dev@dev-password site.yml
To get the password for the 'dev' vault ID from the vault password client script :file:`my-vault-password-client.py`:
.. code-block:: bash
ansible-playbook --vault-id [email protected]
Passing multiple vault passwords
--------------------------------
If your task or playbook requires multiple encrypted variables or files that you encrypted with different vault IDs, you must pass the :option:`--vault-id <ansible-playbook --vault-id>` option multiple times, specifying a vault ID ('dev', 'prod', 'cloud', 'db') and a source for each password (prompt, file, or script). For example, to use a 'dev' password read from a file and to be prompted for the 'prod' password:
.. code-block:: bash
ansible-playbook --vault-id dev@dev-password --vault-id prod@prompt site.yml
By default the vault ID labels (dev, prod, and so on) are only hints. Ansible attempts to decrypt vault content with each password. The password with the same label as the encrypted data is tried first; after that, each vault secret is tried in the order it was provided on the command line.
Where the encrypted data has no label, or the label does not match any of the provided labels, the passwords will be tried in the order they are specified. In the example above, the 'dev' password will be tried first, then the 'prod' password for cases where Ansible doesn't know which vault ID is used to encrypt something.
Using ``--vault-id`` without a vault ID
---------------------------------------
The :option:`--vault-id <ansible-playbook --vault-id>` option can also be used without specifying a vault ID. This behavior is equivalent to :option:`--ask-vault-pass <ansible-playbook --ask-vault-pass>` or :option:`--vault-password-file <ansible-playbook --vault-password-file>`, so it is rarely used.
For example, to use a password file :file:`dev-password`:
.. code-block:: bash
ansible-playbook --vault-id dev-password site.yml
To prompt for the password:
.. code-block:: bash
ansible-playbook --vault-id @prompt site.yml
To get the password from an executable script :file:`my-vault-password-client.py`:
.. code-block:: bash
ansible-playbook --vault-id my-vault-password-client.py
Configuring defaults for using encrypted content
================================================
Setting a default vault ID
--------------------------
If you use one vault ID more frequently than any other, you can set the config option :ref:`DEFAULT_VAULT_IDENTITY_LIST` to specify a default vault ID and password source. Ansible will use the default vault ID and source any time you do not specify :option:`--vault-id <ansible-playbook --vault-id>`. You can set multiple values for this option. Setting multiple values is equivalent to passing multiple :option:`--vault-id <ansible-playbook --vault-id>` CLI options.
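For example, a sketch of an ``ansible.cfg`` entry that makes the 'dev' password file and a prompt for 'prod' the defaults (:ref:`DEFAULT_VAULT_IDENTITY_LIST` corresponds to the ``vault_identity_list`` key in the ``[defaults]`` section):

.. code-block:: text

   [defaults]
   vault_identity_list = dev@dev-password, prod@prompt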
Setting a default password source
---------------------------------
If you use one vault password file more frequently than any other, you can set the :ref:`DEFAULT_VAULT_PASSWORD_FILE` config option or the :envvar:`ANSIBLE_VAULT_PASSWORD_FILE` environment variable to specify that file. For example, if you set ``ANSIBLE_VAULT_PASSWORD_FILE=~/.vault_pass.txt``, Ansible will automatically search for the password in that file. This is useful if, for example, you use Ansible from a continuous integration system such as Jenkins.
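For example, in a POSIX shell:

.. code-block:: bash

   export ANSIBLE_VAULT_PASSWORD_FILE=~/.vault_pass.txt
   ansible-playbook site.yml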
When are encrypted files made visible?
======================================
In general, content you encrypt with Ansible Vault remains encrypted after execution. However, there is one exception. If you pass an encrypted file as the ``src`` argument to the :ref:`copy <copy_module>`, :ref:`template <template_module>`, :ref:`unarchive <unarchive_module>`, :ref:`script <script_module>` or :ref:`assemble <assemble_module>` module, the file will not be encrypted on the target host (assuming you supply the correct vault password when you run the play). This behavior is intended and useful. You can encrypt a configuration file or template to avoid sharing the details of your configuration, but when you copy that configuration to servers in your environment, you want it to be decrypted so local users and processes can access it.
.. _speeding_up_vault:
Speeding up Ansible Vault
=========================
If you have many encrypted files, decrypting them at startup may cause a perceptible delay. To speed this up, install the cryptography package:
.. code-block:: bash
pip install cryptography
.. _vault_format:
Format of files encrypted with Ansible Vault
============================================
Ansible Vault creates UTF-8 encoded text files. The file format includes a newline-terminated header. For example::
$ANSIBLE_VAULT;1.1;AES256
or::
$ANSIBLE_VAULT;1.2;AES256;vault-id-label
The header contains up to four elements, separated by semi-colons (``;``).
1. The format ID (``$ANSIBLE_VAULT``). Currently ``$ANSIBLE_VAULT`` is the only valid format ID. The format ID identifies content that is encrypted with Ansible Vault (via vault.is_encrypted_file()).
2. The vault format version (``1.X``). All supported versions of Ansible will currently default to '1.1' or '1.2' if a labeled vault ID is supplied. The '1.0' format is supported for reading only (and will be converted automatically to the '1.1' format on write). The format version is currently used as an exact string compare only (version numbers are not currently 'compared').
3. The cipher algorithm used to encrypt the data (``AES256``). Currently ``AES256`` is the only supported cipher algorithm. Vault format 1.0 used 'AES', but current code always uses 'AES256'.
4. The vault ID label used to encrypt the data (optional, ``vault-id-label``). For example, if you encrypt a file with ``--vault-id dev@prompt``, the vault-id-label is ``dev``.
Note: In the future, the header could change. Fields after the format ID and format version depend on the format version, and future vault format versions may add more cipher algorithm options and/or additional fields.
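To make the header layout concrete, here is a minimal, illustrative parser. It is not Ansible's own ``vault.is_encrypted_file()``; it only splits the first line into the elements described above:

.. code-block:: python

    def parse_vault_header(first_line):
        """Split an Ansible Vault envelope header into its elements."""
        parts = first_line.strip().split(';')
        if parts[0] != '$ANSIBLE_VAULT':
            raise ValueError('not Ansible Vault data')
        format_id, version, cipher = parts[0], parts[1], parts[2]
        label = parts[3] if len(parts) > 3 else None  # only present in 1.2 headers
        return format_id, version, cipher, label

    # ('$ANSIBLE_VAULT', '1.2', 'AES256', 'vault-id-label')
    print(parse_vault_header('$ANSIBLE_VAULT;1.2;AES256;vault-id-label'))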
The rest of the content of the file is the 'vaulttext'. The vaulttext is a text armored version of the
encrypted ciphertext. Each line is 80 characters wide, except for the last line which may be shorter.
Ansible Vault payload format 1.1 - 1.2
--------------------------------------
The vaulttext is a concatenation of the ciphertext and a SHA256 digest, with the result 'hexlified'.
'hexlify' refers to the ``hexlify()`` method of the Python Standard Library's `binascii <https://docs.python.org/3/library/binascii.html>`_ module.

The vaulttext is the hexlify()'ed result of:

- hexlify()'ed string of the salt, followed by a newline (``0x0a``)
- hexlify()'ed string of the crypted HMAC, followed by a newline. The HMAC is:

  - a `RFC2104 <https://www.ietf.org/rfc/rfc2104.txt>`_ style HMAC
  - inputs are:

    - The AES256 encrypted ciphertext
    - A PBKDF2 key. This key, the cipher key, and the cipher IV are generated from (see the sketch after this list):

      - the salt, in bytes
      - 10000 iterations
      - SHA256() algorithm
      - the first 32 bytes are the cipher key
      - the second 32 bytes are the HMAC key
      - the remaining 16 bytes are the cipher IV

- hexlify()'ed string of the ciphertext. The ciphertext is:

  - AES256 encrypted data. The data is encrypted using:

    - AES-CTR stream cipher
    - the cipher key
    - the IV
    - a 128 bit counter block seeded from an integer IV
    - the plaintext:

      - the original plaintext
      - padding up to the AES256 blocksize. (The data used for padding is based on `RFC5652 <https://tools.ietf.org/html/rfc5652#section-6.3>`_)
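The key-derivation step referenced in the list above can be summarized in a minimal sketch. This is not Ansible's implementation; it simply reproduces the stated parameters (the salt, 10000 iterations, SHA256, and the 32/32/16 byte split) using the ``cryptography`` package, and the function name is illustrative:

.. code-block:: python

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def derive_vault_keys(password: bytes, salt: bytes):
        """Derive the cipher key, HMAC key and cipher IV as described above."""
        kdf = PBKDF2HMAC(
            algorithm=hashes.SHA256(),  # SHA256() algorithm
            length=80,                  # 32 + 32 + 16 bytes of key material
            salt=salt,
            iterations=10000,           # 10000 iterations
        )
        key_material = kdf.derive(password)
        cipher_key = key_material[:32]   # first 32 bytes: AES256 cipher key
        hmac_key = key_material[32:64]   # second 32 bytes: HMAC key
        cipher_iv = key_material[64:]    # remaining 16 bytes: cipher IV
        return cipher_key, hmac_key, cipher_iv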
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,854 |
ansible-test sanity --requirements fails to install pylint
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
ansible-test sanity --requirements fails to install pylint, which is needed.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.2
config file = /Users/ssbarnea/.ansible.cfg
configured module search path = ['/Users/ssbarnea/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/ssbarnea/.pyenv/versions/3.9.0/lib/python3.9/site-packages/ansible
executable location = /Users/ssbarnea/.pyenv/versions/3.9.0/bin/ansible
python version = 3.9.0 (default, Oct 10 2020, 09:43:04) [Clang 11.0.0 (clang-1100.0.33.8)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
Adding ``--requirements`` is supposed to install missing requirements, but fails to install pylint.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
See documentation for help: https://docs.ansible.com/ansible/2.10/dev_guide/testing/sanity/pep8.html
Running sanity test 'pslint'
Running sanity test 'pylint' with Python 3.9
ERROR: Command "/Users/ssbarnea/.pyenv/versions/3.9.0/bin/python3.9 -m pylint --jobs 0 --reports n --max-line-length 160 --max-complexity 20 --rcfile /Users/ssbarnea/.pyenv/versions/3.9.0/lib/python3.9/site-packages/ansible_test/_data/sanity/pylint/config/collection.cfg --output-format json --load-plugins string_format,blacklist,deprecated,pylint.extensions.mccabe tests/unit/test_example.py --collection-name pycontribs.protogen --collection-version 0.0.1" returned exit status 1.
>>> Standard Error
/Users/ssbarnea/.pyenv/versions/3.9.0/bin/python3.9: No module named pylint
```
|
https://github.com/ansible/ansible/issues/72854
|
https://github.com/ansible/ansible/pull/72972
|
7eee2454f617569fd6889f2211f75bc02a35f9f8
|
37d09f24882c1f03be9900e610d53587cfa6bbd6
| 2020-12-04T13:00:47Z |
python
| 2020-12-15T18:27:32Z |
changelogs/fragments/ansible-test-pylint-python-3.8-3.9.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,854 |
ansible-test sanity --requirements fails to install pylint
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
ansible-test sanity --requirements fails to install pylint, which is needed.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.2
config file = /Users/ssbarnea/.ansible.cfg
configured module search path = ['/Users/ssbarnea/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/ssbarnea/.pyenv/versions/3.9.0/lib/python3.9/site-packages/ansible
executable location = /Users/ssbarnea/.pyenv/versions/3.9.0/bin/ansible
python version = 3.9.0 (default, Oct 10 2020, 09:43:04) [Clang 11.0.0 (clang-1100.0.33.8)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
Adding ``--requirements`` is supposed to install missing requirements, but fails to install pylint.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
See documentation for help: https://docs.ansible.com/ansible/2.10/dev_guide/testing/sanity/pep8.html
Running sanity test 'pslint'
Running sanity test 'pylint' with Python 3.9
ERROR: Command "/Users/ssbarnea/.pyenv/versions/3.9.0/bin/python3.9 -m pylint --jobs 0 --reports n --max-line-length 160 --max-complexity 20 --rcfile /Users/ssbarnea/.pyenv/versions/3.9.0/lib/python3.9/site-packages/ansible_test/_data/sanity/pylint/config/collection.cfg --output-format json --load-plugins string_format,blacklist,deprecated,pylint.extensions.mccabe tests/unit/test_example.py --collection-name pycontribs.protogen --collection-version 0.0.1" returned exit status 1.
>>> Standard Error
/Users/ssbarnea/.pyenv/versions/3.9.0/bin/python3.9: No module named pylint
```
|
https://github.com/ansible/ansible/issues/72854
|
https://github.com/ansible/ansible/pull/72972
|
7eee2454f617569fd6889f2211f75bc02a35f9f8
|
37d09f24882c1f03be9900e610d53587cfa6bbd6
| 2020-12-04T13:00:47Z |
python
| 2020-12-15T18:27:32Z |
test/integration/targets/ansible-test/ansible_collections/ns/col/plugins/filter/check_pylint.py
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,854 |
ansible-test sanity --requirements fails to install pylint
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
ansible-test sanity --requirements fails to install pylint, which is needed.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.2
config file = /Users/ssbarnea/.ansible.cfg
configured module search path = ['/Users/ssbarnea/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/ssbarnea/.pyenv/versions/3.9.0/lib/python3.9/site-packages/ansible
executable location = /Users/ssbarnea/.pyenv/versions/3.9.0/bin/ansible
python version = 3.9.0 (default, Oct 10 2020, 09:43:04) [Clang 11.0.0 (clang-1100.0.33.8)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
Adding ``--requirements`` is supposed to install missing requirements, but fails to install pylint.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
See documentation for help: https://docs.ansible.com/ansible/2.10/dev_guide/testing/sanity/pep8.html
Running sanity test 'pslint'
Running sanity test 'pylint' with Python 3.9
ERROR: Command "/Users/ssbarnea/.pyenv/versions/3.9.0/bin/python3.9 -m pylint --jobs 0 --reports n --max-line-length 160 --max-complexity 20 --rcfile /Users/ssbarnea/.pyenv/versions/3.9.0/lib/python3.9/site-packages/ansible_test/_data/sanity/pylint/config/collection.cfg --output-format json --load-plugins string_format,blacklist,deprecated,pylint.extensions.mccabe tests/unit/test_example.py --collection-name pycontribs.protogen --collection-version 0.0.1" returned exit status 1.
>>> Standard Error
/Users/ssbarnea/.pyenv/versions/3.9.0/bin/python3.9: No module named pylint
```
|
https://github.com/ansible/ansible/issues/72854
|
https://github.com/ansible/ansible/pull/72972
|
7eee2454f617569fd6889f2211f75bc02a35f9f8
|
37d09f24882c1f03be9900e610d53587cfa6bbd6
| 2020-12-04T13:00:47Z |
python
| 2020-12-15T18:27:32Z |
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/sanity/ignore.txt
|
plugins/modules/bad.py import
plugins/modules/bad.py pylint:ansible-bad-module-import
tests/integration/targets/hello/files/bad.py pylint:ansible-bad-function
tests/integration/targets/hello/files/bad.py pylint:ansible-bad-import
tests/integration/targets/hello/files/bad.py pylint:ansible-bad-import-from
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,854 |
ansible-test sanity --requirements fails to install pylint
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
ansible-test sanity --requirements fails to install pylint, which is needed.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.2
config file = /Users/ssbarnea/.ansible.cfg
configured module search path = ['/Users/ssbarnea/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/ssbarnea/.pyenv/versions/3.9.0/lib/python3.9/site-packages/ansible
executable location = /Users/ssbarnea/.pyenv/versions/3.9.0/bin/ansible
python version = 3.9.0 (default, Oct 10 2020, 09:43:04) [Clang 11.0.0 (clang-1100.0.33.8)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
Adding ``--requirements`` is supposed to install missing requirements, but fails to install pylint.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
See documentation for help: https://docs.ansible.com/ansible/2.10/dev_guide/testing/sanity/pep8.html
Running sanity test 'pslint'
Running sanity test 'pylint' with Python 3.9
ERROR: Command "/Users/ssbarnea/.pyenv/versions/3.9.0/bin/python3.9 -m pylint --jobs 0 --reports n --max-line-length 160 --max-complexity 20 --rcfile /Users/ssbarnea/.pyenv/versions/3.9.0/lib/python3.9/site-packages/ansible_test/_data/sanity/pylint/config/collection.cfg --output-format json --load-plugins string_format,blacklist,deprecated,pylint.extensions.mccabe tests/unit/test_example.py --collection-name pycontribs.protogen --collection-version 0.0.1" returned exit status 1.
>>> Standard Error
/Users/ssbarnea/.pyenv/versions/3.9.0/bin/python3.9: No module named pylint
```
|
https://github.com/ansible/ansible/issues/72854
|
https://github.com/ansible/ansible/pull/72972
|
7eee2454f617569fd6889f2211f75bc02a35f9f8
|
37d09f24882c1f03be9900e610d53587cfa6bbd6
| 2020-12-04T13:00:47Z |
python
| 2020-12-15T18:27:32Z |
test/lib/ansible_test/_data/requirements/constraints.txt
|
coverage >= 4.5.1, < 5.0.0 ; python_version < '3.7' # coverage 4.4 required for "disable_warnings" support but 4.5.1 needed for bug fixes, coverage 5.0+ incompatible
coverage >= 4.5.2, < 5.0.0 ; python_version == '3.7' # coverage 4.5.2 fixes bugs in support for python 3.7, coverage 5.0+ incompatible
coverage >= 4.5.4, < 5.0.0 ; python_version > '3.7' # coverage had a bug in < 4.5.4 that would cause unit tests to hang in Python 3.8, coverage 5.0+ incompatible
cryptography < 2.2 ; python_version < '2.7' # cryptography 2.2 drops support for python 2.6
# do not add a cryptography constraint here unless it is for python version incompatibility, see the get_cryptography_requirement function in executor.py for details
deepdiff < 4.0.0 ; python_version < '3' # deepdiff 4.0.0 and later require python 3
jinja2 < 2.11 ; python_version < '2.7' # jinja2 2.11 and later require python 2.7 or later
urllib3 < 1.24 ; python_version < '2.7' # urllib3 1.24 and later require python 2.7 or later
pywinrm >= 0.3.0 # message encryption support
sphinx < 1.6 ; python_version < '2.7' # sphinx 1.6 and later require python 2.7 or later
sphinx < 1.8 ; python_version >= '2.7' # sphinx 1.8 and later are currently incompatible with rstcheck 3.3
pygments >= 2.4.0 # Pygments 2.4.0 includes bugfixes for YAML and YAML+Jinja lexers
wheel < 0.30.0 ; python_version < '2.7' # wheel 0.30.0 and later require python 2.7 or later
yamllint != 1.8.0, < 1.14.0 ; python_version < '2.7' # yamllint 1.8.0 and 1.14.0+ require python 2.7+
pycrypto >= 2.6 # Need features found in 2.6 and greater
ncclient >= 0.5.2 # Need features added in 0.5.2 and greater
idna < 2.6, >= 2.5 # linode requires idna < 2.9, >= 2.5, requests requires idna < 2.6, but cryptography will cause the latest version to be installed instead
paramiko < 2.4.0 ; python_version < '2.7' # paramiko 2.4.0 drops support for python 2.6
pytest < 3.3.0 ; python_version < '2.7' # pytest 3.3.0 drops support for python 2.6
pytest < 5.0.0 ; python_version == '2.7' # pytest 5.0.0 and later will no longer support python 2.7
pytest-forked < 1.0.2 ; python_version < '2.7' # pytest-forked 1.0.2 and later require python 2.7 or later
pytest-forked >= 1.0.2 ; python_version >= '2.7' # pytest-forked before 1.0.2 does not work with pytest 4.2.0+ (which requires python 2.7+)
ntlm-auth >= 1.3.0 # message encryption support using cryptography
requests < 2.20.0 ; python_version < '2.7' # requests 2.20.0 drops support for python 2.6
requests-ntlm >= 1.1.0 # message encryption support
requests-credssp >= 0.1.0 # message encryption support
voluptuous >= 0.11.0 # Schema recursion via Self
openshift >= 0.6.2, < 0.9.0 # merge_type support
virtualenv < 16.0.0 ; python_version < '2.7' # virtualenv 16.0.0 and later require python 2.7 or later
pathspec < 0.6.0 ; python_version < '2.7' # pathspec 0.6.0 and later require python 2.7 or later
pyopenssl < 18.0.0 ; python_version < '2.7' # pyOpenSSL 18.0.0 and later require python 2.7 or later
pyparsing < 3.0.0 ; python_version < '3.5' # pyparsing 3 and later require python 3.5 or later
pyfmg == 0.6.1 # newer versions do not pass current unit tests
pyyaml < 5.1 ; python_version < '2.7' # pyyaml 5.1 and later require python 2.7 or later
pycparser < 2.19 ; python_version < '2.7' # pycparser 2.19 and later require python 2.7 or later
mock >= 2.0.0 # needed for features backported from Python 3.6 unittest.mock (assert_called, assert_called_once...)
pytest-mock >= 1.4.0 # needed for mock_use_standalone_module pytest option
xmltodict < 0.12.0 ; python_version < '2.7' # xmltodict 0.12.0 and later require python 2.7 or later
lxml < 4.3.0 ; python_version < '2.7' # lxml 4.3.0 and later require python 2.7 or later
pyvmomi < 6.0.0 ; python_version < '2.7' # pyvmomi 6.0.0 and later require python 2.7 or later
pyone == 1.1.9 # newer versions do not pass current integration tests
boto3 < 1.11 ; python_version < '2.7' # boto3 1.11 drops Python 2.6 support
botocore >= 1.10.0, < 1.14 ; python_version < '2.7' # adds support for the following AWS services: secretsmanager, fms, and acm-pca; botocore 1.14 drops Python 2.6 support
botocore >= 1.10.0 ; python_version >= '2.7' # adds support for the following AWS services: secretsmanager, fms, and acm-pca
setuptools < 37 ; python_version == '2.6' # setuptools 37 and later require python 2.7 or later
setuptools < 45 ; python_version == '2.7' # setuptools 45 and later require python 3.5 or later
gssapi < 1.6.0 ; python_version <= '2.7' # gssapi 1.6.0 and later require python 3 or later
# freeze antsibull-changelog for consistent test results
antsibull-changelog == 0.7.0
# Make sure we have a new enough antsibull for the CLI args we use
antsibull >= 0.21.0
# freeze pylint and its requirements for consistent test results
astroid == 2.2.5
isort == 4.3.15
lazy-object-proxy == 1.3.1
mccabe == 0.6.1
pylint == 2.3.1
typed-ast == 1.4.0 # 1.4.0 is required to compile on Python 3.8
wrapt == 1.11.1
# freeze pycodestyle for consistent test results
pycodestyle == 2.6.0
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,854 |
ansible-test sanity --requirements fails to install pylint
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
ansible-test sanity --requirements fails to install pylint, which is needed.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.2
config file = /Users/ssbarnea/.ansible.cfg
configured module search path = ['/Users/ssbarnea/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/ssbarnea/.pyenv/versions/3.9.0/lib/python3.9/site-packages/ansible
executable location = /Users/ssbarnea/.pyenv/versions/3.9.0/bin/ansible
python version = 3.9.0 (default, Oct 10 2020, 09:43:04) [Clang 11.0.0 (clang-1100.0.33.8)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
Adding ``--requirements`` is supposed to install missing requirements, but fails to install pylint.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
See documentation for help: https://docs.ansible.com/ansible/2.10/dev_guide/testing/sanity/pep8.html
Running sanity test 'pslint'
Running sanity test 'pylint' with Python 3.9
ERROR: Command "/Users/ssbarnea/.pyenv/versions/3.9.0/bin/python3.9 -m pylint --jobs 0 --reports n --max-line-length 160 --max-complexity 20 --rcfile /Users/ssbarnea/.pyenv/versions/3.9.0/lib/python3.9/site-packages/ansible_test/_data/sanity/pylint/config/collection.cfg --output-format json --load-plugins string_format,blacklist,deprecated,pylint.extensions.mccabe tests/unit/test_example.py --collection-name pycontribs.protogen --collection-version 0.0.1" returned exit status 1.
>>> Standard Error
/Users/ssbarnea/.pyenv/versions/3.9.0/bin/python3.9: No module named pylint
```
|
https://github.com/ansible/ansible/issues/72854
|
https://github.com/ansible/ansible/pull/72972
|
7eee2454f617569fd6889f2211f75bc02a35f9f8
|
37d09f24882c1f03be9900e610d53587cfa6bbd6
| 2020-12-04T13:00:47Z |
python
| 2020-12-15T18:27:32Z |
test/lib/ansible_test/_data/requirements/sanity.pylint.txt
|
pylint ; python_version < '3.9' # installation fails on python 3.9.0b1
pyyaml # needed for collection_detail.py
mccabe # pylint complexity testing
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,854 |
ansible-test sanity --requirements fails to install pylint
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
ansible-test sanity --requirements fails to install pylint, which is needed.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.2
config file = /Users/ssbarnea/.ansible.cfg
configured module search path = ['/Users/ssbarnea/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/ssbarnea/.pyenv/versions/3.9.0/lib/python3.9/site-packages/ansible
executable location = /Users/ssbarnea/.pyenv/versions/3.9.0/bin/ansible
python version = 3.9.0 (default, Oct 10 2020, 09:43:04) [Clang 11.0.0 (clang-1100.0.33.8)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
Adding ``--requirements`` is supposed to install missing requirements, but fails to install pylint.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
See documentation for help: https://docs.ansible.com/ansible/2.10/dev_guide/testing/sanity/pep8.html
Running sanity test 'pslint'
Running sanity test 'pylint' with Python 3.9
ERROR: Command "/Users/ssbarnea/.pyenv/versions/3.9.0/bin/python3.9 -m pylint --jobs 0 --reports n --max-line-length 160 --max-complexity 20 --rcfile /Users/ssbarnea/.pyenv/versions/3.9.0/lib/python3.9/site-packages/ansible_test/_data/sanity/pylint/config/collection.cfg --output-format json --load-plugins string_format,blacklist,deprecated,pylint.extensions.mccabe tests/unit/test_example.py --collection-name pycontribs.protogen --collection-version 0.0.1" returned exit status 1.
>>> Standard Error
/Users/ssbarnea/.pyenv/versions/3.9.0/bin/python3.9: No module named pylint
```
|
https://github.com/ansible/ansible/issues/72854
|
https://github.com/ansible/ansible/pull/72972
|
7eee2454f617569fd6889f2211f75bc02a35f9f8
|
37d09f24882c1f03be9900e610d53587cfa6bbd6
| 2020-12-04T13:00:47Z |
python
| 2020-12-15T18:27:32Z |
test/lib/ansible_test/_internal/sanity/pylint.py
|
"""Sanity test using pylint."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import itertools
import json
import os
import datetime
from .. import types as t
from ..sanity import (
SanitySingleVersion,
SanityMessage,
SanityFailure,
SanitySuccess,
SANITY_ROOT,
)
from ..target import (
TestTarget,
)
from ..util import (
SubprocessError,
display,
ConfigParser,
is_subdir,
find_python,
)
from ..util_common import (
run_command,
)
from ..ansible_util import (
ansible_environment,
get_collection_detail,
CollectionDetail,
CollectionDetailError,
)
from ..config import (
SanityConfig,
)
from ..data import (
data_context,
)
class PylintTest(SanitySingleVersion):
"""Sanity test using pylint."""
def __init__(self):
super(PylintTest, self).__init__()
self.optional_error_codes.update([
'ansible-deprecated-date',
'too-complex',
])
@property
def error_code(self): # type: () -> t.Optional[str]
"""Error code for ansible-test matching the format used by the underlying test program, or None if the program does not use error codes."""
return 'ansible-test'
def filter_targets(self, targets): # type: (t.List[TestTarget]) -> t.List[TestTarget]
"""Return the given list of test targets, filtered to include only those relevant for the test."""
return [target for target in targets if os.path.splitext(target.path)[1] == '.py' or is_subdir(target.path, 'bin')]
def test(self, args, targets, python_version):
"""
:type args: SanityConfig
:type targets: SanityTargets
:type python_version: str
:rtype: TestResult
"""
plugin_dir = os.path.join(SANITY_ROOT, 'pylint', 'plugins')
plugin_names = sorted(p[0] for p in [
os.path.splitext(p) for p in os.listdir(plugin_dir)] if p[1] == '.py' and p[0] != '__init__')
settings = self.load_processor(args)
paths = [target.path for target in targets.include]
module_paths = [os.path.relpath(p, data_context().content.module_path).split(os.path.sep) for p in
paths if is_subdir(p, data_context().content.module_path)]
module_dirs = sorted(set([p[0] for p in module_paths if len(p) > 1]))
large_module_group_threshold = 500
large_module_groups = [key for key, value in
itertools.groupby(module_paths, lambda p: p[0] if len(p) > 1 else '') if len(list(value)) > large_module_group_threshold]
large_module_group_paths = [os.path.relpath(p, data_context().content.module_path).split(os.path.sep) for p in paths
if any(is_subdir(p, os.path.join(data_context().content.module_path, g)) for g in large_module_groups)]
large_module_group_dirs = sorted(set([os.path.sep.join(p[:2]) for p in large_module_group_paths if len(p) > 2]))
contexts = []
remaining_paths = set(paths)
def add_context(available_paths, context_name, context_filter):
"""
:type available_paths: set[str]
:type context_name: str
:type context_filter: (str) -> bool
"""
filtered_paths = set(p for p in available_paths if context_filter(p))
contexts.append((context_name, sorted(filtered_paths)))
available_paths -= filtered_paths
def filter_path(path_filter=None):
"""
:type path_filter: str
:rtype: (str) -> bool
"""
def context_filter(path_to_filter):
"""
:type path_to_filter: str
:rtype: bool
"""
return is_subdir(path_to_filter, path_filter)
return context_filter
for large_module_dir in large_module_group_dirs:
add_context(remaining_paths, 'modules/%s' % large_module_dir, filter_path(os.path.join(data_context().content.module_path, large_module_dir)))
for module_dir in module_dirs:
add_context(remaining_paths, 'modules/%s' % module_dir, filter_path(os.path.join(data_context().content.module_path, module_dir)))
add_context(remaining_paths, 'modules', filter_path(data_context().content.module_path))
add_context(remaining_paths, 'module_utils', filter_path(data_context().content.module_utils_path))
add_context(remaining_paths, 'units', filter_path(data_context().content.unit_path))
if data_context().content.collection:
add_context(remaining_paths, 'collection', lambda p: True)
else:
add_context(remaining_paths, 'validate-modules', filter_path('test/lib/ansible_test/_data/sanity/validate-modules/'))
add_context(remaining_paths, 'validate-modules-unit', filter_path('test/lib/ansible_test/tests/validate-modules-unit/'))
add_context(remaining_paths, 'sanity', filter_path('test/lib/ansible_test/_data/sanity/'))
add_context(remaining_paths, 'ansible-test', filter_path('test/lib/'))
add_context(remaining_paths, 'test', filter_path('test/'))
add_context(remaining_paths, 'hacking', filter_path('hacking/'))
add_context(remaining_paths, 'ansible', lambda p: True)
messages = []
context_times = []
python = find_python(python_version)
collection_detail = None
if data_context().content.collection:
try:
collection_detail = get_collection_detail(args, python)
if not collection_detail.version:
display.warning('Skipping pylint collection version checks since no collection version was found.')
except CollectionDetailError as ex:
display.warning('Skipping pylint collection version checks since collection detail loading failed: %s' % ex.reason)
test_start = datetime.datetime.utcnow()
for context, context_paths in sorted(contexts):
if not context_paths:
continue
context_start = datetime.datetime.utcnow()
messages += self.pylint(args, context, context_paths, plugin_dir, plugin_names, python, collection_detail)
context_end = datetime.datetime.utcnow()
context_times.append('%s: %d (%s)' % (context, len(context_paths), context_end - context_start))
test_end = datetime.datetime.utcnow()
for context_time in context_times:
display.info(context_time, verbosity=4)
display.info('total: %d (%s)' % (len(paths), test_end - test_start), verbosity=4)
errors = [SanityMessage(
message=m['message'].replace('\n', ' '),
path=m['path'],
line=int(m['line']),
column=int(m['column']),
level=m['type'],
code=m['symbol'],
) for m in messages]
if args.explain:
return SanitySuccess(self.name)
errors = settings.process_errors(errors, paths)
if errors:
return SanityFailure(self.name, messages=errors)
return SanitySuccess(self.name)
@staticmethod
def pylint(
args, # type: SanityConfig
context, # type: str
paths, # type: t.List[str]
plugin_dir, # type: str
plugin_names, # type: t.List[str]
python, # type: str
collection_detail, # type: CollectionDetail
): # type: (...) -> t.List[t.Dict[str, str]]
"""Run pylint using the config specified by the context on the specified paths."""
rcfile = os.path.join(SANITY_ROOT, 'pylint', 'config', context.split('/')[0] + '.cfg')
if not os.path.exists(rcfile):
if data_context().content.collection:
rcfile = os.path.join(SANITY_ROOT, 'pylint', 'config', 'collection.cfg')
else:
rcfile = os.path.join(SANITY_ROOT, 'pylint', 'config', 'default.cfg')
parser = ConfigParser()
parser.read(rcfile)
if parser.has_section('ansible-test'):
config = dict(parser.items('ansible-test'))
else:
config = dict()
disable_plugins = set(i.strip() for i in config.get('disable-plugins', '').split(',') if i)
load_plugins = set(plugin_names + ['pylint.extensions.mccabe']) - disable_plugins
cmd = [
python,
'-m', 'pylint',
'--jobs', '0',
'--reports', 'n',
'--max-line-length', '160',
'--max-complexity', '20',
'--rcfile', rcfile,
'--output-format', 'json',
'--load-plugins', ','.join(load_plugins),
] + paths
if data_context().content.collection:
cmd.extend(['--collection-name', data_context().content.collection.full_name])
if collection_detail and collection_detail.version:
cmd.extend(['--collection-version', collection_detail.version])
append_python_path = [plugin_dir]
if data_context().content.collection:
append_python_path.append(data_context().content.collection.root)
env = ansible_environment(args)
env['PYTHONPATH'] += os.path.pathsep + os.path.pathsep.join(append_python_path)
# expose plugin paths for use in custom plugins
env.update(dict(('ANSIBLE_TEST_%s_PATH' % k.upper(), os.path.abspath(v) + os.path.sep) for k, v in data_context().content.plugin_paths.items()))
if paths:
display.info('Checking %d file(s) in context "%s" with config: %s' % (len(paths), context, rcfile), verbosity=1)
try:
stdout, stderr = run_command(args, cmd, env=env, capture=True)
status = 0
except SubprocessError as ex:
stdout = ex.stdout
stderr = ex.stderr
status = ex.status
if stderr or status >= 32:
raise SubprocessError(cmd=cmd, status=status, stderr=stderr, stdout=stdout)
else:
stdout = None
if not args.explain and stdout:
messages = json.loads(stdout)
else:
messages = []
return messages
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,854 |
ansible-test sanity --requirements fails to install pylint
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
ansible-test sanity --requirements fails to install pylint, which is needed.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.2
config file = /Users/ssbarnea/.ansible.cfg
configured module search path = ['/Users/ssbarnea/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/ssbarnea/.pyenv/versions/3.9.0/lib/python3.9/site-packages/ansible
executable location = /Users/ssbarnea/.pyenv/versions/3.9.0/bin/ansible
python version = 3.9.0 (default, Oct 10 2020, 09:43:04) [Clang 11.0.0 (clang-1100.0.33.8)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
Adding ``--requirements`` is supposed to install missing requirements, but fails to install pylint.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
See documentation for help: https://docs.ansible.com/ansible/2.10/dev_guide/testing/sanity/pep8.html
Running sanity test 'pslint'
Running sanity test 'pylint' with Python 3.9
ERROR: Command "/Users/ssbarnea/.pyenv/versions/3.9.0/bin/python3.9 -m pylint --jobs 0 --reports n --max-line-length 160 --max-complexity 20 --rcfile /Users/ssbarnea/.pyenv/versions/3.9.0/lib/python3.9/site-packages/ansible_test/_data/sanity/pylint/config/collection.cfg --output-format json --load-plugins string_format,blacklist,deprecated,pylint.extensions.mccabe tests/unit/test_example.py --collection-name pycontribs.protogen --collection-version 0.0.1" returned exit status 1.
>>> Standard Error
/Users/ssbarnea/.pyenv/versions/3.9.0/bin/python3.9: No module named pylint
```
|
https://github.com/ansible/ansible/issues/72854
|
https://github.com/ansible/ansible/pull/72972
|
7eee2454f617569fd6889f2211f75bc02a35f9f8
|
37d09f24882c1f03be9900e610d53587cfa6bbd6
| 2020-12-04T13:00:47Z |
python
| 2020-12-15T18:27:32Z |
test/sanity/ignore.txt
|
docs/docsite/rst/dev_guide/testing/sanity/no-smart-quotes.rst no-smart-quotes
examples/play.yml shebang
examples/scripts/my_test.py shebang # example module but not in a normal module location
examples/scripts/my_test_facts.py shebang # example module but not in a normal module location
examples/scripts/my_test_info.py shebang # example module but not in a normal module location
examples/scripts/ConfigureRemotingForAnsible.ps1 pslint:PSCustomUseLiteralPath
examples/scripts/upgrade_to_ps3.ps1 pslint:PSCustomUseLiteralPath
examples/scripts/upgrade_to_ps3.ps1 pslint:PSUseApprovedVerbs
hacking/build-ansible.py shebang # only run by release engineers, Python 3.6+ required
hacking/build_library/build_ansible/announce.py compile-2.6!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/announce.py compile-2.7!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/announce.py compile-3.5!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/dump_config.py compile-2.6!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/dump_config.py compile-2.7!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/dump_config.py compile-3.5!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/dump_keywords.py compile-2.6!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/dump_keywords.py compile-2.7!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/dump_keywords.py compile-3.5!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/generate_man.py compile-2.6!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/generate_man.py compile-2.7!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/generate_man.py compile-3.5!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/porting_guide.py compile-2.6!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/porting_guide.py compile-2.7!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/porting_guide.py compile-3.5!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/release_announcement.py compile-2.6!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/release_announcement.py compile-2.7!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/release_announcement.py compile-3.5!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/update_intersphinx.py compile-2.6!skip # release process and docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/update_intersphinx.py compile-2.7!skip # release process and docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/update_intersphinx.py compile-3.5!skip # release process and docs build only, 3.6+ required
hacking/build_library/build_ansible/commands.py compile-2.6!skip # release and docs process only, 3.6+ required
hacking/build_library/build_ansible/commands.py compile-2.7!skip # release and docs process only, 3.6+ required
hacking/build_library/build_ansible/commands.py compile-3.5!skip # release and docs process only, 3.6+ required
lib/ansible/keyword_desc.yml no-unwanted-files
lib/ansible/cli/console.py pylint:blacklisted-name
lib/ansible/cli/scripts/ansible_cli_stub.py shebang
lib/ansible/cli/scripts/ansible_connection_cli_stub.py shebang
lib/ansible/config/base.yml no-unwanted-files
lib/ansible/executor/playbook_executor.py pylint:blacklisted-name
lib/ansible/executor/powershell/async_watchdog.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/powershell/async_wrapper.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/powershell/exec_wrapper.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/task_queue_manager.py pylint:blacklisted-name
lib/ansible/galaxy/collection/__init__.py compile-2.6!skip # 'ansible-galaxy collection' requires 2.7+
lib/ansible/module_utils/compat/_selectors2.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/compat/_selectors2.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/compat/_selectors2.py pylint:blacklisted-name
lib/ansible/module_utils/distro/__init__.py empty-init # breaks namespacing, bundled, do not override
lib/ansible/module_utils/distro/_distro.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/distro/_distro.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/distro/_distro.py no-assert
lib/ansible/module_utils/distro/_distro.py pep8!skip # bundled code we don't want to modify
lib/ansible/module_utils/facts/__init__.py empty-init # breaks namespacing, deprecate and eventually remove
lib/ansible/module_utils/facts/network/linux.py pylint:blacklisted-name
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.ArgvParser.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.CommandUtil.psm1 pslint:PSProvideCommentHelp # need to agree on best format for comment location
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.CommandUtil.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.FileUtil.psm1 pslint:PSCustomUseLiteralPath
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.FileUtil.psm1 pslint:PSProvideCommentHelp
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1 pslint:PSCustomUseLiteralPath
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.LinkUtil.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/pycompat24.py no-get-exception
lib/ansible/module_utils/six/__init__.py empty-init # breaks namespacing, bundled, do not override
lib/ansible/module_utils/six/__init__.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/six/__init__.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/six/__init__.py no-basestring
lib/ansible/module_utils/six/__init__.py no-dict-iteritems
lib/ansible/module_utils/six/__init__.py no-dict-iterkeys
lib/ansible/module_utils/six/__init__.py no-dict-itervalues
lib/ansible/module_utils/six/__init__.py replace-urlopen
lib/ansible/module_utils/urls.py pylint:blacklisted-name
lib/ansible/module_utils/urls.py replace-urlopen
lib/ansible/modules/apt.py validate-modules:parameter-invalid
lib/ansible/modules/apt_key.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/apt_repository.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/apt_repository.py validate-modules:parameter-invalid
lib/ansible/modules/apt_repository.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/apt_repository.py validate-modules:undocumented-parameter
lib/ansible/modules/assemble.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/async_status.py use-argspec-type-path
lib/ansible/modules/async_status.py validate-modules!skip
lib/ansible/modules/async_wrapper.py ansible-doc!skip # not an actual module
lib/ansible/modules/async_wrapper.py pylint:ansible-bad-function # ignore, required
lib/ansible/modules/async_wrapper.py use-argspec-type-path
lib/ansible/modules/blockinfile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/blockinfile.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/command.py validate-modules:doc-default-does-not-match-spec # _uses_shell is undocumented
lib/ansible/modules/command.py validate-modules:doc-missing-type
lib/ansible/modules/command.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/command.py validate-modules:undocumented-parameter
lib/ansible/modules/copy.py pylint:blacklisted-name
lib/ansible/modules/copy.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/copy.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/copy.py validate-modules:undocumented-parameter
lib/ansible/modules/dnf.py validate-modules:doc-required-mismatch
lib/ansible/modules/dnf.py validate-modules:parameter-invalid
lib/ansible/modules/file.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/file.py validate-modules:undocumented-parameter
lib/ansible/modules/find.py use-argspec-type-path # fix needed
lib/ansible/modules/git.py pylint:blacklisted-name
lib/ansible/modules/git.py use-argspec-type-path
lib/ansible/modules/git.py validate-modules:doc-missing-type
lib/ansible/modules/git.py validate-modules:doc-required-mismatch
lib/ansible/modules/hostname.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/iptables.py pylint:blacklisted-name
lib/ansible/modules/lineinfile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/lineinfile.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/lineinfile.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/package_facts.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/pip.py pylint:blacklisted-name
lib/ansible/modules/pip.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/replace.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/service.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/service.py validate-modules:use-run-command-not-popen
lib/ansible/modules/stat.py validate-modules:doc-default-does-not-match-spec # get_md5 is undocumented
lib/ansible/modules/stat.py validate-modules:parameter-invalid
lib/ansible/modules/stat.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/stat.py validate-modules:undocumented-parameter
lib/ansible/modules/systemd.py validate-modules:parameter-invalid
lib/ansible/modules/systemd.py validate-modules:return-syntax-error
lib/ansible/modules/sysvinit.py validate-modules:return-syntax-error
lib/ansible/modules/unarchive.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/uri.py pylint:blacklisted-name
lib/ansible/modules/uri.py validate-modules:doc-required-mismatch
lib/ansible/modules/user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/user.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/user.py validate-modules:use-run-command-not-popen
lib/ansible/modules/yum.py pylint:blacklisted-name
lib/ansible/modules/yum.py validate-modules:parameter-invalid
lib/ansible/modules/yum_repository.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/yum_repository.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/yum_repository.py validate-modules:undocumented-parameter
lib/ansible/parsing/vault/__init__.py pylint:blacklisted-name
lib/ansible/playbook/base.py pylint:blacklisted-name
lib/ansible/playbook/collectionsearch.py required-and-default-attributes # https://github.com/ansible/ansible/issues/61460
lib/ansible/playbook/helpers.py pylint:blacklisted-name
lib/ansible/playbook/role/__init__.py pylint:blacklisted-name
lib/ansible/plugins/action/normal.py action-plugin-docs # default action plugin for modules without a dedicated action plugin
lib/ansible/plugins/cache/base.py ansible-doc!skip # not a plugin, but a stub for backwards compatibility
lib/ansible/plugins/lookup/sequence.py pylint:blacklisted-name
lib/ansible/plugins/strategy/__init__.py pylint:blacklisted-name
lib/ansible/plugins/strategy/linear.py pylint:blacklisted-name
lib/ansible/vars/hostvars.py pylint:blacklisted-name
test/integration/targets/ansible-test-docker/ansible_collections/ns/col/plugins/modules/hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-docker/ansible_collections/ns/col/tests/unit/plugins/module_utils/test_my_util.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-docker/ansible_collections/ns/col/tests/unit/plugins/modules/test_hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test/ansible_collections/ns/col/plugins/modules/hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-function # ignore, required for testing
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-import # ignore, required for testing
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-import-from # ignore, required for testing
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/unit/plugins/module_utils/test_my_util.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/unit/plugins/modules/test_hello.py pylint:relative-beyond-top-level
test/integration/targets/collections_plugin_namespace/collection_root/ansible_collections/my_ns/my_col/plugins/lookup/lookup_no_future_boilerplate.py future-import-boilerplate # testing Python 2.x implicit relative imports
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/module_utils/my_util2.py pylint:relative-beyond-top-level
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/module_utils/my_util3.py pylint:relative-beyond-top-level
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/modules/my_module.py pylint:relative-beyond-top-level
test/integration/targets/gathering_facts/library/bogus_facts shebang
test/integration/targets/gathering_facts/library/facts_one shebang
test/integration/targets/gathering_facts/library/facts_two shebang
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.0/DSCResources/ANSIBLE_xSetReboot/ANSIBLE_xSetReboot.psm1 pslint!skip
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.0/DSCResources/ANSIBLE_xTestResource/ANSIBLE_xTestResource.psm1 pslint!skip
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.0/xTestDsc.psd1 pslint!skip
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.1/DSCResources/ANSIBLE_xTestResource/ANSIBLE_xTestResource.psm1 pslint!skip
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.1/xTestDsc.psd1 pslint!skip
test/integration/targets/incidental_win_ping/library/win_ping_syntax_error.ps1 pslint!skip
test/integration/targets/incidental_win_reboot/templates/post_reboot.ps1 pslint!skip
test/integration/targets/lookup_csvfile/files/crlf.csv line-endings
test/integration/targets/lookup_ini/lookup-8859-15.ini no-smart-quotes
test/integration/targets/module_precedence/lib_with_extension/a.ini shebang
test/integration/targets/module_precedence/lib_with_extension/ping.ini shebang
test/integration/targets/module_precedence/roles_with_extension/foo/library/a.ini shebang
test/integration/targets/module_precedence/roles_with_extension/foo/library/ping.ini shebang
test/integration/targets/module_utils/library/test.py future-import-boilerplate # allow testing of Python 2.x implicit relative imports
test/integration/targets/module_utils/module_utils/bar0/foo.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/foo.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/sub/bar/__init__.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/sub/bar/bar.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/yak/zebra/foo.py pylint:blacklisted-name
test/integration/targets/old_style_modules_posix/library/helloworld.sh shebang
test/integration/targets/template/files/encoding_1252_utf-8.expected no-smart-quotes
test/integration/targets/template/files/encoding_1252_windows-1252.expected no-smart-quotes
test/integration/targets/template/files/foo.dos.txt line-endings
test/integration/targets/template/templates/encoding_1252.j2 no-smart-quotes
test/integration/targets/unicode/unicode.yml no-smart-quotes
test/integration/targets/win_exec_wrapper/library/test_fail.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_exec_wrapper/tasks/main.yml no-smart-quotes # We are explicitly testing smart quote support for env vars
test/integration/targets/win_fetch/tasks/main.yml no-smart-quotes # We are explicitly testing smart quotes in the file name to fetch
test/integration/targets/win_module_utils/library/legacy_only_new_way_win_line_ending.ps1 line-endings # Explicitly tests that we still work with Windows line endings
test/integration/targets/win_module_utils/library/legacy_only_old_way_win_line_ending.ps1 line-endings # Explicitly tests that we still work with Windows line endings
test/integration/targets/win_script/files/test_script.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/win_script/files/test_script_creates_file.ps1 pslint:PSAvoidUsingCmdletAliases
test/integration/targets/win_script/files/test_script_removes_file.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_script/files/test_script_with_args.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/win_script/files/test_script_with_splatting.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/windows-minimal/library/win_ping_syntax_error.ps1 pslint!skip
test/lib/ansible_test/_data/requirements/constraints.txt test-constraints
test/lib/ansible_test/_data/requirements/integration.cloud.azure.txt test-constraints
test/lib/ansible_test/_data/requirements/sanity.ps1 pslint:PSCustomUseLiteralPath # Uses wildcards on purpose
test/lib/ansible_test/_data/sanity/pylint/plugins/string_format.py use-compat-six
test/lib/ansible_test/_data/setup/ConfigureRemotingForAnsible.ps1 pslint:PSCustomUseLiteralPath
test/lib/ansible_test/_data/setup/windows-httptester.ps1 pslint:PSCustomUseLiteralPath
test/support/integration/plugins/module_utils/cloud.py future-import-boilerplate
test/support/integration/plugins/module_utils/cloud.py metaclass-boilerplate
test/support/integration/plugins/module_utils/compat/ipaddress.py future-import-boilerplate
test/support/integration/plugins/module_utils/compat/ipaddress.py metaclass-boilerplate
test/support/integration/plugins/module_utils/compat/ipaddress.py no-unicode-literals
test/support/integration/plugins/module_utils/database.py future-import-boilerplate
test/support/integration/plugins/module_utils/database.py metaclass-boilerplate
test/support/integration/plugins/module_utils/mysql.py future-import-boilerplate
test/support/integration/plugins/module_utils/mysql.py metaclass-boilerplate
test/support/integration/plugins/module_utils/network/common/utils.py future-import-boilerplate
test/support/integration/plugins/module_utils/network/common/utils.py metaclass-boilerplate
test/support/integration/plugins/module_utils/postgres.py future-import-boilerplate
test/support/integration/plugins/module_utils/postgres.py metaclass-boilerplate
test/support/integration/plugins/modules/lvg.py pylint:blacklisted-name
test/support/integration/plugins/modules/timezone.py pylint:blacklisted-name
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/doc_fragments/netconf.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/doc_fragments/netconf.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/doc_fragments/network_agnostic.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/doc_fragments/network_agnostic.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/compat/ipaddress.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/compat/ipaddress.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/compat/ipaddress.py no-unicode-literals
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/compat/ipaddress.py pep8:E203
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/cfg/base.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/cfg/base.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/config.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/config.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/facts/facts.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/facts/facts.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/netconf.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/netconf.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/network.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/network.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/parsing.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/parsing.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/utils.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/utils.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/netconf/netconf.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/netconf/netconf.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/restconf/restconf.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/restconf/restconf.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/doc_fragments/ios.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/doc_fragments/ios.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/module_utils/network/ios/ios.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/module_utils/network/ios/ios.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/modules/ios_command.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/modules/ios_command.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/modules/ios_config.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/modules/ios_config.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/modules/ios_config.py pep8:E501
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/doc_fragments/vyos.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/doc_fragments/vyos.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/module_utils/network/vyos/vyos.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/module_utils/network/vyos/vyos.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_command.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_command.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_command.py pep8:E231
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_command.py pylint:blacklisted-name
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_config.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_config.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_facts.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_facts.py metaclass-boilerplate
test/support/windows-integration/plugins/modules/async_status.ps1 pslint!skip
test/support/windows-integration/plugins/modules/setup.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_copy.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_dsc.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_feature.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_find.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_lineinfile.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_regedit.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_security_policy.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_shell.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_wait_for.ps1 pslint!skip
test/units/executor/test_play_iterator.py pylint:blacklisted-name
test/units/module_utils/basic/test_deprecate_warn.py pylint:ansible-deprecated-no-version
test/units/module_utils/basic/test_deprecate_warn.py pylint:ansible-deprecated-version
test/units/module_utils/basic/test_run_command.py pylint:blacklisted-name
test/units/module_utils/urls/fixtures/multipart.txt line-endings # Fixture for HTTP tests that use CRLF
test/units/module_utils/urls/test_Request.py replace-urlopen
test/units/module_utils/urls/test_fetch_url.py replace-urlopen
test/units/modules/test_apt.py pylint:blacklisted-name
test/units/parsing/vault/test_vault.py pylint:blacklisted-name
test/units/playbook/role/test_role.py pylint:blacklisted-name
test/units/plugins/test_plugins.py pylint:blacklisted-name
test/units/template/test_templar.py pylint:blacklisted-name
test/units/utils/collection_loader/fixtures/collections/ansible_collections/testns/testcoll/plugins/action/my_action.py pylint:relative-beyond-top-level
test/units/utils/collection_loader/fixtures/collections/ansible_collections/testns/testcoll/plugins/module_utils/my_util.py future-import-boilerplate # test expects no boilerplate
test/units/utils/collection_loader/fixtures/collections/ansible_collections/testns/testcoll/plugins/module_utils/my_util.py metaclass-boilerplate # test expects no boilerplate
test/units/utils/collection_loader/fixtures/collections/ansible_collections/testns/testcoll/plugins/modules/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/ansible/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/testns/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/testns/testcoll/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/test_collection_loader.py pylint:undefined-variable # magic runtime local var splatting
test/utils/shippable/check_matrix.py replace-urlopen
test/utils/shippable/timing.py shebang
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,118 |
pylint sanity test fails on Python 3.8
|
##### SUMMARY
When running `ansible-test sanity --venv --python 3.8 --test pylint plugins/` inside the `theforeman.foreman` collection (https://github.com/theforeman/foreman-ansible-modules), the test fails with
```
ERROR: plugins/module_utils/foreman_helper.py:13:0: syntax-error: Cannot import 'contextlib' due to syntax error 'invalid syntax (<unknown>, line 380)'
ERROR: plugins/module_utils/foreman_helper.py:15:0: syntax-error: Cannot import 'collections' due to syntax error 'invalid syntax (<unknown>, line 96)'
ERROR: plugins/module_utils/foreman_helper.py:16:0: syntax-error: Cannot import 'functools' due to syntax error 'invalid syntax (<unknown>, line 276)'
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/egolov/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/egolov/Devel/theforeman/foreman-ansible-modules/venv3/lib64/python3.7/site-packages/ansible
executable location = /home/egolov/Devel/theforeman/foreman-ansible-modules/venv3/bin/ansible
python version = 3.7.6 (default, Jan 30 2020, 09:44:41) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
<empty>
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Fedora 31
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
ansible-test sanity --venv --python 3.8 --test pylint plugins/
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
No errors, or at least only valid sanity/pylint errors ;)
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```
ERROR: plugins/module_utils/foreman_helper.py:13:0: syntax-error: Cannot import 'contextlib' due to syntax error 'invalid syntax (<unknown>, line 380)'
ERROR: plugins/module_utils/foreman_helper.py:15:0: syntax-error: Cannot import 'collections' due to syntax error 'invalid syntax (<unknown>, line 96)'
ERROR: plugins/module_utils/foreman_helper.py:16:0: syntax-error: Cannot import 'functools' due to syntax error 'invalid syntax (<unknown>, line 276)'
```
##### ADDITIONAL INFO
I've traced the issue to `astroid-2.2.5` (a dependency of `pylint`); when using `astroid-2.3.3`, everything works.
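A plausible explanation (my assumption, not confirmed in the report) is that astroid 2.2.5 predates Python 3.8 grammar support, so it chokes on stdlib files that use the 3.8-only positional-only parameter syntax (`/`), e.g. `collections/__init__.py` around the reported line 96. A minimal check of the versions involved:
```python
# Minimal sketch: confirm which astroid/python combination pylint is using.
# astroid < 2.3 cannot parse Python 3.8 sources (assumption based on the
# version trace above), which matches the stdlib "syntax error" reports.
import sys
import astroid  # the AST library pylint builds on

print('python :', sys.version.split()[0])
print('astroid:', astroid.__version__)
```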
|
https://github.com/ansible/ansible/issues/67118
|
https://github.com/ansible/ansible/pull/72972
|
7eee2454f617569fd6889f2211f75bc02a35f9f8
|
37d09f24882c1f03be9900e610d53587cfa6bbd6
| 2020-02-05T11:13:31Z |
python
| 2020-12-15T18:27:32Z |
changelogs/fragments/ansible-test-pylint-python-3.8-3.9.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,118 |
pylint sanity test fails on Python 3.8
|
##### SUMMARY
When running `ansible-test sanity --venv --python 3.8 --test pylint plugins/` inside the `theforeman.foreman` collection (https://github.com/theforeman/foreman-ansible-modules), the test fails with
```
ERROR: plugins/module_utils/foreman_helper.py:13:0: syntax-error: Cannot import 'contextlib' due to syntax error 'invalid syntax (<unknown>, line 380)'
ERROR: plugins/module_utils/foreman_helper.py:15:0: syntax-error: Cannot import 'collections' due to syntax error 'invalid syntax (<unknown>, line 96)'
ERROR: plugins/module_utils/foreman_helper.py:16:0: syntax-error: Cannot import 'functools' due to syntax error 'invalid syntax (<unknown>, line 276)'
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/egolov/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/egolov/Devel/theforeman/foreman-ansible-modules/venv3/lib64/python3.7/site-packages/ansible
executable location = /home/egolov/Devel/theforeman/foreman-ansible-modules/venv3/bin/ansible
python version = 3.7.6 (default, Jan 30 2020, 09:44:41) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
<empty>
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Fedora 31
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
ansible-test sanity --venv --python 3.8 --test pylint plugins/
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
No errors, or at least only valid sanity/pylint errors ;)
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```
ERROR: plugins/module_utils/foreman_helper.py:13:0: syntax-error: Cannot import 'contextlib' due to syntax error 'invalid syntax (<unknown>, line 380)'
ERROR: plugins/module_utils/foreman_helper.py:15:0: syntax-error: Cannot import 'collections' due to syntax error 'invalid syntax (<unknown>, line 96)'
ERROR: plugins/module_utils/foreman_helper.py:16:0: syntax-error: Cannot import 'functools' due to syntax error 'invalid syntax (<unknown>, line 276)'
```
##### ADDITIONAL INFO
I've traced the issue to `astroid-2.2.5` (a dependency of `pylint`); when using `astroid-2.3.3`, everything works.
|
https://github.com/ansible/ansible/issues/67118
|
https://github.com/ansible/ansible/pull/72972
|
7eee2454f617569fd6889f2211f75bc02a35f9f8
|
37d09f24882c1f03be9900e610d53587cfa6bbd6
| 2020-02-05T11:13:31Z |
python
| 2020-12-15T18:27:32Z |
test/integration/targets/ansible-test/ansible_collections/ns/col/plugins/filter/check_pylint.py
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,118 |
pylint sanity test fails on Python 3.8
|
##### SUMMARY
When running `ansible-test sanity --venv --python 3.8 --test pylint plugins/` inside the `theforeman.foreman` collection (https://github.com/theforeman/foreman-ansible-modules), the test fails with
```
ERROR: plugins/module_utils/foreman_helper.py:13:0: syntax-error: Cannot import 'contextlib' due to syntax error 'invalid syntax (<unknown>, line 380)'
ERROR: plugins/module_utils/foreman_helper.py:15:0: syntax-error: Cannot import 'collections' due to syntax error 'invalid syntax (<unknown>, line 96)'
ERROR: plugins/module_utils/foreman_helper.py:16:0: syntax-error: Cannot import 'functools' due to syntax error 'invalid syntax (<unknown>, line 276)'
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/egolov/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/egolov/Devel/theforeman/foreman-ansible-modules/venv3/lib64/python3.7/site-packages/ansible
executable location = /home/egolov/Devel/theforeman/foreman-ansible-modules/venv3/bin/ansible
python version = 3.7.6 (default, Jan 30 2020, 09:44:41) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
<empty>
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Fedora 31
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
ansible-test sanity --venv --python 3.8 --test pylint plugins/
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
No errors, or at least only valid sanity/pylint errors ;)
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```
ERROR: plugins/module_utils/foreman_helper.py:13:0: syntax-error: Cannot import 'contextlib' due to syntax error 'invalid syntax (<unknown>, line 380)'
ERROR: plugins/module_utils/foreman_helper.py:15:0: syntax-error: Cannot import 'collections' due to syntax error 'invalid syntax (<unknown>, line 96)'
ERROR: plugins/module_utils/foreman_helper.py:16:0: syntax-error: Cannot import 'functools' due to syntax error 'invalid syntax (<unknown>, line 276)'
```
##### ADDITIONAL INFO
I've traced the issue to `astroid-2.2.5` (a dependency of `pylint`); when using `astroid-2.3.3`, everything works.
|
https://github.com/ansible/ansible/issues/67118
|
https://github.com/ansible/ansible/pull/72972
|
7eee2454f617569fd6889f2211f75bc02a35f9f8
|
37d09f24882c1f03be9900e610d53587cfa6bbd6
| 2020-02-05T11:13:31Z |
python
| 2020-12-15T18:27:32Z |
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/sanity/ignore.txt
|
plugins/modules/bad.py import
plugins/modules/bad.py pylint:ansible-bad-module-import
tests/integration/targets/hello/files/bad.py pylint:ansible-bad-function
tests/integration/targets/hello/files/bad.py pylint:ansible-bad-import
tests/integration/targets/hello/files/bad.py pylint:ansible-bad-import-from
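For context, entries in these `ignore.txt` files follow the pattern `<path> <test>[:<code>][!skip]` with an optional trailing `# comment`: the `:<code>` form silences a single error code for that file, while `!skip` skips the named test entirely. A hypothetical entry of each kind:
```
plugins/modules/example.py pylint:blacklisted-name # silence one pylint code for this file
plugins/modules/example.py pep8!skip # skip the pep8 test for this file entirely
```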
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,118 |
pylint sanity test fails on Python 3.8
|
##### SUMMARY
When running `ansible-test sanity --venv --python 3.8 --test pylint plugins/` inside the `theforeman.foreman` collection (https://github.com/theforeman/foreman-ansible-modules), the test fails with
```
ERROR: plugins/module_utils/foreman_helper.py:13:0: syntax-error: Cannot import 'contextlib' due to syntax error 'invalid syntax (<unknown>, line 380)'
ERROR: plugins/module_utils/foreman_helper.py:15:0: syntax-error: Cannot import 'collections' due to syntax error 'invalid syntax (<unknown>, line 96)'
ERROR: plugins/module_utils/foreman_helper.py:16:0: syntax-error: Cannot import 'functools' due to syntax error 'invalid syntax (<unknown>, line 276)'
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/egolov/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/egolov/Devel/theforeman/foreman-ansible-modules/venv3/lib64/python3.7/site-packages/ansible
executable location = /home/egolov/Devel/theforeman/foreman-ansible-modules/venv3/bin/ansible
python version = 3.7.6 (default, Jan 30 2020, 09:44:41) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
<empty>
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Fedora 31
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
ansible-test sanity --venv --python 3.8 --test pylint plugins/
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
No errors, or at least only valid sanity/pylint errors ;)
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```
ERROR: plugins/module_utils/foreman_helper.py:13:0: syntax-error: Cannot import 'contextlib' due to syntax error 'invalid syntax (<unknown>, line 380)'
ERROR: plugins/module_utils/foreman_helper.py:15:0: syntax-error: Cannot import 'collections' due to syntax error 'invalid syntax (<unknown>, line 96)'
ERROR: plugins/module_utils/foreman_helper.py:16:0: syntax-error: Cannot import 'functools' due to syntax error 'invalid syntax (<unknown>, line 276)'
```
##### ADDITIONAL INFO
I've traced the issue to `astroid-2.2.5` (a dependency of `pylint`); when using `astroid-2.3.3`, everything works.
|
https://github.com/ansible/ansible/issues/67118
|
https://github.com/ansible/ansible/pull/72972
|
7eee2454f617569fd6889f2211f75bc02a35f9f8
|
37d09f24882c1f03be9900e610d53587cfa6bbd6
| 2020-02-05T11:13:31Z |
python
| 2020-12-15T18:27:32Z |
test/lib/ansible_test/_data/requirements/constraints.txt
|
coverage >= 4.5.1, < 5.0.0 ; python_version < '3.7' # coverage 4.4 required for "disable_warnings" support but 4.5.1 needed for bug fixes, coverage 5.0+ incompatible
coverage >= 4.5.2, < 5.0.0 ; python_version == '3.7' # coverage 4.5.2 fixes bugs in support for python 3.7, coverage 5.0+ incompatible
coverage >= 4.5.4, < 5.0.0 ; python_version > '3.7' # coverage had a bug in < 4.5.4 that would cause unit tests to hang in Python 3.8, coverage 5.0+ incompatible
cryptography < 2.2 ; python_version < '2.7' # cryptography 2.2 drops support for python 2.6
# do not add a cryptography constraint here unless it is for python version incompatibility, see the get_cryptography_requirement function in executor.py for details
deepdiff < 4.0.0 ; python_version < '3' # deepdiff 4.0.0 and later require python 3
jinja2 < 2.11 ; python_version < '2.7' # jinja2 2.11 and later require python 2.7 or later
urllib3 < 1.24 ; python_version < '2.7' # urllib3 1.24 and later require python 2.7 or later
pywinrm >= 0.3.0 # message encryption support
sphinx < 1.6 ; python_version < '2.7' # sphinx 1.6 and later require python 2.7 or later
sphinx < 1.8 ; python_version >= '2.7' # sphinx 1.8 and later are currently incompatible with rstcheck 3.3
pygments >= 2.4.0 # Pygments 2.4.0 includes bugfixes for YAML and YAML+Jinja lexers
wheel < 0.30.0 ; python_version < '2.7' # wheel 0.30.0 and later require python 2.7 or later
yamllint != 1.8.0, < 1.14.0 ; python_version < '2.7' # yamllint 1.8.0 and 1.14.0+ require python 2.7+
pycrypto >= 2.6 # Need features found in 2.6 and greater
ncclient >= 0.5.2 # Need features added in 0.5.2 and greater
idna < 2.6, >= 2.5 # linode requires idna < 2.9, >= 2.5, requests requires idna < 2.6, but cryptography will cause the latest version to be installed instead
paramiko < 2.4.0 ; python_version < '2.7' # paramiko 2.4.0 drops support for python 2.6
pytest < 3.3.0 ; python_version < '2.7' # pytest 3.3.0 drops support for python 2.6
pytest < 5.0.0 ; python_version == '2.7' # pytest 5.0.0 and later will no longer support python 2.7
pytest-forked < 1.0.2 ; python_version < '2.7' # pytest-forked 1.0.2 and later require python 2.7 or later
pytest-forked >= 1.0.2 ; python_version >= '2.7' # pytest-forked before 1.0.2 does not work with pytest 4.2.0+ (which requires python 2.7+)
ntlm-auth >= 1.3.0 # message encryption support using cryptography
requests < 2.20.0 ; python_version < '2.7' # requests 2.20.0 drops support for python 2.6
requests-ntlm >= 1.1.0 # message encryption support
requests-credssp >= 0.1.0 # message encryption support
voluptuous >= 0.11.0 # Schema recursion via Self
openshift >= 0.6.2, < 0.9.0 # merge_type support
virtualenv < 16.0.0 ; python_version < '2.7' # virtualenv 16.0.0 and later require python 2.7 or later
pathspec < 0.6.0 ; python_version < '2.7' # pathspec 0.6.0 and later require python 2.7 or later
pyopenssl < 18.0.0 ; python_version < '2.7' # pyOpenSSL 18.0.0 and later require python 2.7 or later
pyparsing < 3.0.0 ; python_version < '3.5' # pyparsing 3 and later require python 3.5 or later
pyfmg == 0.6.1 # newer versions do not pass current unit tests
pyyaml < 5.1 ; python_version < '2.7' # pyyaml 5.1 and later require python 2.7 or later
pycparser < 2.19 ; python_version < '2.7' # pycparser 2.19 and later require python 2.7 or later
mock >= 2.0.0 # needed for features backported from Python 3.6 unittest.mock (assert_called, assert_called_once...)
pytest-mock >= 1.4.0 # needed for mock_use_standalone_module pytest option
xmltodict < 0.12.0 ; python_version < '2.7' # xmltodict 0.12.0 and later require python 2.7 or later
lxml < 4.3.0 ; python_version < '2.7' # lxml 4.3.0 and later require python 2.7 or later
pyvmomi < 6.0.0 ; python_version < '2.7' # pyvmomi 6.0.0 and later require python 2.7 or later
pyone == 1.1.9 # newer versions do not pass current integration tests
boto3 < 1.11 ; python_version < '2.7' # boto3 1.11 drops Python 2.6 support
botocore >= 1.10.0, < 1.14 ; python_version < '2.7' # adds support for the following AWS services: secretsmanager, fms, and acm-pca; botocore 1.14 drops Python 2.6 support
botocore >= 1.10.0 ; python_version >= '2.7' # adds support for the following AWS services: secretsmanager, fms, and acm-pca
setuptools < 37 ; python_version == '2.6' # setuptools 37 and later require python 2.7 or later
setuptools < 45 ; python_version == '2.7' # setuptools 45 and later require python 3.5 or later
gssapi < 1.6.0 ; python_version <= '2.7' # gssapi 1.6.0 and later require python 3 or later
# freeze antsibull-changelog for consistent test results
antsibull-changelog == 0.7.0
# Make sure we have a new enough antsibull for the CLI args we use
antsibull >= 0.21.0
# freeze pylint and its requirements for consistent test results
astroid == 2.2.5
isort == 4.3.15
lazy-object-proxy == 1.3.1
mccabe == 0.6.1
pylint == 2.3.1
typed-ast == 1.4.0 # 1.4.0 is required to compile on Python 3.8
wrapt == 1.11.1
# freeze pycodestyle for consistent test results
pycodestyle == 2.6.0
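One hedged sketch of a possible fix direction (illustrative only, not the actual change from the linked PR): since the frozen `astroid == 2.2.5` pin can only parse sources up to Python 3.7, the freeze could be made conditional using the same PEP 508 environment markers already used throughout this file:
```
astroid == 2.2.5 ; python_version < '3.8' # frozen for consistent test results
astroid >= 2.3.3 ; python_version >= '3.8' # astroid < 2.3 cannot parse python 3.8 stdlib syntax
```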
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,118 |
pylint sanity test fails on Python 3.8
|
##### SUMMARY
When running `ansible-test sanity --venv --python 3.8 --test pylint plugins/` inside the `theforeman.foreman` collection (https://github.com/theforeman/foreman-ansible-modules), the test fails with
```
ERROR: plugins/module_utils/foreman_helper.py:13:0: syntax-error: Cannot import 'contextlib' due to syntax error 'invalid syntax (<unknown>, line 380)'
ERROR: plugins/module_utils/foreman_helper.py:15:0: syntax-error: Cannot import 'collections' due to syntax error 'invalid syntax (<unknown>, line 96)'
ERROR: plugins/module_utils/foreman_helper.py:16:0: syntax-error: Cannot import 'functools' due to syntax error 'invalid syntax (<unknown>, line 276)'
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/egolov/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/egolov/Devel/theforeman/foreman-ansible-modules/venv3/lib64/python3.7/site-packages/ansible
executable location = /home/egolov/Devel/theforeman/foreman-ansible-modules/venv3/bin/ansible
python version = 3.7.6 (default, Jan 30 2020, 09:44:41) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
<empty>
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Fedora 31
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
ansible-test sanity --venv --python 3.8 --test pylint plugins/
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
No errors, or at least only valid sanity/pylint errors ;)
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```
ERROR: plugins/module_utils/foreman_helper.py:13:0: syntax-error: Cannot import 'contextlib' due to syntax error 'invalid syntax (<unknown>, line 380)'
ERROR: plugins/module_utils/foreman_helper.py:15:0: syntax-error: Cannot import 'collections' due to syntax error 'invalid syntax (<unknown>, line 96)'
ERROR: plugins/module_utils/foreman_helper.py:16:0: syntax-error: Cannot import 'functools' due to syntax error 'invalid syntax (<unknown>, line 276)'
```
##### ADDITIONAL INFO
I've traced the issue to `astroid-2.2.5` (a dependency of `pylint`); when using `astroid-2.3.3`, everything works.
|
https://github.com/ansible/ansible/issues/67118
|
https://github.com/ansible/ansible/pull/72972
|
7eee2454f617569fd6889f2211f75bc02a35f9f8
|
37d09f24882c1f03be9900e610d53587cfa6bbd6
| 2020-02-05T11:13:31Z |
python
| 2020-12-15T18:27:32Z |
test/lib/ansible_test/_data/requirements/sanity.pylint.txt
|
pylint ; python_version < '3.9' # installation fails on python 3.9.0b1
pyyaml # needed for collection_detail.py
mccabe # pylint complexity testing
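Assuming standard pip behavior (the command below is illustrative, not lifted from ansible-test internals), a per-test requirements file like this one is installed with the shared constraints file applied, so the unpinned `pylint` entry above still resolves to whatever `constraints.txt` freezes:
```
pip install -r test/lib/ansible_test/_data/requirements/sanity.pylint.txt \
            -c test/lib/ansible_test/_data/requirements/constraints.txt
```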
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,118 |
pylint sanity test fails on Python 3.8
|
##### SUMMARY
When running `ansible-test sanity --venv --python 3.8 --test pylint plugins/` inside the `theforeman.foreman` collection (https://github.com/theforeman/foreman-ansible-modules), the test fails with
```
ERROR: plugins/module_utils/foreman_helper.py:13:0: syntax-error: Cannot import 'contextlib' due to syntax error 'invalid syntax (<unknown>, line 380)'
ERROR: plugins/module_utils/foreman_helper.py:15:0: syntax-error: Cannot import 'collections' due to syntax error 'invalid syntax (<unknown>, line 96)'
ERROR: plugins/module_utils/foreman_helper.py:16:0: syntax-error: Cannot import 'functools' due to syntax error 'invalid syntax (<unknown>, line 276)'
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/egolov/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/egolov/Devel/theforeman/foreman-ansible-modules/venv3/lib64/python3.7/site-packages/ansible
executable location = /home/egolov/Devel/theforeman/foreman-ansible-modules/venv3/bin/ansible
python version = 3.7.6 (default, Jan 30 2020, 09:44:41) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
<empty>
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Fedora 31
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
ansible-test sanity --venv --python 3.8 --test pylint plugins/
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
No errors, or at least only valid sanity/pylint errors ;)
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```
ERROR: plugins/module_utils/foreman_helper.py:13:0: syntax-error: Cannot import 'contextlib' due to syntax error 'invalid syntax (<unknown>, line 380)'
ERROR: plugins/module_utils/foreman_helper.py:15:0: syntax-error: Cannot import 'collections' due to syntax error 'invalid syntax (<unknown>, line 96)'
ERROR: plugins/module_utils/foreman_helper.py:16:0: syntax-error: Cannot import 'functools' due to syntax error 'invalid syntax (<unknown>, line 276)'
```
##### ADDITIONAL INFO
I've traced the issue to `astroid-2.2.5` (a dependency of `pylint`); when using `astroid-2.3.3`, everything works.
|
https://github.com/ansible/ansible/issues/67118
|
https://github.com/ansible/ansible/pull/72972
|
7eee2454f617569fd6889f2211f75bc02a35f9f8
|
37d09f24882c1f03be9900e610d53587cfa6bbd6
| 2020-02-05T11:13:31Z |
python
| 2020-12-15T18:27:32Z |
test/lib/ansible_test/_internal/sanity/pylint.py
|
"""Sanity test using pylint."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import itertools
import json
import os
import datetime
from .. import types as t
from ..sanity import (
SanitySingleVersion,
SanityMessage,
SanityFailure,
SanitySuccess,
SANITY_ROOT,
)
from ..target import (
TestTarget,
)
from ..util import (
SubprocessError,
display,
ConfigParser,
is_subdir,
find_python,
)
from ..util_common import (
run_command,
)
from ..ansible_util import (
ansible_environment,
get_collection_detail,
CollectionDetail,
CollectionDetailError,
)
from ..config import (
SanityConfig,
)
from ..data import (
data_context,
)
class PylintTest(SanitySingleVersion):
"""Sanity test using pylint."""
def __init__(self):
super(PylintTest, self).__init__()
self.optional_error_codes.update([
'ansible-deprecated-date',
'too-complex',
])
@property
def error_code(self): # type: () -> t.Optional[str]
"""Error code for ansible-test matching the format used by the underlying test program, or None if the program does not use error codes."""
return 'ansible-test'
def filter_targets(self, targets): # type: (t.List[TestTarget]) -> t.List[TestTarget]
"""Return the given list of test targets, filtered to include only those relevant for the test."""
return [target for target in targets if os.path.splitext(target.path)[1] == '.py' or is_subdir(target.path, 'bin')]
def test(self, args, targets, python_version):
"""
:type args: SanityConfig
:type targets: SanityTargets
:type python_version: str
:rtype: TestResult
"""
plugin_dir = os.path.join(SANITY_ROOT, 'pylint', 'plugins')
plugin_names = sorted(p[0] for p in [
os.path.splitext(p) for p in os.listdir(plugin_dir)] if p[1] == '.py' and p[0] != '__init__')
settings = self.load_processor(args)
paths = [target.path for target in targets.include]
module_paths = [os.path.relpath(p, data_context().content.module_path).split(os.path.sep) for p in
paths if is_subdir(p, data_context().content.module_path)]
module_dirs = sorted(set([p[0] for p in module_paths if len(p) > 1]))
large_module_group_threshold = 500
large_module_groups = [key for key, value in
itertools.groupby(module_paths, lambda p: p[0] if len(p) > 1 else '') if len(list(value)) > large_module_group_threshold]
large_module_group_paths = [os.path.relpath(p, data_context().content.module_path).split(os.path.sep) for p in paths
if any(is_subdir(p, os.path.join(data_context().content.module_path, g)) for g in large_module_groups)]
large_module_group_dirs = sorted(set([os.path.sep.join(p[:2]) for p in large_module_group_paths if len(p) > 2]))
contexts = []
remaining_paths = set(paths)
def add_context(available_paths, context_name, context_filter):
"""
:type available_paths: set[str]
:type context_name: str
:type context_filter: (str) -> bool
"""
filtered_paths = set(p for p in available_paths if context_filter(p))
contexts.append((context_name, sorted(filtered_paths)))
available_paths -= filtered_paths
def filter_path(path_filter=None):
"""
:type path_filter: str
:rtype: (str) -> bool
"""
def context_filter(path_to_filter):
"""
:type path_to_filter: str
:rtype: bool
"""
return is_subdir(path_to_filter, path_filter)
return context_filter
for large_module_dir in large_module_group_dirs:
add_context(remaining_paths, 'modules/%s' % large_module_dir, filter_path(os.path.join(data_context().content.module_path, large_module_dir)))
for module_dir in module_dirs:
add_context(remaining_paths, 'modules/%s' % module_dir, filter_path(os.path.join(data_context().content.module_path, module_dir)))
add_context(remaining_paths, 'modules', filter_path(data_context().content.module_path))
add_context(remaining_paths, 'module_utils', filter_path(data_context().content.module_utils_path))
add_context(remaining_paths, 'units', filter_path(data_context().content.unit_path))
if data_context().content.collection:
add_context(remaining_paths, 'collection', lambda p: True)
else:
add_context(remaining_paths, 'validate-modules', filter_path('test/lib/ansible_test/_data/sanity/validate-modules/'))
add_context(remaining_paths, 'validate-modules-unit', filter_path('test/lib/ansible_test/tests/validate-modules-unit/'))
add_context(remaining_paths, 'sanity', filter_path('test/lib/ansible_test/_data/sanity/'))
add_context(remaining_paths, 'ansible-test', filter_path('test/lib/'))
add_context(remaining_paths, 'test', filter_path('test/'))
add_context(remaining_paths, 'hacking', filter_path('hacking/'))
add_context(remaining_paths, 'ansible', lambda p: True)
messages = []
context_times = []
python = find_python(python_version)
collection_detail = None
if data_context().content.collection:
try:
collection_detail = get_collection_detail(args, python)
if not collection_detail.version:
display.warning('Skipping pylint collection version checks since no collection version was found.')
except CollectionDetailError as ex:
display.warning('Skipping pylint collection version checks since collection detail loading failed: %s' % ex.reason)
test_start = datetime.datetime.utcnow()
for context, context_paths in sorted(contexts):
if not context_paths:
continue
context_start = datetime.datetime.utcnow()
messages += self.pylint(args, context, context_paths, plugin_dir, plugin_names, python, collection_detail)
context_end = datetime.datetime.utcnow()
context_times.append('%s: %d (%s)' % (context, len(context_paths), context_end - context_start))
test_end = datetime.datetime.utcnow()
for context_time in context_times:
display.info(context_time, verbosity=4)
display.info('total: %d (%s)' % (len(paths), test_end - test_start), verbosity=4)
errors = [SanityMessage(
message=m['message'].replace('\n', ' '),
path=m['path'],
line=int(m['line']),
column=int(m['column']),
level=m['type'],
code=m['symbol'],
) for m in messages]
if args.explain:
return SanitySuccess(self.name)
errors = settings.process_errors(errors, paths)
if errors:
return SanityFailure(self.name, messages=errors)
return SanitySuccess(self.name)
@staticmethod
def pylint(
args, # type: SanityConfig
context, # type: str
paths, # type: t.List[str]
plugin_dir, # type: str
plugin_names, # type: t.List[str]
python, # type: str
collection_detail, # type: CollectionDetail
): # type: (...) -> t.List[t.Dict[str, str]]
"""Run pylint using the config specified by the context on the specified paths."""
rcfile = os.path.join(SANITY_ROOT, 'pylint', 'config', context.split('/')[0] + '.cfg')
if not os.path.exists(rcfile):
if data_context().content.collection:
rcfile = os.path.join(SANITY_ROOT, 'pylint', 'config', 'collection.cfg')
else:
rcfile = os.path.join(SANITY_ROOT, 'pylint', 'config', 'default.cfg')
parser = ConfigParser()
parser.read(rcfile)
if parser.has_section('ansible-test'):
config = dict(parser.items('ansible-test'))
else:
config = dict()
disable_plugins = set(i.strip() for i in config.get('disable-plugins', '').split(',') if i)
load_plugins = set(plugin_names + ['pylint.extensions.mccabe']) - disable_plugins
cmd = [
python,
'-m', 'pylint',
'--jobs', '0',
'--reports', 'n',
'--max-line-length', '160',
'--max-complexity', '20',
'--rcfile', rcfile,
'--output-format', 'json',
'--load-plugins', ','.join(load_plugins),
] + paths
if data_context().content.collection:
cmd.extend(['--collection-name', data_context().content.collection.full_name])
if collection_detail and collection_detail.version:
cmd.extend(['--collection-version', collection_detail.version])
append_python_path = [plugin_dir]
if data_context().content.collection:
append_python_path.append(data_context().content.collection.root)
env = ansible_environment(args)
env['PYTHONPATH'] += os.path.pathsep + os.path.pathsep.join(append_python_path)
# expose plugin paths for use in custom plugins
env.update(dict(('ANSIBLE_TEST_%s_PATH' % k.upper(), os.path.abspath(v) + os.path.sep) for k, v in data_context().content.plugin_paths.items()))
if paths:
display.info('Checking %d file(s) in context "%s" with config: %s' % (len(paths), context, rcfile), verbosity=1)
try:
stdout, stderr = run_command(args, cmd, env=env, capture=True)
status = 0
except SubprocessError as ex:
stdout = ex.stdout
stderr = ex.stderr
status = ex.status
if stderr or status >= 32:
raise SubprocessError(cmd=cmd, status=status, stderr=stderr, stdout=stdout)
else:
stdout = None
if not args.explain and stdout:
messages = json.loads(stdout)
else:
messages = []
return messages
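# --- Illustrative sketch (not part of ansible-test) -------------------------
# The add_context()/filter_path() pattern above implements first-match
# grouping: each context claims matching paths from the remaining set, so
# more specific contexts must be registered first and a catch-all last.
def _demo_group_paths():
    remaining = {
        'lib/ansible/modules/copy.py',
        'lib/ansible/module_utils/basic.py',
        'test/units/test_example.py',  # hypothetical paths
    }
    contexts = []
    def add_context(name, match):
        claimed = set(p for p in remaining if match(p))
        contexts.append((name, sorted(claimed)))
        remaining.difference_update(claimed)  # later contexts never see these
    add_context('modules', lambda p: p.startswith('lib/ansible/modules/'))
    add_context('module_utils', lambda p: p.startswith('lib/ansible/module_utils/'))
    add_context('ansible', lambda p: True)  # catch-all
    return contexts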
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,118 |
pylint sanity test fails on Python 3.8
|
##### SUMMARY
When running `ansible-test sanity --venv --python 3.8 --test pylint plugins/` inside the `theforeman.foreman` collection (https://github.com/theforeman/foreman-ansible-modules), the test fails with
```
ERROR: plugins/module_utils/foreman_helper.py:13:0: syntax-error: Cannot import 'contextlib' due to syntax error 'invalid syntax (<unknown>, line 380)'
ERROR: plugins/module_utils/foreman_helper.py:15:0: syntax-error: Cannot import 'collections' due to syntax error 'invalid syntax (<unknown>, line 96)'
ERROR: plugins/module_utils/foreman_helper.py:16:0: syntax-error: Cannot import 'functools' due to syntax error 'invalid syntax (<unknown>, line 276)'
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/egolov/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/egolov/Devel/theforeman/foreman-ansible-modules/venv3/lib64/python3.7/site-packages/ansible
executable location = /home/egolov/Devel/theforeman/foreman-ansible-modules/venv3/bin/ansible
python version = 3.7.6 (default, Jan 30 2020, 09:44:41) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
<empty>
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Fedora 31
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
ansible-test sanity --venv --python 3.8 --test pylint plugins/
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
No errors, or at least only valid sanity/pylint errors ;)
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```
ERROR: plugins/module_utils/foreman_helper.py:13:0: syntax-error: Cannot import 'contextlib' due to syntax error 'invalid syntax (<unknown>, line 380)'
ERROR: plugins/module_utils/foreman_helper.py:15:0: syntax-error: Cannot import 'collections' due to syntax error 'invalid syntax (<unknown>, line 96)'
ERROR: plugins/module_utils/foreman_helper.py:16:0: syntax-error: Cannot import 'functools' due to syntax error 'invalid syntax (<unknown>, line 276)'
```
##### ADDITIONAL INFO
I've traced the issue to `astroid-2.2.5` (a dependency of `pylint`); when using `astroid-2.3.3`, everything works.
|
https://github.com/ansible/ansible/issues/67118
|
https://github.com/ansible/ansible/pull/72972
|
7eee2454f617569fd6889f2211f75bc02a35f9f8
|
37d09f24882c1f03be9900e610d53587cfa6bbd6
| 2020-02-05T11:13:31Z |
python
| 2020-12-15T18:27:32Z |
test/sanity/ignore.txt
|
docs/docsite/rst/dev_guide/testing/sanity/no-smart-quotes.rst no-smart-quotes
examples/play.yml shebang
examples/scripts/my_test.py shebang # example module but not in a normal module location
examples/scripts/my_test_facts.py shebang # example module but not in a normal module location
examples/scripts/my_test_info.py shebang # example module but not in a normal module location
examples/scripts/ConfigureRemotingForAnsible.ps1 pslint:PSCustomUseLiteralPath
examples/scripts/upgrade_to_ps3.ps1 pslint:PSCustomUseLiteralPath
examples/scripts/upgrade_to_ps3.ps1 pslint:PSUseApprovedVerbs
hacking/build-ansible.py shebang # only run by release engineers, Python 3.6+ required
hacking/build_library/build_ansible/announce.py compile-2.6!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/announce.py compile-2.7!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/announce.py compile-3.5!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/dump_config.py compile-2.6!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/dump_config.py compile-2.7!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/dump_config.py compile-3.5!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/dump_keywords.py compile-2.6!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/dump_keywords.py compile-2.7!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/dump_keywords.py compile-3.5!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/generate_man.py compile-2.6!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/generate_man.py compile-2.7!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/generate_man.py compile-3.5!skip # docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/porting_guide.py compile-2.6!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/porting_guide.py compile-2.7!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/porting_guide.py compile-3.5!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/release_announcement.py compile-2.6!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/release_announcement.py compile-2.7!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/release_announcement.py compile-3.5!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/update_intersphinx.py compile-2.6!skip # release process and docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/update_intersphinx.py compile-2.7!skip # release process and docs build only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/update_intersphinx.py compile-3.5!skip # release process and docs build only, 3.6+ required
hacking/build_library/build_ansible/commands.py compile-2.6!skip # release and docs process only, 3.6+ required
hacking/build_library/build_ansible/commands.py compile-2.7!skip # release and docs process only, 3.6+ required
hacking/build_library/build_ansible/commands.py compile-3.5!skip # release and docs process only, 3.6+ required
lib/ansible/keyword_desc.yml no-unwanted-files
lib/ansible/cli/console.py pylint:blacklisted-name
lib/ansible/cli/scripts/ansible_cli_stub.py shebang
lib/ansible/cli/scripts/ansible_connection_cli_stub.py shebang
lib/ansible/config/base.yml no-unwanted-files
lib/ansible/executor/playbook_executor.py pylint:blacklisted-name
lib/ansible/executor/powershell/async_watchdog.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/powershell/async_wrapper.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/powershell/exec_wrapper.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/task_queue_manager.py pylint:blacklisted-name
lib/ansible/galaxy/collection/__init__.py compile-2.6!skip # 'ansible-galaxy collection' requires 2.7+
lib/ansible/module_utils/compat/_selectors2.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/compat/_selectors2.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/compat/_selectors2.py pylint:blacklisted-name
lib/ansible/module_utils/distro/__init__.py empty-init # breaks namespacing, bundled, do not override
lib/ansible/module_utils/distro/_distro.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/distro/_distro.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/distro/_distro.py no-assert
lib/ansible/module_utils/distro/_distro.py pep8!skip # bundled code we don't want to modify
lib/ansible/module_utils/facts/__init__.py empty-init # breaks namespacing, deprecate and eventually remove
lib/ansible/module_utils/facts/network/linux.py pylint:blacklisted-name
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.ArgvParser.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.CommandUtil.psm1 pslint:PSProvideCommentHelp # need to agree on best format for comment location
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.CommandUtil.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.FileUtil.psm1 pslint:PSCustomUseLiteralPath
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.FileUtil.psm1 pslint:PSProvideCommentHelp
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1 pslint:PSCustomUseLiteralPath
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.LinkUtil.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/pycompat24.py no-get-exception
lib/ansible/module_utils/six/__init__.py empty-init # breaks namespacing, bundled, do not override
lib/ansible/module_utils/six/__init__.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/six/__init__.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/six/__init__.py no-basestring
lib/ansible/module_utils/six/__init__.py no-dict-iteritems
lib/ansible/module_utils/six/__init__.py no-dict-iterkeys
lib/ansible/module_utils/six/__init__.py no-dict-itervalues
lib/ansible/module_utils/six/__init__.py replace-urlopen
lib/ansible/module_utils/urls.py pylint:blacklisted-name
lib/ansible/module_utils/urls.py replace-urlopen
lib/ansible/modules/apt.py validate-modules:parameter-invalid
lib/ansible/modules/apt_key.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/apt_repository.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/apt_repository.py validate-modules:parameter-invalid
lib/ansible/modules/apt_repository.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/apt_repository.py validate-modules:undocumented-parameter
lib/ansible/modules/assemble.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/async_status.py use-argspec-type-path
lib/ansible/modules/async_status.py validate-modules!skip
lib/ansible/modules/async_wrapper.py ansible-doc!skip # not an actual module
lib/ansible/modules/async_wrapper.py pylint:ansible-bad-function # ignore, required
lib/ansible/modules/async_wrapper.py use-argspec-type-path
lib/ansible/modules/blockinfile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/blockinfile.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/command.py validate-modules:doc-default-does-not-match-spec # _uses_shell is undocumented
lib/ansible/modules/command.py validate-modules:doc-missing-type
lib/ansible/modules/command.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/command.py validate-modules:undocumented-parameter
lib/ansible/modules/copy.py pylint:blacklisted-name
lib/ansible/modules/copy.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/copy.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/copy.py validate-modules:undocumented-parameter
lib/ansible/modules/dnf.py validate-modules:doc-required-mismatch
lib/ansible/modules/dnf.py validate-modules:parameter-invalid
lib/ansible/modules/file.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/file.py validate-modules:undocumented-parameter
lib/ansible/modules/find.py use-argspec-type-path # fix needed
lib/ansible/modules/git.py pylint:blacklisted-name
lib/ansible/modules/git.py use-argspec-type-path
lib/ansible/modules/git.py validate-modules:doc-missing-type
lib/ansible/modules/git.py validate-modules:doc-required-mismatch
lib/ansible/modules/hostname.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/iptables.py pylint:blacklisted-name
lib/ansible/modules/lineinfile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/lineinfile.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/lineinfile.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/package_facts.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/pip.py pylint:blacklisted-name
lib/ansible/modules/pip.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/replace.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/service.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/service.py validate-modules:use-run-command-not-popen
lib/ansible/modules/stat.py validate-modules:doc-default-does-not-match-spec # get_md5 is undocumented
lib/ansible/modules/stat.py validate-modules:parameter-invalid
lib/ansible/modules/stat.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/stat.py validate-modules:undocumented-parameter
lib/ansible/modules/systemd.py validate-modules:parameter-invalid
lib/ansible/modules/systemd.py validate-modules:return-syntax-error
lib/ansible/modules/sysvinit.py validate-modules:return-syntax-error
lib/ansible/modules/unarchive.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/uri.py pylint:blacklisted-name
lib/ansible/modules/uri.py validate-modules:doc-required-mismatch
lib/ansible/modules/user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/user.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/user.py validate-modules:use-run-command-not-popen
lib/ansible/modules/yum.py pylint:blacklisted-name
lib/ansible/modules/yum.py validate-modules:parameter-invalid
lib/ansible/modules/yum_repository.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/yum_repository.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/yum_repository.py validate-modules:undocumented-parameter
lib/ansible/parsing/vault/__init__.py pylint:blacklisted-name
lib/ansible/playbook/base.py pylint:blacklisted-name
lib/ansible/playbook/collectionsearch.py required-and-default-attributes # https://github.com/ansible/ansible/issues/61460
lib/ansible/playbook/helpers.py pylint:blacklisted-name
lib/ansible/playbook/role/__init__.py pylint:blacklisted-name
lib/ansible/plugins/action/normal.py action-plugin-docs # default action plugin for modules without a dedicated action plugin
lib/ansible/plugins/cache/base.py ansible-doc!skip # not a plugin, but a stub for backwards compatibility
lib/ansible/plugins/lookup/sequence.py pylint:blacklisted-name
lib/ansible/plugins/strategy/__init__.py pylint:blacklisted-name
lib/ansible/plugins/strategy/linear.py pylint:blacklisted-name
lib/ansible/vars/hostvars.py pylint:blacklisted-name
test/integration/targets/ansible-test-docker/ansible_collections/ns/col/plugins/modules/hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-docker/ansible_collections/ns/col/tests/unit/plugins/module_utils/test_my_util.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-docker/ansible_collections/ns/col/tests/unit/plugins/modules/test_hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test/ansible_collections/ns/col/plugins/modules/hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-function # ignore, required for testing
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-import # ignore, required for testing
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-import-from # ignore, required for testing
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/unit/plugins/module_utils/test_my_util.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/unit/plugins/modules/test_hello.py pylint:relative-beyond-top-level
test/integration/targets/collections_plugin_namespace/collection_root/ansible_collections/my_ns/my_col/plugins/lookup/lookup_no_future_boilerplate.py future-import-boilerplate # testing Python 2.x implicit relative imports
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/module_utils/my_util2.py pylint:relative-beyond-top-level
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/module_utils/my_util3.py pylint:relative-beyond-top-level
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/modules/my_module.py pylint:relative-beyond-top-level
test/integration/targets/gathering_facts/library/bogus_facts shebang
test/integration/targets/gathering_facts/library/facts_one shebang
test/integration/targets/gathering_facts/library/facts_two shebang
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.0/DSCResources/ANSIBLE_xSetReboot/ANSIBLE_xSetReboot.psm1 pslint!skip
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.0/DSCResources/ANSIBLE_xTestResource/ANSIBLE_xTestResource.psm1 pslint!skip
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.0/xTestDsc.psd1 pslint!skip
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.1/DSCResources/ANSIBLE_xTestResource/ANSIBLE_xTestResource.psm1 pslint!skip
test/integration/targets/incidental_win_dsc/files/xTestDsc/1.0.1/xTestDsc.psd1 pslint!skip
test/integration/targets/incidental_win_ping/library/win_ping_syntax_error.ps1 pslint!skip
test/integration/targets/incidental_win_reboot/templates/post_reboot.ps1 pslint!skip
test/integration/targets/lookup_csvfile/files/crlf.csv line-endings
test/integration/targets/lookup_ini/lookup-8859-15.ini no-smart-quotes
test/integration/targets/module_precedence/lib_with_extension/a.ini shebang
test/integration/targets/module_precedence/lib_with_extension/ping.ini shebang
test/integration/targets/module_precedence/roles_with_extension/foo/library/a.ini shebang
test/integration/targets/module_precedence/roles_with_extension/foo/library/ping.ini shebang
test/integration/targets/module_utils/library/test.py future-import-boilerplate # allow testing of Python 2.x implicit relative imports
test/integration/targets/module_utils/module_utils/bar0/foo.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/foo.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/sub/bar/__init__.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/sub/bar/bar.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/yak/zebra/foo.py pylint:blacklisted-name
test/integration/targets/old_style_modules_posix/library/helloworld.sh shebang
test/integration/targets/template/files/encoding_1252_utf-8.expected no-smart-quotes
test/integration/targets/template/files/encoding_1252_windows-1252.expected no-smart-quotes
test/integration/targets/template/files/foo.dos.txt line-endings
test/integration/targets/template/templates/encoding_1252.j2 no-smart-quotes
test/integration/targets/unicode/unicode.yml no-smart-quotes
test/integration/targets/win_exec_wrapper/library/test_fail.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_exec_wrapper/tasks/main.yml no-smart-quotes # We are explicitly testing smart quote support for env vars
test/integration/targets/win_fetch/tasks/main.yml no-smart-quotes # We are explicitly testing smart quotes in the file name to fetch
test/integration/targets/win_module_utils/library/legacy_only_new_way_win_line_ending.ps1 line-endings # Explicitly tests that we still work with Windows line endings
test/integration/targets/win_module_utils/library/legacy_only_old_way_win_line_ending.ps1 line-endings # Explicitly tests that we still work with Windows line endings
test/integration/targets/win_script/files/test_script.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/win_script/files/test_script_creates_file.ps1 pslint:PSAvoidUsingCmdletAliases
test/integration/targets/win_script/files/test_script_removes_file.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_script/files/test_script_with_args.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/win_script/files/test_script_with_splatting.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/windows-minimal/library/win_ping_syntax_error.ps1 pslint!skip
test/lib/ansible_test/_data/requirements/constraints.txt test-constraints
test/lib/ansible_test/_data/requirements/integration.cloud.azure.txt test-constraints
test/lib/ansible_test/_data/requirements/sanity.ps1 pslint:PSCustomUseLiteralPath # Uses wildcards on purpose
test/lib/ansible_test/_data/sanity/pylint/plugins/string_format.py use-compat-six
test/lib/ansible_test/_data/setup/ConfigureRemotingForAnsible.ps1 pslint:PSCustomUseLiteralPath
test/lib/ansible_test/_data/setup/windows-httptester.ps1 pslint:PSCustomUseLiteralPath
test/support/integration/plugins/module_utils/cloud.py future-import-boilerplate
test/support/integration/plugins/module_utils/cloud.py metaclass-boilerplate
test/support/integration/plugins/module_utils/compat/ipaddress.py future-import-boilerplate
test/support/integration/plugins/module_utils/compat/ipaddress.py metaclass-boilerplate
test/support/integration/plugins/module_utils/compat/ipaddress.py no-unicode-literals
test/support/integration/plugins/module_utils/database.py future-import-boilerplate
test/support/integration/plugins/module_utils/database.py metaclass-boilerplate
test/support/integration/plugins/module_utils/mysql.py future-import-boilerplate
test/support/integration/plugins/module_utils/mysql.py metaclass-boilerplate
test/support/integration/plugins/module_utils/network/common/utils.py future-import-boilerplate
test/support/integration/plugins/module_utils/network/common/utils.py metaclass-boilerplate
test/support/integration/plugins/module_utils/postgres.py future-import-boilerplate
test/support/integration/plugins/module_utils/postgres.py metaclass-boilerplate
test/support/integration/plugins/modules/lvg.py pylint:blacklisted-name
test/support/integration/plugins/modules/timezone.py pylint:blacklisted-name
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/doc_fragments/netconf.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/doc_fragments/netconf.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/doc_fragments/network_agnostic.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/doc_fragments/network_agnostic.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/compat/ipaddress.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/compat/ipaddress.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/compat/ipaddress.py no-unicode-literals
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/compat/ipaddress.py pep8:E203
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/cfg/base.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/cfg/base.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/config.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/config.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/facts/facts.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/facts/facts.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/netconf.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/netconf.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/network.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/network.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/parsing.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/parsing.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/utils.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/utils.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/netconf/netconf.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/netconf/netconf.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/restconf/restconf.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/restconf/restconf.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/doc_fragments/ios.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/doc_fragments/ios.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/module_utils/network/ios/ios.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/module_utils/network/ios/ios.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/modules/ios_command.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/modules/ios_command.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/modules/ios_config.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/modules/ios_config.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/modules/ios_config.py pep8:E501
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/doc_fragments/vyos.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/doc_fragments/vyos.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/module_utils/network/vyos/vyos.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/module_utils/network/vyos/vyos.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_command.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_command.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_command.py pep8:E231
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_command.py pylint:blacklisted-name
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_config.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_config.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_facts.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_facts.py metaclass-boilerplate
test/support/windows-integration/plugins/modules/async_status.ps1 pslint!skip
test/support/windows-integration/plugins/modules/setup.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_copy.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_dsc.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_feature.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_find.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_lineinfile.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_regedit.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_security_policy.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_shell.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_wait_for.ps1 pslint!skip
test/units/executor/test_play_iterator.py pylint:blacklisted-name
test/units/module_utils/basic/test_deprecate_warn.py pylint:ansible-deprecated-no-version
test/units/module_utils/basic/test_deprecate_warn.py pylint:ansible-deprecated-version
test/units/module_utils/basic/test_run_command.py pylint:blacklisted-name
test/units/module_utils/urls/fixtures/multipart.txt line-endings # Fixture for HTTP tests that use CRLF
test/units/module_utils/urls/test_Request.py replace-urlopen
test/units/module_utils/urls/test_fetch_url.py replace-urlopen
test/units/modules/test_apt.py pylint:blacklisted-name
test/units/parsing/vault/test_vault.py pylint:blacklisted-name
test/units/playbook/role/test_role.py pylint:blacklisted-name
test/units/plugins/test_plugins.py pylint:blacklisted-name
test/units/template/test_templar.py pylint:blacklisted-name
test/units/utils/collection_loader/fixtures/collections/ansible_collections/testns/testcoll/plugins/action/my_action.py pylint:relative-beyond-top-level
test/units/utils/collection_loader/fixtures/collections/ansible_collections/testns/testcoll/plugins/module_utils/my_util.py future-import-boilerplate # test expects no boilerplate
test/units/utils/collection_loader/fixtures/collections/ansible_collections/testns/testcoll/plugins/module_utils/my_util.py metaclass-boilerplate # test expects no boilerplate
test/units/utils/collection_loader/fixtures/collections/ansible_collections/testns/testcoll/plugins/modules/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/ansible/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/testns/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/testns/testcoll/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/test_collection_loader.py pylint:undefined-variable # magic runtime local var splatting
test/utils/shippable/check_matrix.py replace-urlopen
test/utils/shippable/timing.py shebang
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,975 |
Hashes are merged instead of replace
|
##### SUMMARY
Inventory merges hashes instead of replacing them. If the same hash variable is defined in two different inventory files, the variables are merged.
hash_behaviour is not configured, so the default "replace" should apply.
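For context, the difference between the two modes can be modeled in a few lines of plain Python. The helpers below are an illustrative sketch, not Ansible's actual internals:
```python
def combine_replace(a, b):
    """'replace' semantics: a later top-level key overwrites the earlier one wholesale."""
    result = dict(a)
    result.update(b)
    return result


def combine_merge(a, b):
    """'merge' semantics: nested dictionaries are combined key by key."""
    result = dict(a)
    for key, value in b.items():
        if key in result and isinstance(result[key], dict) and isinstance(value, dict):
            result[key] = combine_merge(result[key], value)
        else:
            result[key] = value
    return result


file1 = {"test_hash": {"key1": "value1", "key2": "value2"}}
file2 = {"test_hash": {"key1": "other_value1"}}

print(combine_replace(file1, file2))  # {'test_hash': {'key1': 'other_value1'}}
print(combine_merge(file1, file2))    # {'test_hash': {'key1': 'other_value1', 'key2': 'value2'}}
```
The reproduction below shows `ansible-inventory` producing the `combine_merge` result even though the default configuration calls for the `combine_replace` result.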
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
Inventory plugin
##### ANSIBLE VERSION
```
ansible 2.10.4
config file = None
configured module search path = ['/<snipped>/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = <snipped>/lib64/python3.9/site-packages/ansible
executable location = <snipped>/bin/ansible
python version = 3.9.0 (default, Oct 6 2020, 00:00:00) [GCC 10.2.1 20200826 (Red Hat 10.2.1-3)]
```
Also affects:
```
ansible 2.10.1
config file = /etc/ansible/ansible.cfg
configured module search path = ['/<snipped>.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /<snipped>/lib64/python3.6/site-packages/ansible
executable location = <snipped>/bin/ansible
python version = 3.6.8 (default, Aug 7 2019, 17:28:10) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
```
```
##### OS / ENVIRONMENT
Tested on the latest Fedora 33 and CentOS 7
##### STEPS TO REPRODUCE
Create the following two inventory files:
Inventory file1:
```yaml
all:
hosts:
localhost:
vars:
test_hash:
key1: value1
key2: value2
```
Inventory file2:
```yaml
all:
hosts:
localhost:
vars:
test_hash:
key1: other_value1
```
Then execute the following command:
```bash
$ ansible-inventory -i ./test_inventory1.yml -i ./test_inventory2.yml --list
{
"_meta": {
"hostvars": {
"localhost": {
"test_hash": {
"key1": "other_value1",
"key2": "value2"
}
}
}
},
"all": {
"children": [
"ungrouped"
]
},
"ungrouped": {
"hosts": [
"localhost"
]
}
}
```
##### EXPECTED RESULTS
_"key2": "value2"_ shouldn't be there because the second test_hash variable should replace the first one.
This was the default behavior before this bug.
```bash
$ ansible-inventory -i ./test_inventory1.yml -i ./test_inventory2.yml --list
{
"_meta": {
"hostvars": {
"localhost": {
"test_hash": {
"key1": "other_value1",
}
}
}
},
"all": {
"children": [
"ungrouped"
]
},
"ungrouped": {
"hosts": [
"localhost"
]
}
}
```
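For completeness, replace semantics can also be exercised outside of `ansible-inventory` by calling Ansible's general-purpose variable-combining helper directly. This is a minimal check assuming a local Ansible install; treating `combine_vars` as representative of the inventory code path is an assumption on my part:
```python
# Minimal check of replace semantics; assumes Ansible is importable locally.
from ansible.utils.vars import combine_vars

a = {"test_hash": {"key1": "value1", "key2": "value2"}}
b = {"test_hash": {"key1": "other_value1"}}

# With hash_behaviour=replace (the default), the later dict wins wholesale:
print(combine_vars(a, b))
# expected: {'test_hash': {'key1': 'other_value1'}}
```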
##### ACTUAL RESULTS
As you can see, the hashes are merged instead of replaced.
```bash
$ ansible-inventory -i ./test_inventory1.yml -i ./test_inventory2.yml --list
{
"_meta": {
"hostvars": {
"localhost": {
"test_hash": {
"key1": "other_value1",
"key2": "value2"
}
}
}
},
"all": {
"children": [
"ungrouped"
]
},
"ungrouped": {
"hosts": [
"localhost"
]
}
}
```
|
https://github.com/ansible/ansible/issues/72975
|
https://github.com/ansible/ansible/pull/72979
|
6487a239c0a085041a6c421bced5c354e4a94290
|
5e03e322de5b43b69c8aad5c0cb92e82ce0f3d17
| 2020-12-15T11:49:49Z |
python
| 2020-12-16T16:23:23Z |
changelogs/fragments/72979-fix-inventory-merge-hash-replace.yaml
|