Dataset schema (15 columns; the rows below repeat a single issue, once per updated file):

| column | type | range / values |
|---|---|---|
| status | stringclasses | 1 value |
| repo_name | stringclasses | 31 values |
| repo_url | stringclasses | 31 values |
| issue_id | int64 | 1 to 104k |
| title | stringlengths | 4 to 369 |
| body | stringlengths | 0 to 254k, nullable (⌀) |
| issue_url | stringlengths | 37 to 56 |
| pull_url | stringlengths | 37 to 54 |
| before_fix_sha | stringlengths | 40 to 40 |
| after_fix_sha | stringlengths | 40 to 40 |
| report_datetime | timestamp[us, tz=UTC] | |
| language | stringclasses | 5 values |
| commit_datetime | timestamp[us, tz=UTC] | |
| updated_file | stringlengths | 4 to 188 |
| file_content | stringlengths | 0 to 5.12M |
Row 1

- status: closed
- repo_name: ansible/ansible
- repo_url: https://github.com/ansible/ansible
- issue_id: 81722
- title: include_tasks within handler called within include_role doesn't work
- body:
### Summary
If a role has an `include_tasks` handler and that role is dynamically included via `include_role`, Ansible cannot find the included file. However, it finds the file perfectly well when the role with the handler is listed directly in the play's `roles:` section.
### Issue Type
Bug Report
### Component Name
handlers
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.4]
config file = None
configured module search path = ['/Users/tensin/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible
ansible collection location = /Users/tensin/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible
python version = 3.11.5 (main, Aug 24 2023, 15:09:45) [Clang 14.0.3 (clang-1403.0.22.14.1)] (/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
EDITOR(env: EDITOR) = vim
```
### OS / Environment
macOS Ventura
### Steps to Reproduce
```text
# tree
.
|-- playbook.yml
`-- roles
|-- bar
| |-- handlers
| | |-- item.yml
| | `-- main.yml
| `-- tasks
| `-- main.yml
`-- foo
`-- tasks
`-- main.yml
```
playbook.yml:
```yaml
- name: Test playbook
hosts: localhost
roles:
- bar
- foo
```
foo/tasks/main.yml:
```yaml
- include_role:
name: bar
```
bar/tasks/main.yml:
```yaml
- command: echo 1
changed_when: true
notify: bar_handler
- meta: flush_handlers
```
bar/handlers/main.yml:
```yaml
- listen: bar_handler
include_tasks: item.yml
loop: [1, 2, 3]
```
bar/handlers/item.yml:
```yaml
- command: echo '{{ item }}'
changed_when: false
```
Run using: `ansible-playbook playbook.yml`
### Expected Results
The bar role's tasks run twice (once from the play's `roles:` list and once via `include_role` from foo), and its handlers run successfully both times.
### Actual Results
```console
ansible-playbook [core 2.15.4]
config file = None
configured module search path = ['/Users/tensin/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible
ansible collection location = /Users/tensin/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible-playbook
python version = 3.11.5 (main, Aug 24 2023, 15:09:45) [Clang 14.0.3 (clang-1403.0.22.14.1)] (/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python)
jinja version = 3.1.2
libyaml = True
No config file found; using defaults
setting up inventory plugins
Loading collection ansible.builtin from
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Loading callback plugin default of type stdout, v2.0 from /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: playbook.yml *********************************************************
Positional arguments: playbook.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/etc/ansible/hosts',)
forks: 5
1 plays in playbook.yml
PLAY [Test playbook] ***********************************************************
TASK [Gathering Facts] *********************************************************
task path: /Volumes/workplace/personal/test/playbook.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572 `" && echo ansible-tmp-1695112052.4727402-80815-156931745653572="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/setup.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpktjgn0yt TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
TASK [bar : command] ***********************************************************
task path: /Volumes/workplace/personal/test/roles/bar/tasks/main.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206 `" && echo ansible-tmp-1695112054.1128068-80849-39195856654206="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpm2eyr1c4 TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206/ > /dev/null 2>&1 && sleep 0'
Notification for handler bar_handler has been saved.
changed: [localhost] => {
"changed": true,
"cmd": [
"echo",
"1"
],
"delta": "0:00:00.005418",
"end": "2023-09-19 10:27:34.337640",
"invocation": {
"module_args": {
"_raw_params": "echo 1",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:34.332222",
"stderr": "",
"stderr_lines": [],
"stdout": "1",
"stdout_lines": [
"1"
]
}
TASK [bar : meta] **************************************************************
task path: /Volumes/workplace/personal/test/roles/bar/tasks/main.yml:5
NOTIFIED HANDLER bar : include_tasks for localhost
META: triggered running handlers for localhost
RUNNING HANDLER [bar : include_tasks] ******************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/main.yml:1
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=1)
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=2)
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=3)
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147 `" && echo ansible-tmp-1695112054.42143-80873-18381882637147="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmp2xu8xoya TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"1"
],
"delta": "0:00:00.004997",
"end": "2023-09-19 10:27:34.580493",
"invocation": {
"module_args": {
"_raw_params": "echo '1'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:34.575496",
"stderr": "",
"stderr_lines": [],
"stdout": "1",
"stdout_lines": [
"1"
]
}
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946 `" && echo ansible-tmp-1695112054.6377301-80894-91754434326946="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmp5go6z4yo TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"2"
],
"delta": "0:00:00.005343",
"end": "2023-09-19 10:27:34.789715",
"invocation": {
"module_args": {
"_raw_params": "echo '2'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:34.784372",
"stderr": "",
"stderr_lines": [],
"stdout": "2",
"stdout_lines": [
"2"
]
}
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551 `" && echo ansible-tmp-1695112054.84939-80915-139816169826551="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpcuoqfdyi TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"3"
],
"delta": "0:00:01.006513",
"end": "2023-09-19 10:27:36.018385",
"invocation": {
"module_args": {
"_raw_params": "echo '3'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:35.011872",
"stderr": "",
"stderr_lines": [],
"stdout": "3",
"stdout_lines": [
"3"
]
}
TASK [include_role : bar] ******************************************************
task path: /Volumes/workplace/personal/test/roles/foo/tasks/main.yml:1
TASK [bar : command] ***********************************************************
task path: /Volumes/workplace/personal/test/roles/bar/tasks/main.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766 `" && echo ansible-tmp-1695112056.146764-80937-219796758919766="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpesvqaeoc TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766/ > /dev/null 2>&1 && sleep 0'
Notification for handler bar_handler has been saved.
changed: [localhost] => {
"changed": true,
"cmd": [
"echo",
"1"
],
"delta": "0:00:00.005010",
"end": "2023-09-19 10:27:36.319682",
"invocation": {
"module_args": {
"_raw_params": "echo 1",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:36.314672",
"stderr": "",
"stderr_lines": [],
"stdout": "1",
"stdout_lines": [
"1"
]
}
TASK [bar : meta] **************************************************************
task path: /Volumes/workplace/personal/test/roles/bar/tasks/main.yml:5
NOTIFIED HANDLER bar : include_tasks for localhost
NOTIFIED HANDLER bar : include_tasks for localhost
META: triggered running handlers for localhost
RUNNING HANDLER [bar : include_tasks] ******************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/main.yml:1
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=1)
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=2)
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=3)
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050 `" && echo ansible-tmp-1695112056.3968189-80959-67605206314050="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpsv65_5tb TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"1"
],
"delta": "0:00:00.005281",
"end": "2023-09-19 10:27:36.562253",
"invocation": {
"module_args": {
"_raw_params": "echo '1'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:36.556972",
"stderr": "",
"stderr_lines": [],
"stdout": "1",
"stdout_lines": [
"1"
]
}
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117 `" && echo ansible-tmp-1695112056.621751-80980-107541433073117="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmp1258e27y TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"2"
],
"delta": "0:00:00.005252",
"end": "2023-09-19 10:27:36.772082",
"invocation": {
"module_args": {
"_raw_params": "echo '2'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:36.766830",
"stderr": "",
"stderr_lines": [],
"stdout": "2",
"stdout_lines": [
"2"
]
}
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699 `" && echo ansible-tmp-1695112056.828794-81001-161624896246699="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpwmfv9yp_ TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"3"
],
"delta": "0:00:00.004990",
"end": "2023-09-19 10:27:36.998890",
"invocation": {
"module_args": {
"_raw_params": "echo '3'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:36.993900",
"stderr": "",
"stderr_lines": [],
"stdout": "3",
"stdout_lines": [
"3"
]
}
RUNNING HANDLER [bar : include_tasks] ******************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/main.yml:1
fatal: [localhost]: FAILED! => {
"reason": "Could not find or access '/Volumes/workplace/personal/test/item.yml' on the Ansible Controller."
}
fatal: [localhost]: FAILED! => {
"reason": "Could not find or access '/Volumes/workplace/personal/test/item.yml' on the Ansible Controller."
}
fatal: [localhost]: FAILED! => {
"reason": "Could not find or access '/Volumes/workplace/personal/test/item.yml' on the Ansible Controller."
}
PLAY RECAP *********************************************************************
localhost : ok=15 changed=2 unreachable=0 failed=3 skipped=0 rescued=0 ignored=0
```
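Note the path in the failure output: `item.yml` is resolved relative to the playbook directory (`/Volumes/workplace/personal/test/`) rather than the role's `handlers/` directory, so the handler apparently loses its role search path when the role is re-entered via `include_role`. A minimal workaround sketch, assuming the standard `role_path` magic variable is available in the handler context (my assumption, not something from the report or the fix):

```yaml
# bar/handlers/main.yml -- hypothetical workaround, not the upstream fix:
# anchoring the include to the role directory bypasses the broken
# relative lookup.
- listen: bar_handler
  include_tasks: "{{ role_path }}/handlers/item.yml"
  loop: [1, 2, 3]
```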
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
- issue_url: https://github.com/ansible/ansible/issues/81722
- pull_url: https://github.com/ansible/ansible/pull/81733
- before_fix_sha: 86fd7026a88988c224ae175a281e7e6e2f3c5bc3
- after_fix_sha: 1e7f7875c617a12e5b16bcf290d489a6446febdb
- report_datetime: 2023-09-19T08:28:30Z
- language: python
- commit_datetime: 2023-09-21T19:12:04Z
- updated_file: test/integration/targets/handlers/roles/include_role_include_tasks_handler/tasks/main.yml
- file_content: (empty)
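The `file_content` cell is empty for this row. Purely for orientation, here is a hypothetical sketch of what the tasks file of such a regression-test role might contain, mirroring the reproducer above; the command, handler name, and structure are assumptions, not the actual upstream test:

```yaml
# Hypothetical roles/include_role_include_tasks_handler/tasks/main.yml.
# Assumed contents -- the real file added by PR 81733 is not shown here.
- command: echo notify
  changed_when: true
  notify: handler_with_include_tasks
- meta: flush_handlers
```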
Row 2

- status: closed
- repo_name: ansible/ansible
- repo_url: https://github.com/ansible/ansible
- issue_id: 81722
- title: include_tasks within handler called within include_role doesn't work
- body: same as Row 1
- issue_url: https://github.com/ansible/ansible/issues/81722
- pull_url: https://github.com/ansible/ansible/pull/81733
- before_fix_sha: 86fd7026a88988c224ae175a281e7e6e2f3c5bc3
- after_fix_sha: 1e7f7875c617a12e5b16bcf290d489a6446febdb
- report_datetime: 2023-09-19T08:28:30Z
- language: python
- commit_datetime: 2023-09-21T19:12:04Z
- updated_file: test/integration/targets/handlers/runme.sh
- file_content:

```shell
#!/usr/bin/env bash
set -eux
export ANSIBLE_FORCE_HANDLERS
ANSIBLE_FORCE_HANDLERS=false
# simple handler test
ansible-playbook test_handlers.yml -i inventory.handlers -v "$@" --tags scenario1
# simple from_handlers test
ansible-playbook from_handlers.yml -i inventory.handlers -v "$@" --tags scenario1
ansible-playbook test_listening_handlers.yml -i inventory.handlers -v "$@"
[ "$(ansible-playbook test_handlers.yml -i inventory.handlers -v "$@" --tags scenario2 -l A \
| grep -E -o 'RUNNING HANDLER \[test_handlers : .*]')" = "RUNNING HANDLER [test_handlers : test handler]" ]
# Test forcing handlers using the linear and free strategy
for strategy in linear free; do
export ANSIBLE_STRATEGY=$strategy
# Not forcing, should only run on successful host
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_B" ]
# Forcing from command line
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing from command line, should only run later tasks on unfailed hosts
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers \
| grep -E -o CALLED_TASK_. | sort | uniq | xargs)" = "CALLED_TASK_B CALLED_TASK_D CALLED_TASK_E" ]
# Forcing from command line, should call handlers even if all hosts fail
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers -e fail_all=yes \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing from ansible.cfg
[ "$(ANSIBLE_FORCE_HANDLERS=true ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing true in play
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags force_true_in_play \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing false in play, which overrides command line
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags force_false_in_play --force-handlers \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_B" ]
# https://github.com/ansible/ansible/pull/80898
[ "$(ansible-playbook 80880.yml -i inventory.handlers -vv "$@" 2>&1)" ]
unset ANSIBLE_STRATEGY
done
[ "$(ansible-playbook test_handlers_include.yml -i ../../inventory -v "$@" --tags playbook_include_handlers \
| grep -E -o 'RUNNING HANDLER \[.*]')" = "RUNNING HANDLER [test handler]" ]
[ "$(ansible-playbook test_handlers_include.yml -i ../../inventory -v "$@" --tags role_include_handlers \
| grep -E -o 'RUNNING HANDLER \[test_handlers_include : .*]')" = "RUNNING HANDLER [test_handlers_include : test handler]" ]
[ "$(ansible-playbook test_handlers_include_role.yml -i ../../inventory -v "$@" \
| grep -E -o 'RUNNING HANDLER \[test_handlers_include_role : .*]')" = "RUNNING HANDLER [test_handlers_include_role : test handler]" ]
# Notify handler listen
ansible-playbook test_handlers_listen.yml -i inventory.handlers -v "$@"
# Notify inexistent handlers results in error
set +e
result="$(ansible-playbook test_handlers_inexistent_notify.yml -i inventory.handlers "$@" 2>&1)"
set -e
grep -q "ERROR! The requested handler 'notify_inexistent_handler' was not found in either the main handlers list nor in the listening handlers list" <<< "$result"
# Notify inexistent handlers without errors when ANSIBLE_ERROR_ON_MISSING_HANDLER=false
ANSIBLE_ERROR_ON_MISSING_HANDLER=false ansible-playbook test_handlers_inexistent_notify.yml -i inventory.handlers -v "$@"
ANSIBLE_ERROR_ON_MISSING_HANDLER=false ansible-playbook test_templating_in_handlers.yml -v "$@"
# https://github.com/ansible/ansible/issues/36649
output_dir=/tmp
set +e
result="$(ansible-playbook test_handlers_any_errors_fatal.yml -e output_dir=$output_dir -i inventory.handlers -v "$@" 2>&1)"
set -e
[ ! -f $output_dir/should_not_exist_B ] || (rm -f $output_dir/should_not_exist_B && exit 1)
# https://github.com/ansible/ansible/issues/47287
[ "$(ansible-playbook test_handlers_including_task.yml -i ../../inventory -v "$@" | grep -E -o 'failed=[0-9]+')" = "failed=0" ]
# https://github.com/ansible/ansible/issues/71222
ansible-playbook test_role_handlers_including_tasks.yml -i ../../inventory -v "$@"
# https://github.com/ansible/ansible/issues/27237
set +e
result="$(ansible-playbook test_handlers_template_run_once.yml -i inventory.handlers "$@" 2>&1)"
set -e
grep -q "handler A" <<< "$result"
grep -q "handler B" <<< "$result"
# Test an undefined variable in another handler name isn't a failure
ansible-playbook 58841.yml "$@" --tags lazy_evaluation 2>&1 | tee out.txt ; cat out.txt
grep out.txt -e "\[WARNING\]: Handler 'handler name with {{ test_var }}' is unusable"
[ "$(grep out.txt -ce 'handler ran')" = "1" ]
[ "$(grep out.txt -ce 'handler with var ran')" = "0" ]
# Test templating a handler name with a defined variable
ansible-playbook 58841.yml "$@" --tags evaluation_time -e test_var=myvar | tee out.txt ; cat out.txt
[ "$(grep out.txt -ce 'handler ran')" = "0" ]
[ "$(grep out.txt -ce 'handler with var ran')" = "1" ]
# Test the handler is not found when the variable is undefined
ansible-playbook 58841.yml "$@" --tags evaluation_time 2>&1 | tee out.txt ; cat out.txt
grep out.txt -e "ERROR! The requested handler 'handler name with myvar' was not found"
grep out.txt -e "\[WARNING\]: Handler 'handler name with {{ test_var }}' is unusable"
[ "$(grep out.txt -ce 'handler ran')" = "0" ]
[ "$(grep out.txt -ce 'handler with var ran')" = "0" ]
# Test include_role and import_role cannot be used as handlers
ansible-playbook test_role_as_handler.yml "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! Using 'include_role' as a handler is not supported."
# Test notifying a handler from within include_tasks does not work anymore
ansible-playbook test_notify_included.yml "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'I was included')" = "1" ]
grep out.txt -e "ERROR! The requested handler 'handler_from_include' was not found in either the main handlers list nor in the listening handlers list"
ansible-playbook test_handlers_meta.yml -i inventory.handlers -vv "$@" | tee out.txt
[ "$(grep out.txt -ce 'RUNNING HANDLER \[noop_handler\]')" = "1" ]
[ "$(grep out.txt -ce 'META: noop')" = "1" ]
# https://github.com/ansible/ansible/issues/46447
set +e
test "$(ansible-playbook 46447.yml -i inventory.handlers -vv "$@" 2>&1 | grep -c 'SHOULD NOT GET HERE')"
set -e
# https://github.com/ansible/ansible/issues/52561
ansible-playbook 52561.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler1 ran')" = "1" ]
# Test flush_handlers meta task does not imply any_errors_fatal
ansible-playbook 54991.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran')" = "4" ]
ansible-playbook order.yml -i inventory.handlers "$@" 2>&1
set +e
ansible-playbook order.yml --force-handlers -e test_force_handlers=true -i inventory.handlers "$@" 2>&1
set -e
ansible-playbook include_handlers_fail_force.yml --force-handlers -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'included handler ran')" = "1" ]
ansible-playbook test_flush_handlers_as_handler.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! flush_handlers cannot be used as a handler"
ansible-playbook test_skip_flush.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran')" = "0" ]
ansible-playbook test_flush_in_rescue_always.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran in rescue')" = "1" ]
[ "$(grep out.txt -ce 'handler ran in always')" = "2" ]
[ "$(grep out.txt -ce 'lockstep works')" = "2" ]
ansible-playbook test_handlers_infinite_loop.yml -i inventory.handlers "$@" 2>&1
ansible-playbook test_flush_handlers_rescue_always.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'rescue ran')" = "1" ]
[ "$(grep out.txt -ce 'always ran')" = "2" ]
[ "$(grep out.txt -ce 'should run for both hosts')" = "2" ]
ansible-playbook test_fqcn_meta_flush_handlers.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
grep out.txt -e "handler ran"
grep out.txt -e "after flush"
ansible-playbook 79776.yml -i inventory.handlers "$@"
ansible-playbook test_block_as_handler.yml "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! Using a block as a handler is not supported."
ansible-playbook test_block_as_handler-include.yml "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! Using a block as a handler is not supported."
ansible-playbook test_block_as_handler-import.yml "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! Using a block as a handler is not supported."
ansible-playbook test_include_role_handler_once.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran')" = "1" ]
ansible-playbook test_listen_role_dedup.yml "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'a handler from a role')" = "1" ]
ansible localhost -m include_role -a "name=r1-dep_chain-vars" "$@"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,722 |
include_tasks within handler called within include_role doesn't work
|
### Summary
If there's a role with an `include_tasks` handler, and it's dynamically included by `include_role`, Ansible cannot find the included file. However, it finds the included file perfectly well when the role with the handler is listed in the play's `roles:` section directly.
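A plausible workaround, assuming `role_path` still resolves to the bar role's directory when the handler runs (not verified against this exact setup), is to anchor the include to the role's own path instead of a bare file name:
```
- listen: bar_handler
  include_tasks: "{{ role_path }}/handlers/item.yml"
  loop: [1, 2, 3]
```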
### Issue Type
Bug Report
### Component Name
handlers
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.4]
config file = None
configured module search path = ['/Users/tensin/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible
ansible collection location = /Users/tensin/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible
python version = 3.11.5 (main, Aug 24 2023, 15:09:45) [Clang 14.0.3 (clang-1403.0.22.14.1)] (/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
EDITOR(env: EDITOR) = vim
```
### OS / Environment
macOS Ventura
### Steps to Reproduce
```
# tree
.
|-- playbook.yml
`-- roles
|-- bar
| |-- handlers
| | |-- item.yml
| | `-- main.yml
| `-- tasks
| `-- main.yml
`-- foo
`-- tasks
`-- main.yml
```
playbook.yml:
```
- name: Test playbook
hosts: localhost
roles:
- bar
- foo
```
foo/tasks/main.yml:
```
- include_role:
name: bar
```
bar/tasks/main.yml:
```
- command: echo 1
changed_when: true
notify: bar_handler
- meta: flush_handlers
```
bar/handlers/main.yml:
```
- listen: bar_handler
include_tasks: item.yml
loop: [1, 2, 3]
```
bar/handlers/item.yml:
```
- command: echo '{{ item }}'
changed_when: false
```
Run using: `ansible-playbook playbook.yml`
### Expected Results
The bar role is executed twice; its handlers are executed twice.
### Actual Results
```console
ansible-playbook [core 2.15.4]
config file = None
configured module search path = ['/Users/tensin/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible
ansible collection location = /Users/tensin/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible-playbook
python version = 3.11.5 (main, Aug 24 2023, 15:09:45) [Clang 14.0.3 (clang-1403.0.22.14.1)] (/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python)
jinja version = 3.1.2
libyaml = True
No config file found; using defaults
setting up inventory plugins
Loading collection ansible.builtin from
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Loading callback plugin default of type stdout, v2.0 from /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: playbook.yml *********************************************************
Positional arguments: playbook.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/etc/ansible/hosts',)
forks: 5
1 plays in playbook.yml
PLAY [Test playbook] ***********************************************************
TASK [Gathering Facts] *********************************************************
task path: /Volumes/workplace/personal/test/playbook.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572 `" && echo ansible-tmp-1695112052.4727402-80815-156931745653572="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/setup.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpktjgn0yt TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
TASK [bar : command] ***********************************************************
task path: /Volumes/workplace/personal/test/roles/bar/tasks/main.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206 `" && echo ansible-tmp-1695112054.1128068-80849-39195856654206="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpm2eyr1c4 TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206/ > /dev/null 2>&1 && sleep 0'
Notification for handler bar_handler has been saved.
changed: [localhost] => {
"changed": true,
"cmd": [
"echo",
"1"
],
"delta": "0:00:00.005418",
"end": "2023-09-19 10:27:34.337640",
"invocation": {
"module_args": {
"_raw_params": "echo 1",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:34.332222",
"stderr": "",
"stderr_lines": [],
"stdout": "1",
"stdout_lines": [
"1"
]
}
TASK [bar : meta] **************************************************************
task path: /Volumes/workplace/personal/test/roles/bar/tasks/main.yml:5
NOTIFIED HANDLER bar : include_tasks for localhost
META: triggered running handlers for localhost
RUNNING HANDLER [bar : include_tasks] ******************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/main.yml:1
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=1)
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=2)
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=3)
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147 `" && echo ansible-tmp-1695112054.42143-80873-18381882637147="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmp2xu8xoya TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"1"
],
"delta": "0:00:00.004997",
"end": "2023-09-19 10:27:34.580493",
"invocation": {
"module_args": {
"_raw_params": "echo '1'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:34.575496",
"stderr": "",
"stderr_lines": [],
"stdout": "1",
"stdout_lines": [
"1"
]
}
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946 `" && echo ansible-tmp-1695112054.6377301-80894-91754434326946="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmp5go6z4yo TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"2"
],
"delta": "0:00:00.005343",
"end": "2023-09-19 10:27:34.789715",
"invocation": {
"module_args": {
"_raw_params": "echo '2'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:34.784372",
"stderr": "",
"stderr_lines": [],
"stdout": "2",
"stdout_lines": [
"2"
]
}
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551 `" && echo ansible-tmp-1695112054.84939-80915-139816169826551="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpcuoqfdyi TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"3"
],
"delta": "0:00:01.006513",
"end": "2023-09-19 10:27:36.018385",
"invocation": {
"module_args": {
"_raw_params": "echo '3'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:35.011872",
"stderr": "",
"stderr_lines": [],
"stdout": "3",
"stdout_lines": [
"3"
]
}
TASK [include_role : bar] ******************************************************
task path: /Volumes/workplace/personal/test/roles/foo/tasks/main.yml:1
TASK [bar : command] ***********************************************************
task path: /Volumes/workplace/personal/test/roles/bar/tasks/main.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766 `" && echo ansible-tmp-1695112056.146764-80937-219796758919766="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpesvqaeoc TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766/ > /dev/null 2>&1 && sleep 0'
Notification for handler bar_handler has been saved.
changed: [localhost] => {
"changed": true,
"cmd": [
"echo",
"1"
],
"delta": "0:00:00.005010",
"end": "2023-09-19 10:27:36.319682",
"invocation": {
"module_args": {
"_raw_params": "echo 1",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:36.314672",
"stderr": "",
"stderr_lines": [],
"stdout": "1",
"stdout_lines": [
"1"
]
}
TASK [bar : meta] **************************************************************
task path: /Volumes/workplace/personal/test/roles/bar/tasks/main.yml:5
NOTIFIED HANDLER bar : include_tasks for localhost
NOTIFIED HANDLER bar : include_tasks for localhost
META: triggered running handlers for localhost
RUNNING HANDLER [bar : include_tasks] ******************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/main.yml:1
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=1)
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=2)
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=3)
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050 `" && echo ansible-tmp-1695112056.3968189-80959-67605206314050="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpsv65_5tb TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"1"
],
"delta": "0:00:00.005281",
"end": "2023-09-19 10:27:36.562253",
"invocation": {
"module_args": {
"_raw_params": "echo '1'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:36.556972",
"stderr": "",
"stderr_lines": [],
"stdout": "1",
"stdout_lines": [
"1"
]
}
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117 `" && echo ansible-tmp-1695112056.621751-80980-107541433073117="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmp1258e27y TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"2"
],
"delta": "0:00:00.005252",
"end": "2023-09-19 10:27:36.772082",
"invocation": {
"module_args": {
"_raw_params": "echo '2'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:36.766830",
"stderr": "",
"stderr_lines": [],
"stdout": "2",
"stdout_lines": [
"2"
]
}
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699 `" && echo ansible-tmp-1695112056.828794-81001-161624896246699="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpwmfv9yp_ TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"3"
],
"delta": "0:00:00.004990",
"end": "2023-09-19 10:27:36.998890",
"invocation": {
"module_args": {
"_raw_params": "echo '3'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:36.993900",
"stderr": "",
"stderr_lines": [],
"stdout": "3",
"stdout_lines": [
"3"
]
}
RUNNING HANDLER [bar : include_tasks] ******************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/main.yml:1
fatal: [localhost]: FAILED! => {
"reason": "Could not find or access '/Volumes/workplace/personal/test/item.yml' on the Ansible Controller."
}
fatal: [localhost]: FAILED! => {
"reason": "Could not find or access '/Volumes/workplace/personal/test/item.yml' on the Ansible Controller."
}
fatal: [localhost]: FAILED! => {
"reason": "Could not find or access '/Volumes/workplace/personal/test/item.yml' on the Ansible Controller."
}
PLAY RECAP *********************************************************************
localhost : ok=15 changed=2 unreachable=0 failed=3 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81722
|
https://github.com/ansible/ansible/pull/81733
|
86fd7026a88988c224ae175a281e7e6e2f3c5bc3
|
1e7f7875c617a12e5b16bcf290d489a6446febdb
| 2023-09-19T08:28:30Z |
python
| 2023-09-21T19:12:04Z |
test/integration/targets/handlers/test_include_tasks_in_include_role.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,478 |
Missing option passno and dump in ansible_mounts
|
### Summary
We want to migrate certain /etc/fstab entries from one system to another. It is an easy task, as `ansible.posix.mount` provides an interface to write all the necessary data.
Most of the data is automatically discovered by Ansible and is available in `ansible_mounts`. Unfortunately, the `dump` and `passno` options as defined in [/etc/fstab](https://man7.org/linux/man-pages/man5/fstab.5.html) are not available.
Please add the missing options to the `ansible_mounts` structure.
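For reference, the two values are the fifth (`fs_freq`, used by dump) and sixth (`fs_passno`, used by fsck) whitespace-separated columns of an fstab row, both defaulting to 0 when omitted. A minimal, hypothetical sketch of reading them (not Ansible's implementation):
```
# Hypothetical sketch only; fstab(5) column layout:
# fs_spec fs_file fs_vfstype fs_mntops fs_freq fs_passno
def parse_fstab(path='/etc/fstab'):
    entries = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('#'):
                continue
            fields = line.split()
            entries.append({
                'device': fields[0],
                'mount': fields[1],
                'fstype': fields[2],
                'options': fields[3],
                # both columns default to 0 when omitted
                'dump': int(fields[4]) if len(fields) > 4 else 0,
                'passno': int(fields[5]) if len(fields) > 5 else 0,
            })
    return entries
```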
### Issue Type
Bug Report
### Component Name
facts
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.8]
config file = /home/carsten/ansible/ansible.cfg
configured module search path = ['/home/carsten/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/carsten/virtualenv/lib64/python3.10/site-packages/ansible
ansible collection location = /home/carsten/.ansible/collections:/usr/share/ansible/collections
executable location = /home/carsten/virtualenv/bin/ansible
python version = 3.10.8 (main, Oct 28 2022, 17:28:32) [GCC]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
$ ansible --version
ansible [core 2.13.8]
config file = /home/carsten/ansible/ansible.cfg
configured module search path = ['/home/carsten/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/carsten/virtualenv/lib64/python3.10/site-packages/ansible
ansible collection location = /home/carsten/.ansible/collections:/usr/share/ansible/collections
executable location = /home/carsten/virtualenv/bin/ansible
python version = 3.10.8 (main, Oct 28 2022, 17:28:32) [GCC]
jinja version = 3.1.2
libyaml = True
$ ansible-config dump --only-changed -t all | cat
CACHE_PLUGIN(/home/carsten/ansible/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/home/carsten/ansible/ansible.cfg) = ./facts
CACHE_PLUGIN_TIMEOUT(/home/carsten/ansible/ansible.cfg) = 28800
DEFAULT_GATHERING(/home/carsten/ansible/ansible.cfg) = smart
DEFAULT_HOST_LIST(/home/carsten/ansible/ansible.cfg) = ['/home/carsten/ansible/hosts']
DEFAULT_ROLES_PATH(/home/carsten/ansible/ansible.cfg) = ['/home/carsten/ansible/roles']
DEFAULT_STDOUT_CALLBACK(/home/carsten/ansible/ansible.cfg) = yaml
DEFAULT_VAULT_PASSWORD_FILE(/home/carsten/ansible/ansible.cfg) = /home/carsten/.ansible_vault_password
HOST_KEY_CHECKING(/home/carsten/ansible/ansible.cfg) = False
INTERPRETER_PYTHON(/home/carsten/ansible/ansible.cfg) = auto_silent
CACHE:
=====
jsonfile:
________
_timeout(/home/carsten/ansible/ansible.cfg) = 28800
_uri(/home/carsten/ansible/ansible.cfg) = /home/carsten/ansible/facts
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/home/carsten/ansible/ansible.cfg) = False
ssh_args(/home/carsten/ansible/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=1200 -o ServerAliveInterval=180 -o StrictHostKeyChecking=no
ssh:
___
control_path(/home/carsten/ansible/ansible.cfg) = %(directory)s/%%C
host_key_checking(/home/carsten/ansible/ansible.cfg) = False
pipelining(/home/carsten/ansible/ansible.cfg) = True
ssh_args(/home/carsten/ansible/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=1200 -o ServerAliveInterval=180 -o StrictHostKeyChecking=no
```
### OS / Environment
$ cat /etc/os-release
NAME="SLES"
VERSION="15-SP4"
VERSION_ID="15.4"
PRETTY_NAME="SUSE Linux Enterprise Server 15 SP4"
ID="sles"
ID_LIKE="suse"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:suse:sles:15:sp4"
DOCUMENTATION_URL="https://documentation.suse.com/"
### Steps to Reproduce
```yaml
---
- name: Test playbook
hosts: localhost
tasks:
- name: Show details for /home mount
debug:
var: item
with_items: "{{ ansible_mounts }}"
when: item.mount == '/home'
...
```
```
# grep home /etc/fstab
UUID=63ac199a-7e91-437c-851a-750ab634578e /home ext4 defaults 0 0
```
### Expected Results
```console
ok: [localhost] => (item={'block_available': 120061977, 'block_size': 4096, 'block_total': 128734272, 'block_used': 8672295, 'device': '/dev/sdb', 'fstype': 'ext4', 'inode_available': 32252577, 'inode_total': 32768000, 'inode_used': 515423, 'mount': '/home', 'options': 'rw,relatime', 'size_available': 491773857792, 'size_total': 527295578112, 'uuid': '63ac199a-7e91-437c-851a-750ab634578e'}) =>
ansible_loop_var: item
item:
block_available: 120061977
block_size: 4096
block_total: 128734272
block_used: 8672295
device: /dev/sdb
dump: 0 <---- new
fstype: ext4
inode_available: 32252577
inode_total: 32768000
inode_used: 515423
mount: /home
options: rw,relatime
passno: 0 <---- new
size_available: 491773857792
size_total: 527295578112
uuid: 63ac199a-7e91-437c-851a-750ab634578e
```
### Actual Results
```console
ok: [localhost] => (item={'block_available': 120061977, 'block_size': 4096, 'block_total': 128734272, 'block_used': 8672295, 'device': '/dev/sdb', 'fstype': 'ext4', 'inode_available': 32252577, 'inode_total': 32768000, 'inode_used': 515423, 'mount': '/home', 'options': 'rw,relatime', 'size_available': 491773857792, 'size_total': 527295578112, 'uuid': '63ac199a-7e91-437c-851a-750ab634578e'}) =>
ansible_loop_var: item
item:
block_available: 120061977
block_size: 4096
block_total: 128734272
block_used: 8672295
device: /dev/sdb
fstype: ext4
inode_available: 32252577
inode_total: 32768000
inode_used: 515423
mount: /home
options: rw,relatime
size_available: 491773857792
size_total: 527295578112
uuid: 63ac199a-7e91-437c-851a-750ab634578e
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80478
|
https://github.com/ansible/ansible/pull/81768
|
230f956e255ea1a98c57e947b341f89bf0b93abc
|
51f2ddd445e91765be4decd4f594adf781d15867
| 2023-04-11T14:24:47Z |
python
| 2023-09-26T15:12:03Z |
changelogs/fragments/80478-extend-mount-info.yml
| |
lib/ansible/module_utils/facts/hardware/linux.py
|
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import collections
import errno
import glob
import json
import os
import re
import sys
import time
from multiprocessing import cpu_count
from multiprocessing.pool import ThreadPool
from ansible.module_utils.common.text.converters import to_text
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.text.formatters import bytes_to_human
from ansible.module_utils.facts.hardware.base import Hardware, HardwareCollector
from ansible.module_utils.facts.utils import get_file_content, get_file_lines, get_mount_size
from ansible.module_utils.six import iteritems
# import this as a module to ensure we get the same module instance
from ansible.module_utils.facts import timeout
def get_partition_uuid(partname):
try:
uuids = os.listdir("/dev/disk/by-uuid")
except OSError:
return
for uuid in uuids:
dev = os.path.realpath("/dev/disk/by-uuid/" + uuid)
if dev == ("/dev/" + partname):
return uuid
return None
class LinuxHardware(Hardware):
"""
Linux-specific subclass of Hardware. Defines memory and CPU facts:
- memfree_mb
- memtotal_mb
- swapfree_mb
- swaptotal_mb
- processor (a list)
- processor_cores
- processor_count
In addition, it also defines a number of DMI facts and device facts.
"""
platform = 'Linux'
# Originally only had these four as toplevelfacts
ORIGINAL_MEMORY_FACTS = frozenset(('MemTotal', 'SwapTotal', 'MemFree', 'SwapFree'))
# Now we have all of these in a dict structure
MEMORY_FACTS = ORIGINAL_MEMORY_FACTS.union(('Buffers', 'Cached', 'SwapCached'))
# regex used against findmnt output to detect bind mounts
BIND_MOUNT_RE = re.compile(r'.*\]')
# regex used against mtab content to find entries that are bind mounts
MTAB_BIND_MOUNT_RE = re.compile(r'.*bind.*"')
# regex used for replacing octal escape sequences
OCTAL_ESCAPE_RE = re.compile(r'\\[0-9]{3}')
def populate(self, collected_facts=None):
hardware_facts = {}
locale = get_best_parsable_locale(self.module)
self.module.run_command_environ_update = {'LANG': locale, 'LC_ALL': locale, 'LC_NUMERIC': locale}
cpu_facts = self.get_cpu_facts(collected_facts=collected_facts)
memory_facts = self.get_memory_facts()
dmi_facts = self.get_dmi_facts()
device_facts = self.get_device_facts()
uptime_facts = self.get_uptime_facts()
lvm_facts = self.get_lvm_facts()
mount_facts = {}
try:
mount_facts = self.get_mount_facts()
except timeout.TimeoutError:
self.module.warn("No mount facts were gathered due to timeout.")
hardware_facts.update(cpu_facts)
hardware_facts.update(memory_facts)
hardware_facts.update(dmi_facts)
hardware_facts.update(device_facts)
hardware_facts.update(uptime_facts)
hardware_facts.update(lvm_facts)
hardware_facts.update(mount_facts)
return hardware_facts
def get_memory_facts(self):
memory_facts = {}
if not os.access("/proc/meminfo", os.R_OK):
return memory_facts
memstats = {}
for line in get_file_lines("/proc/meminfo"):
data = line.split(":", 1)
key = data[0]
if key in self.ORIGINAL_MEMORY_FACTS:
val = data[1].strip().split(' ')[0]
memory_facts["%s_mb" % key.lower()] = int(val) // 1024
if key in self.MEMORY_FACTS:
val = data[1].strip().split(' ')[0]
memstats[key.lower()] = int(val) // 1024
if None not in (memstats.get('memtotal'), memstats.get('memfree')):
memstats['real:used'] = memstats['memtotal'] - memstats['memfree']
if None not in (memstats.get('cached'), memstats.get('memfree'), memstats.get('buffers')):
memstats['nocache:free'] = memstats['cached'] + memstats['memfree'] + memstats['buffers']
if None not in (memstats.get('memtotal'), memstats.get('nocache:free')):
memstats['nocache:used'] = memstats['memtotal'] - memstats['nocache:free']
if None not in (memstats.get('swaptotal'), memstats.get('swapfree')):
memstats['swap:used'] = memstats['swaptotal'] - memstats['swapfree']
memory_facts['memory_mb'] = {
'real': {
'total': memstats.get('memtotal'),
'used': memstats.get('real:used'),
'free': memstats.get('memfree'),
},
'nocache': {
'free': memstats.get('nocache:free'),
'used': memstats.get('nocache:used'),
},
'swap': {
'total': memstats.get('swaptotal'),
'free': memstats.get('swapfree'),
'used': memstats.get('swap:used'),
'cached': memstats.get('swapcached'),
},
}
return memory_facts
def get_cpu_facts(self, collected_facts=None):
cpu_facts = {}
collected_facts = collected_facts or {}
i = 0
vendor_id_occurrence = 0
model_name_occurrence = 0
processor_occurrence = 0
physid = 0
coreid = 0
sockets = {}
cores = {}
zp = 0
zmt = 0
xen = False
xen_paravirt = False
try:
if os.path.exists('/proc/xen'):
xen = True
else:
for line in get_file_lines('/sys/hypervisor/type'):
if line.strip() == 'xen':
xen = True
# Only interested in the first line
break
except IOError:
pass
if not os.access("/proc/cpuinfo", os.R_OK):
return cpu_facts
cpu_facts['processor'] = []
for line in get_file_lines('/proc/cpuinfo'):
data = line.split(":", 1)
key = data[0].strip()
try:
val = data[1].strip()
except IndexError:
val = ""
if xen:
if key == 'flags':
# Check for vme cpu flag, Xen paravirt does not expose this.
# Need to detect Xen paravirt because it exposes cpuinfo
# differently than Xen HVM or KVM and causes reporting of
# only a single cpu core.
if 'vme' not in val:
xen_paravirt = True
# model name is for Intel arch, Processor (mind the uppercase P)
# works for some ARM devices, like the Sheevaplug.
if key in ['model name', 'Processor', 'vendor_id', 'cpu', 'Vendor', 'processor']:
if 'processor' not in cpu_facts:
cpu_facts['processor'] = []
cpu_facts['processor'].append(val)
if key == 'vendor_id':
vendor_id_occurrence += 1
if key == 'model name':
model_name_occurrence += 1
if key == 'processor':
processor_occurrence += 1
i += 1
elif key == 'physical id':
physid = val
if physid not in sockets:
sockets[physid] = 1
elif key == 'core id':
coreid = val
if coreid not in sockets:
cores[coreid] = 1
elif key == 'cpu cores':
sockets[physid] = int(val)
elif key == 'siblings':
cores[coreid] = int(val)
# S390x classic cpuinfo
elif key == '# processors':
zp = int(val)
elif key == 'max thread id':
zmt = int(val) + 1
# SPARC
elif key == 'ncpus active':
i = int(val)
# Skip for platforms without vendor_id/model_name in cpuinfo (e.g ppc64le)
if vendor_id_occurrence > 0:
if vendor_id_occurrence == model_name_occurrence:
i = vendor_id_occurrence
# The fields for ARM CPUs do not always include 'vendor_id' or 'model name',
# and sometimes includes both 'processor' and 'Processor'.
# The fields for Power CPUs include 'processor' and 'cpu'.
# Always use 'processor' count for ARM and Power systems
if collected_facts.get('ansible_architecture', '').startswith(('armv', 'aarch', 'ppc')):
i = processor_occurrence
if collected_facts.get('ansible_architecture') == 's390x':
# getting sockets would require 5.7+ with CONFIG_SCHED_TOPOLOGY
cpu_facts['processor_count'] = 1
cpu_facts['processor_cores'] = zp // zmt
cpu_facts['processor_threads_per_core'] = zmt
cpu_facts['processor_vcpus'] = zp
cpu_facts['processor_nproc'] = zp
else:
if xen_paravirt:
cpu_facts['processor_count'] = i
cpu_facts['processor_cores'] = i
cpu_facts['processor_threads_per_core'] = 1
cpu_facts['processor_vcpus'] = i
cpu_facts['processor_nproc'] = i
else:
if sockets:
cpu_facts['processor_count'] = len(sockets)
else:
cpu_facts['processor_count'] = i
socket_values = list(sockets.values())
if socket_values and socket_values[0]:
cpu_facts['processor_cores'] = socket_values[0]
else:
cpu_facts['processor_cores'] = 1
core_values = list(cores.values())
if core_values:
cpu_facts['processor_threads_per_core'] = core_values[0] // cpu_facts['processor_cores']
else:
cpu_facts['processor_threads_per_core'] = 1 // cpu_facts['processor_cores']
cpu_facts['processor_vcpus'] = (cpu_facts['processor_threads_per_core'] *
cpu_facts['processor_count'] * cpu_facts['processor_cores'])
cpu_facts['processor_nproc'] = processor_occurrence
# if the number of processors available to the module's
# thread cannot be determined, the processor count
# reported by /proc will be the default (as previously defined)
try:
cpu_facts['processor_nproc'] = len(
os.sched_getaffinity(0)
)
except AttributeError:
# In Python < 3.3, os.sched_getaffinity() is not available
try:
cmd = get_bin_path('nproc')
except ValueError:
pass
else:
rc, out, _err = self.module.run_command(cmd)
if rc == 0:
cpu_facts['processor_nproc'] = int(out)
return cpu_facts
def get_dmi_facts(self):
''' learn dmi facts from system
Try /sys first for dmi related facts.
If that is not available, fall back to dmidecode executable '''
dmi_facts = {}
if os.path.exists('/sys/devices/virtual/dmi/id/product_name'):
# Use kernel DMI info, if available
# DMI SPEC -- https://www.dmtf.org/sites/default/files/standards/documents/DSP0134_3.2.0.pdf
FORM_FACTOR = ["Unknown", "Other", "Unknown", "Desktop",
"Low Profile Desktop", "Pizza Box", "Mini Tower", "Tower",
"Portable", "Laptop", "Notebook", "Hand Held", "Docking Station",
"All In One", "Sub Notebook", "Space-saving", "Lunch Box",
"Main Server Chassis", "Expansion Chassis", "Sub Chassis",
"Bus Expansion Chassis", "Peripheral Chassis", "RAID Chassis",
"Rack Mount Chassis", "Sealed-case PC", "Multi-system",
"CompactPCI", "AdvancedTCA", "Blade", "Blade Enclosure",
"Tablet", "Convertible", "Detachable", "IoT Gateway",
"Embedded PC", "Mini PC", "Stick PC"]
DMI_DICT = {
'bios_date': '/sys/devices/virtual/dmi/id/bios_date',
'bios_vendor': '/sys/devices/virtual/dmi/id/bios_vendor',
'bios_version': '/sys/devices/virtual/dmi/id/bios_version',
'board_asset_tag': '/sys/devices/virtual/dmi/id/board_asset_tag',
'board_name': '/sys/devices/virtual/dmi/id/board_name',
'board_serial': '/sys/devices/virtual/dmi/id/board_serial',
'board_vendor': '/sys/devices/virtual/dmi/id/board_vendor',
'board_version': '/sys/devices/virtual/dmi/id/board_version',
'chassis_asset_tag': '/sys/devices/virtual/dmi/id/chassis_asset_tag',
'chassis_serial': '/sys/devices/virtual/dmi/id/chassis_serial',
'chassis_vendor': '/sys/devices/virtual/dmi/id/chassis_vendor',
'chassis_version': '/sys/devices/virtual/dmi/id/chassis_version',
'form_factor': '/sys/devices/virtual/dmi/id/chassis_type',
'product_name': '/sys/devices/virtual/dmi/id/product_name',
'product_serial': '/sys/devices/virtual/dmi/id/product_serial',
'product_uuid': '/sys/devices/virtual/dmi/id/product_uuid',
'product_version': '/sys/devices/virtual/dmi/id/product_version',
'system_vendor': '/sys/devices/virtual/dmi/id/sys_vendor',
}
for (key, path) in DMI_DICT.items():
data = get_file_content(path)
if data is not None:
if key == 'form_factor':
try:
dmi_facts['form_factor'] = FORM_FACTOR[int(data)]
except IndexError:
dmi_facts['form_factor'] = 'unknown (%s)' % data
else:
dmi_facts[key] = data
else:
dmi_facts[key] = 'NA'
else:
# Fall back to using dmidecode, if available
dmi_bin = self.module.get_bin_path('dmidecode')
DMI_DICT = {
'bios_date': 'bios-release-date',
'bios_vendor': 'bios-vendor',
'bios_version': 'bios-version',
'board_asset_tag': 'baseboard-asset-tag',
'board_name': 'baseboard-product-name',
'board_serial': 'baseboard-serial-number',
'board_vendor': 'baseboard-manufacturer',
'board_version': 'baseboard-version',
'chassis_asset_tag': 'chassis-asset-tag',
'chassis_serial': 'chassis-serial-number',
'chassis_vendor': 'chassis-manufacturer',
'chassis_version': 'chassis-version',
'form_factor': 'chassis-type',
'product_name': 'system-product-name',
'product_serial': 'system-serial-number',
'product_uuid': 'system-uuid',
'product_version': 'system-version',
'system_vendor': 'system-manufacturer',
}
for (k, v) in DMI_DICT.items():
if dmi_bin is not None:
(rc, out, err) = self.module.run_command('%s -s %s' % (dmi_bin, v))
if rc == 0:
# Strip out commented lines (specific dmidecode output)
thisvalue = ''.join([line for line in out.splitlines() if not line.startswith('#')])
try:
json.dumps(thisvalue)
except UnicodeDecodeError:
thisvalue = "NA"
dmi_facts[k] = thisvalue
else:
dmi_facts[k] = 'NA'
else:
dmi_facts[k] = 'NA'
return dmi_facts
def _run_lsblk(self, lsblk_path):
# call lsblk and collect all uuids
# --exclude 2 makes lsblk ignore floppy disks, which are slower to answer than typical timeouts
# this uses the linux major device number
# for details see https://www.kernel.org/doc/Documentation/devices.txt
args = ['--list', '--noheadings', '--paths', '--output', 'NAME,UUID', '--exclude', '2']
cmd = [lsblk_path] + args
rc, out, err = self.module.run_command(cmd)
return rc, out, err
def _lsblk_uuid(self):
uuids = {}
lsblk_path = self.module.get_bin_path("lsblk")
if not lsblk_path:
return uuids
rc, out, err = self._run_lsblk(lsblk_path)
if rc != 0:
return uuids
# each line will be in format:
# <devicename><some whitespace><uuid>
# /dev/sda1 32caaec3-ef40-4691-a3b6-438c3f9bc1c0
for lsblk_line in out.splitlines():
if not lsblk_line:
continue
line = lsblk_line.strip()
fields = line.rsplit(None, 1)
if len(fields) < 2:
continue
device_name, uuid = fields[0].strip(), fields[1].strip()
if device_name in uuids:
continue
uuids[device_name] = uuid
return uuids
def _udevadm_uuid(self, device):
# fallback for versions of lsblk <= 2.23 that don't have --paths, see _run_lsblk() above
uuid = 'N/A'
udevadm_path = self.module.get_bin_path('udevadm')
if not udevadm_path:
return uuid
cmd = [udevadm_path, 'info', '--query', 'property', '--name', device]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
return uuid
# a snippet of the output of the udevadm command below will be:
# ...
# ID_FS_TYPE=ext4
# ID_FS_USAGE=filesystem
# ID_FS_UUID=57b1a3e7-9019-4747-9809-7ec52bba9179
# ...
m = re.search('ID_FS_UUID=(.*)\n', out)
if m:
uuid = m.group(1)
return uuid
def _run_findmnt(self, findmnt_path):
args = ['--list', '--noheadings', '--notruncate']
cmd = [findmnt_path] + args
rc, out, err = self.module.run_command(cmd, errors='surrogate_then_replace')
return rc, out, err
def _find_bind_mounts(self):
bind_mounts = set()
findmnt_path = self.module.get_bin_path("findmnt")
if not findmnt_path:
return bind_mounts
rc, out, err = self._run_findmnt(findmnt_path)
if rc != 0:
return bind_mounts
# find bind mounts, in case /etc/mtab is a symlink to /proc/mounts
for line in out.splitlines():
fields = line.split()
# fields[0] is the TARGET, fields[1] is the SOURCE
if len(fields) < 2:
continue
# bind mounts will have a [/directory_name] in the SOURCE column
if self.BIND_MOUNT_RE.match(fields[1]):
bind_mounts.add(fields[0])
return bind_mounts
def _mtab_entries(self):
mtab_file = '/etc/mtab'
if not os.path.exists(mtab_file):
mtab_file = '/proc/mounts'
mtab = get_file_content(mtab_file, '')
mtab_entries = []
for line in mtab.splitlines():
fields = line.split()
if len(fields) < 4:
continue
mtab_entries.append(fields)
return mtab_entries
@staticmethod
def _replace_octal_escapes_helper(match):
# Convert to integer using base8 and then convert to character
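        # e.g. a match of '\040' is parsed as octal 040 == 32, so chr(32), a space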
return chr(int(match.group()[1:], 8))
def _replace_octal_escapes(self, value):
return self.OCTAL_ESCAPE_RE.sub(self._replace_octal_escapes_helper, value)
def get_mount_info(self, mount, device, uuids):
mount_size = get_mount_size(mount)
# _udevadm_uuid is a fallback for versions of lsblk <= 2.23 that don't have --paths
# see _run_lsblk() above
# https://github.com/ansible/ansible/issues/36077
uuid = uuids.get(device, self._udevadm_uuid(device))
return mount_size, uuid
def get_mount_facts(self):
mounts = []
# gather system lists
bind_mounts = self._find_bind_mounts()
uuids = self._lsblk_uuid()
mtab_entries = self._mtab_entries()
# start threads to query each mount
results = {}
pool = ThreadPool(processes=min(len(mtab_entries), cpu_count()))
maxtime = timeout.GATHER_TIMEOUT or timeout.DEFAULT_GATHER_TIMEOUT
for fields in mtab_entries:
# Transform octal escape sequences
fields = [self._replace_octal_escapes(field) for field in fields]
device, mount, fstype, options = fields[0], fields[1], fields[2], fields[3]
if not device.startswith(('/', '\\')) and ':/' not in device or fstype == 'none':
continue
mount_info = {'mount': mount,
'device': device,
'fstype': fstype,
'options': options}
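            # Note (sketch only, not upstream code): mtab/fstab rows also carry
            # fs_freq (dump) and fs_passno as fields[4] and fields[5]; they
            # could be surfaced here with something like:
            #   if len(fields) > 5:
            #       mount_info['dump'] = int(fields[4])
            #       mount_info['passno'] = int(fields[5])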
if mount in bind_mounts:
# only add if not already there, we might have a plain /etc/mtab
if not self.MTAB_BIND_MOUNT_RE.match(options):
mount_info['options'] += ",bind"
results[mount] = {'info': mount_info,
'extra': pool.apply_async(self.get_mount_info, (mount, device, uuids)),
'timelimit': time.time() + maxtime}
pool.close() # done with new workers, start gc
# wait for workers and get results
while results:
for mount in list(results):
done = False
res = results[mount]['extra']
try:
if res.ready():
done = True
if res.successful():
mount_size, uuid = res.get()
if mount_size:
results[mount]['info'].update(mount_size)
results[mount]['info']['uuid'] = uuid or 'N/A'
else:
# failed, try to find out why, if 'res.successful' we know there are no exceptions
results[mount]['info']['note'] = 'Could not get extra information: %s.' % (to_text(res.get()))
elif time.time() > results[mount]['timelimit']:
done = True
self.module.warn("Timeout exceeded when getting mount info for %s" % mount)
results[mount]['info']['note'] = 'Could not get extra information due to timeout'
except Exception as e:
import traceback
done = True
results[mount]['info'] = 'N/A'
self.module.warn("Error prevented getting extra info for mount %s: [%s] %s." % (mount, type(e), to_text(e)))
self.module.debug(traceback.format_exc())
if done:
# move results outside and make loop only handle pending
mounts.append(results[mount]['info'])
del results[mount]
# avoid cpu churn, sleep between retrying for loop with remaining mounts
time.sleep(0.1)
return {'mounts': mounts}
def get_device_links(self, link_dir):
if not os.path.exists(link_dir):
return {}
try:
retval = collections.defaultdict(set)
for entry in os.listdir(link_dir):
try:
target = os.path.basename(os.readlink(os.path.join(link_dir, entry)))
retval[target].add(entry)
except OSError:
continue
return dict((k, list(sorted(v))) for (k, v) in iteritems(retval))
except OSError:
return {}
def get_all_device_owners(self):
try:
retval = collections.defaultdict(set)
for path in glob.glob('/sys/block/*/slaves/*'):
elements = path.split('/')
device = elements[3]
target = elements[5]
retval[target].add(device)
return dict((k, list(sorted(v))) for (k, v) in iteritems(retval))
except OSError:
return {}
def get_all_device_links(self):
return {
'ids': self.get_device_links('/dev/disk/by-id'),
'uuids': self.get_device_links('/dev/disk/by-uuid'),
'labels': self.get_device_links('/dev/disk/by-label'),
'masters': self.get_all_device_owners(),
}
def get_holders(self, block_dev_dict, sysdir):
block_dev_dict['holders'] = []
if os.path.isdir(sysdir + "/holders"):
for folder in os.listdir(sysdir + "/holders"):
if not folder.startswith("dm-"):
continue
name = get_file_content(sysdir + "/holders/" + folder + "/dm/name")
if name:
block_dev_dict['holders'].append(name)
else:
block_dev_dict['holders'].append(folder)
def _get_sg_inq_serial(self, sg_inq, block):
device = "/dev/%s" % (block)
rc, drivedata, err = self.module.run_command([sg_inq, device])
if rc == 0:
serial = re.search(r"(?:Unit serial|Serial) number:\s+(\w+)", drivedata)
if serial:
return serial.group(1)
def get_device_facts(self):
device_facts = {}
device_facts['devices'] = {}
lspci = self.module.get_bin_path('lspci')
if lspci:
rc, pcidata, err = self.module.run_command([lspci, '-D'], errors='surrogate_then_replace')
else:
pcidata = None
try:
block_devs = os.listdir("/sys/block")
except OSError:
return device_facts
devs_wwn = {}
try:
devs_by_id = os.listdir("/dev/disk/by-id")
except OSError:
pass
else:
for link_name in devs_by_id:
if link_name.startswith("wwn-"):
try:
wwn_link = os.readlink(os.path.join("/dev/disk/by-id", link_name))
except OSError:
continue
devs_wwn[os.path.basename(wwn_link)] = link_name[4:]
links = self.get_all_device_links()
device_facts['device_links'] = links
for block in block_devs:
virtual = 1
sysfs_no_links = 0
try:
path = os.readlink(os.path.join("/sys/block/", block))
except OSError:
e = sys.exc_info()[1]
if e.errno == errno.EINVAL:
path = block
sysfs_no_links = 1
else:
continue
sysdir = os.path.join("/sys/block", path)
if sysfs_no_links == 1:
for folder in os.listdir(sysdir):
if "device" in folder:
virtual = 0
break
d = {}
d['virtual'] = virtual
d['links'] = {}
for (link_type, link_values) in iteritems(links):
d['links'][link_type] = link_values.get(block, [])
diskname = os.path.basename(sysdir)
for key in ['vendor', 'model', 'sas_address', 'sas_device_handle']:
d[key] = get_file_content(sysdir + "/device/" + key)
sg_inq = self.module.get_bin_path('sg_inq')
# we can get NVMe device's serial number from /sys/block/<name>/device/serial
serial_path = "/sys/block/%s/device/serial" % (block)
if sg_inq:
serial = self._get_sg_inq_serial(sg_inq, block)
if serial:
d['serial'] = serial
else:
serial = get_file_content(serial_path)
if serial:
d['serial'] = serial
for key, test in [('removable', '/removable'),
('support_discard', '/queue/discard_granularity'),
]:
d[key] = get_file_content(sysdir + test)
if diskname in devs_wwn:
d['wwn'] = devs_wwn[diskname]
d['partitions'] = {}
for folder in os.listdir(sysdir):
m = re.search("(" + diskname + r"[p]?\d+)", folder)
if m:
part = {}
partname = m.group(1)
part_sysdir = sysdir + "/" + partname
part['links'] = {}
for (link_type, link_values) in iteritems(links):
part['links'][link_type] = link_values.get(partname, [])
part['start'] = get_file_content(part_sysdir + "/start", 0)
part['sectors'] = get_file_content(part_sysdir + "/size", 0)
part['sectorsize'] = get_file_content(part_sysdir + "/queue/logical_block_size")
if not part['sectorsize']:
part['sectorsize'] = get_file_content(part_sysdir + "/queue/hw_sector_size", 512)
part['size'] = bytes_to_human((float(part['sectors']) * 512.0))
part['uuid'] = get_partition_uuid(partname)
self.get_holders(part, part_sysdir)
d['partitions'][partname] = part
d['rotational'] = get_file_content(sysdir + "/queue/rotational")
d['scheduler_mode'] = ""
scheduler = get_file_content(sysdir + "/queue/scheduler")
if scheduler is not None:
m = re.match(r".*?(\[(.*)\])", scheduler)
if m:
d['scheduler_mode'] = m.group(2)
d['sectors'] = get_file_content(sysdir + "/size")
if not d['sectors']:
d['sectors'] = 0
d['sectorsize'] = get_file_content(sysdir + "/queue/logical_block_size")
if not d['sectorsize']:
d['sectorsize'] = get_file_content(sysdir + "/queue/hw_sector_size", 512)
d['size'] = bytes_to_human(float(d['sectors']) * 512.0)
d['host'] = ""
# domains are numbered (0 to ffff), bus (0 to ff), slot (0 to 1f), and function (0 to 7).
m = re.match(r".+/([a-f0-9]{4}:[a-f0-9]{2}:[0|1][a-f0-9]\.[0-7])/", sysdir)
if m and pcidata:
pciid = m.group(1)
did = re.escape(pciid)
m = re.search("^" + did + r"\s(.*)$", pcidata, re.MULTILINE)
if m:
d['host'] = m.group(1)
self.get_holders(d, sysdir)
device_facts['devices'][diskname] = d
return device_facts
def get_uptime_facts(self):
uptime_facts = {}
uptime_file_content = get_file_content('/proc/uptime')
if uptime_file_content:
uptime_seconds_string = uptime_file_content.split(' ')[0]
uptime_facts['uptime_seconds'] = int(float(uptime_seconds_string))
return uptime_facts
def _find_mapper_device_name(self, dm_device):
dm_prefix = '/dev/dm-'
mapper_device = dm_device
if dm_device.startswith(dm_prefix):
dmsetup_cmd = self.module.get_bin_path('dmsetup', True)
mapper_prefix = '/dev/mapper/'
rc, dm_name, err = self.module.run_command("%s info -C --noheadings -o name %s" % (dmsetup_cmd, dm_device))
if rc == 0:
mapper_device = mapper_prefix + dm_name.rstrip()
return mapper_device
def get_lvm_facts(self):
""" Get LVM Facts if running as root and lvm utils are available """
lvm_facts = {'lvm': 'N/A'}
if os.getuid() == 0 and self.module.get_bin_path('vgs'):
lvm_util_options = '--noheadings --nosuffix --units g --separator ,'
vgs_path = self.module.get_bin_path('vgs')
# vgs fields: VG #PV #LV #SN Attr VSize VFree
vgs = {}
if vgs_path:
rc, vg_lines, err = self.module.run_command('%s %s' % (vgs_path, lvm_util_options))
for vg_line in vg_lines.splitlines():
items = vg_line.strip().split(',')
vgs[items[0]] = {'size_g': items[-2],
'free_g': items[-1],
'num_lvs': items[2],
'num_pvs': items[1]}
lvs_path = self.module.get_bin_path('lvs')
# lvs fields:
# LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lvs = {}
if lvs_path:
rc, lv_lines, err = self.module.run_command('%s %s' % (lvs_path, lvm_util_options))
for lv_line in lv_lines.splitlines():
items = lv_line.strip().split(',')
lvs[items[0]] = {'size_g': items[3], 'vg': items[1]}
pvs_path = self.module.get_bin_path('pvs')
# pvs fields: PV VG #Fmt #Attr PSize PFree
pvs = {}
if pvs_path:
rc, pv_lines, err = self.module.run_command('%s %s' % (pvs_path, lvm_util_options))
for pv_line in pv_lines.splitlines():
items = pv_line.strip().split(',')
pvs[self._find_mapper_device_name(items[0])] = {
'size_g': items[4],
'free_g': items[5],
'vg': items[1]}
lvm_facts['lvm'] = {'lvs': lvs, 'vgs': vgs, 'pvs': pvs}
return lvm_facts
class LinuxHardwareCollector(HardwareCollector):
_platform = 'Linux'
_fact_class = LinuxHardware
required_facts = set(['platform'])
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,478 |
Missing option passno and dump in ansible_mounts
|
### Summary
We want to migrate certain entries in /etc/fstab from one system to another. It is an easy task, as `ansible.posix.mount` provides an interface to write all the necessary data.
Most of the data is automatically discovered by Ansible and is available in `ansible_mounts`. Unfortunately, the `dump` and `passno` options as defined in [/etc/fstab](https://man7.org/linux/man-pages/man5/fstab.5.html) are not available.
Please add the missing options to the `ansible_mounts` structure.
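For illustration, here is a minimal sketch (not the actual Ansible implementation; field positions follow fstab(5)) of where the two missing columns live in an fstab-style line:
```python
# Minimal sketch, assuming a well-formed fstab line with all six fields.
line = "UUID=63ac199a-7e91-437c-851a-750ab634578e /home ext4 defaults 0 0"
fields = line.split()
entry = {
    "device": fields[0],
    "mount": fields[1],
    "fstype": fields[2],
    "options": fields[3],
    "dump": int(fields[4]),    # the missing 'dump' column
    "passno": int(fields[5]),  # the missing 'passno' column
}
print(entry["dump"], entry["passno"])  # -> 0 0
```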
### Issue Type
Bug Report
### Component Name
facts
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.8]
config file = /home/carsten/ansible/ansible.cfg
configured module search path = ['/home/carsten/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/carsten/virtualenv/lib64/python3.10/site-packages/ansible
ansible collection location = /home/carsten/.ansible/collections:/usr/share/ansible/collections
executable location = /home/carsten/virtualenv/bin/ansible
python version = 3.10.8 (main, Oct 28 2022, 17:28:32) [GCC]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
$ ansible --version
ansible [core 2.13.8]
config file = /home/carsten/ansible/ansible.cfg
configured module search path = ['/home/carsten/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/carsten/virtualenv/lib64/python3.10/site-packages/ansible
ansible collection location = /home/carsten/.ansible/collections:/usr/share/ansible/collections
executable location = /home/carsten/virtualenv/bin/ansible
python version = 3.10.8 (main, Oct 28 2022, 17:28:32) [GCC]
jinja version = 3.1.2
libyaml = True
$ ansible-config dump --only-changed -t all | cat
CACHE_PLUGIN(/home/carsten/ansible/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/home/carsten/ansible/ansible.cfg) = ./facts
CACHE_PLUGIN_TIMEOUT(/home/carsten/ansible/ansible.cfg) = 28800
DEFAULT_GATHERING(/home/carsten/ansible/ansible.cfg) = smart
DEFAULT_HOST_LIST(/home/carsten/ansible/ansible.cfg) = ['/home/carsten/ansible/hosts']
DEFAULT_ROLES_PATH(/home/carsten/ansible/ansible.cfg) = ['/home/carsten/ansible/roles']
DEFAULT_STDOUT_CALLBACK(/home/carsten/ansible/ansible.cfg) = yaml
DEFAULT_VAULT_PASSWORD_FILE(/home/carsten/ansible/ansible.cfg) = /home/carsten/.ansible_vault_password
HOST_KEY_CHECKING(/home/carsten/ansible/ansible.cfg) = False
INTERPRETER_PYTHON(/home/carsten/ansible/ansible.cfg) = auto_silent
CACHE:
=====
jsonfile:
________
_timeout(/home/carsten/ansible/ansible.cfg) = 28800
_uri(/home/carsten/ansible/ansible.cfg) = /home/carsten/ansible/facts
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/home/carsten/ansible/ansible.cfg) = False
ssh_args(/home/carsten/ansible/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=1200 -o ServerAliveInterval=180 -o StrictHostKeyChecking=no
ssh:
___
control_path(/home/carsten/ansible/ansible.cfg) = %(directory)s/%%C
host_key_checking(/home/carsten/ansible/ansible.cfg) = False
pipelining(/home/carsten/ansible/ansible.cfg) = True
ssh_args(/home/carsten/ansible/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=1200 -o ServerAliveInterval=180 -o StrictHostKeyChecking=no
```
### OS / Environment
$ cat /etc/os-release
NAME="SLES"
VERSION="15-SP4"
VERSION_ID="15.4"
PRETTY_NAME="SUSE Linux Enterprise Server 15 SP4"
ID="sles"
ID_LIKE="suse"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:suse:sles:15:sp4"
DOCUMENTATION_URL="https://documentation.suse.com/"
### Steps to Reproduce
```yaml
---
- name: Test playbook
hosts: localhost
tasks:
- name: Show details for /home mount
debug:
var: item
with_items: "{{ ansible_mounts }}"
when: item.mount == '/home'
...
```
```
# grep home /etc/fstab
UUID=63ac199a-7e91-437c-851a-750ab634578e /home ext4 defaults 0 0
```
### Expected Results
```console
ok: [localhost] => (item={'block_available': 120061977, 'block_size': 4096, 'block_total': 128734272, 'block_used': 8672295, 'device': '/dev/sdb', 'fstype': 'ext4', 'inode_available': 32252577, 'inode_total': 32768000, 'inode_used': 515423, 'mount': '/home', 'options': 'rw,relatime', 'size_available': 491773857792, 'size_total': 527295578112, 'uuid': '63ac199a-7e91-437c-851a-750ab634578e'}) =>
ansible_loop_var: item
item:
block_available: 120061977
block_size: 4096
block_total: 128734272
block_used: 8672295
device: /dev/sdb
dump: 0 <---- new
fstype: ext4
inode_available: 32252577
inode_total: 32768000
inode_used: 515423
mount: /home
options: rw,relatime
passno: 0 <---- new
size_available: 491773857792
size_total: 527295578112
uuid: 63ac199a-7e91-437c-851a-750ab634578e
```
### Actual Results
```console
ok: [localhost] => (item={'block_available': 120061977, 'block_size': 4096, 'block_total': 128734272, 'block_used': 8672295, 'device': '/dev/sdb', 'fstype': 'ext4', 'inode_available': 32252577, 'inode_total': 32768000, 'inode_used': 515423, 'mount': '/home', 'options': 'rw,relatime', 'size_available': 491773857792, 'size_total': 527295578112, 'uuid': '63ac199a-7e91-437c-851a-750ab634578e'}) =>
ansible_loop_var: item
item:
block_available: 120061977
block_size: 4096
block_total: 128734272
block_used: 8672295
device: /dev/sdb
fstype: ext4
inode_available: 32252577
inode_total: 32768000
inode_used: 515423
mount: /home
options: rw,relatime
size_available: 491773857792
size_total: 527295578112
uuid: 63ac199a-7e91-437c-851a-750ab634578e
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80478
|
https://github.com/ansible/ansible/pull/81768
|
230f956e255ea1a98c57e947b341f89bf0b93abc
|
51f2ddd445e91765be4decd4f594adf781d15867
| 2023-04-11T14:24:47Z |
python
| 2023-09-26T15:12:03Z |
test/units/module_utils/facts/hardware/test_linux.py
|
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
from units.compat import unittest
from units.compat.mock import Mock, patch
from ansible.module_utils.facts import timeout
from ansible.module_utils.facts.hardware import linux
from . linux_data import LSBLK_OUTPUT, LSBLK_OUTPUT_2, LSBLK_UUIDS, MTAB, MTAB_ENTRIES, BIND_MOUNTS, STATVFS_INFO, UDEVADM_UUID, UDEVADM_OUTPUT, SG_INQ_OUTPUTS
with open(os.path.join(os.path.dirname(__file__), '../fixtures/findmount_output.txt')) as f:
FINDMNT_OUTPUT = f.read()
GET_MOUNT_SIZE = {}
def mock_get_mount_size(mountpoint):
return STATVFS_INFO.get(mountpoint, {})
class TestFactsLinuxHardwareGetMountFacts(unittest.TestCase):
# FIXME: mock.patch instead
def setUp(self):
timeout.GATHER_TIMEOUT = 10
def tearDown(self):
timeout.GATHER_TIMEOUT = None
@patch('ansible.module_utils.facts.hardware.linux.LinuxHardware._mtab_entries', return_value=MTAB_ENTRIES)
@patch('ansible.module_utils.facts.hardware.linux.LinuxHardware._find_bind_mounts', return_value=BIND_MOUNTS)
@patch('ansible.module_utils.facts.hardware.linux.LinuxHardware._lsblk_uuid', return_value=LSBLK_UUIDS)
@patch('ansible.module_utils.facts.hardware.linux.get_mount_size', side_effect=mock_get_mount_size)
@patch('ansible.module_utils.facts.hardware.linux.LinuxHardware._udevadm_uuid', return_value=UDEVADM_UUID)
def test_get_mount_facts(self,
mock_get_mount_size,
mock_lsblk_uuid,
mock_find_bind_mounts,
mock_mtab_entries,
mock_udevadm_uuid):
module = Mock()
# Returns a LinuxHardware-ish
lh = linux.LinuxHardware(module=module, load_on_init=False)
        # get_mount_facts() returns the facts dict directly (no side-effect mutation of self.facts)
mount_facts = lh.get_mount_facts()
self.assertIsInstance(mount_facts, dict)
self.assertIn('mounts', mount_facts)
self.assertIsInstance(mount_facts['mounts'], list)
self.assertIsInstance(mount_facts['mounts'][0], dict)
home_expected = {'block_available': 1001578731,
'block_size': 4096,
'block_total': 105871006,
'block_used': 5713133,
'device': '/dev/mapper/fedora_dhcp129--186-home',
'fstype': 'ext4',
'inode_available': 26860880,
'inode_total': 26902528,
'inode_used': 41648,
'mount': '/home',
'options': 'rw,seclabel,relatime,data=ordered',
'size_available': 410246647808,
'size_total': 433647640576,
'uuid': 'N/A'}
home_info = [x for x in mount_facts['mounts'] if x['mount'] == '/home'][0]
self.maxDiff = 4096
self.assertDictEqual(home_info, home_expected)
@patch('ansible.module_utils.facts.hardware.linux.get_file_content', return_value=MTAB)
def test_get_mtab_entries(self, mock_get_file_content):
module = Mock()
lh = linux.LinuxHardware(module=module, load_on_init=False)
mtab_entries = lh._mtab_entries()
self.assertIsInstance(mtab_entries, list)
self.assertIsInstance(mtab_entries[0], list)
self.assertEqual(len(mtab_entries), 38)
@patch('ansible.module_utils.facts.hardware.linux.LinuxHardware._run_findmnt', return_value=(0, FINDMNT_OUTPUT, ''))
def test_find_bind_mounts(self, mock_run_findmnt):
module = Mock()
lh = linux.LinuxHardware(module=module, load_on_init=False)
bind_mounts = lh._find_bind_mounts()
# If bind_mounts becomes another seq type, feel free to change
self.assertIsInstance(bind_mounts, set)
self.assertEqual(len(bind_mounts), 1)
self.assertIn('/not/a/real/bind_mount', bind_mounts)
@patch('ansible.module_utils.facts.hardware.linux.LinuxHardware._run_findmnt', return_value=(37, '', ''))
def test_find_bind_mounts_non_zero(self, mock_run_findmnt):
module = Mock()
lh = linux.LinuxHardware(module=module, load_on_init=False)
bind_mounts = lh._find_bind_mounts()
self.assertIsInstance(bind_mounts, set)
self.assertEqual(len(bind_mounts), 0)
def test_find_bind_mounts_no_findmnts(self):
module = Mock()
module.get_bin_path = Mock(return_value=None)
lh = linux.LinuxHardware(module=module, load_on_init=False)
bind_mounts = lh._find_bind_mounts()
self.assertIsInstance(bind_mounts, set)
self.assertEqual(len(bind_mounts), 0)
@patch('ansible.module_utils.facts.hardware.linux.LinuxHardware._run_lsblk', return_value=(0, LSBLK_OUTPUT, ''))
def test_lsblk_uuid(self, mock_run_lsblk):
module = Mock()
lh = linux.LinuxHardware(module=module, load_on_init=False)
lsblk_uuids = lh._lsblk_uuid()
self.assertIsInstance(lsblk_uuids, dict)
self.assertIn(b'/dev/loop9', lsblk_uuids)
self.assertIn(b'/dev/sda1', lsblk_uuids)
self.assertEqual(lsblk_uuids[b'/dev/sda1'], b'32caaec3-ef40-4691-a3b6-438c3f9bc1c0')
@patch('ansible.module_utils.facts.hardware.linux.LinuxHardware._run_lsblk', return_value=(37, LSBLK_OUTPUT, ''))
def test_lsblk_uuid_non_zero(self, mock_run_lsblk):
module = Mock()
lh = linux.LinuxHardware(module=module, load_on_init=False)
lsblk_uuids = lh._lsblk_uuid()
self.assertIsInstance(lsblk_uuids, dict)
self.assertEqual(len(lsblk_uuids), 0)
def test_lsblk_uuid_no_lsblk(self):
module = Mock()
module.get_bin_path = Mock(return_value=None)
lh = linux.LinuxHardware(module=module, load_on_init=False)
lsblk_uuids = lh._lsblk_uuid()
self.assertIsInstance(lsblk_uuids, dict)
self.assertEqual(len(lsblk_uuids), 0)
@patch('ansible.module_utils.facts.hardware.linux.LinuxHardware._run_lsblk', return_value=(0, LSBLK_OUTPUT_2, ''))
def test_lsblk_uuid_dev_with_space_in_name(self, mock_run_lsblk):
module = Mock()
lh = linux.LinuxHardware(module=module, load_on_init=False)
lsblk_uuids = lh._lsblk_uuid()
self.assertIsInstance(lsblk_uuids, dict)
self.assertIn(b'/dev/loop0', lsblk_uuids)
self.assertIn(b'/dev/sda1', lsblk_uuids)
self.assertEqual(lsblk_uuids[b'/dev/mapper/an-example-mapper with a space in the name'], b'84639acb-013f-4d2f-9392-526a572b4373')
self.assertEqual(lsblk_uuids[b'/dev/sda1'], b'32caaec3-ef40-4691-a3b6-438c3f9bc1c0')
def test_udevadm_uuid(self):
module = Mock()
module.run_command = Mock(return_value=(0, UDEVADM_OUTPUT, '')) # (rc, out, err)
lh = linux.LinuxHardware(module=module, load_on_init=False)
udevadm_uuid = lh._udevadm_uuid('mock_device')
self.assertEqual(udevadm_uuid, '57b1a3e7-9019-4747-9809-7ec52bba9179')
def test_get_sg_inq_serial(self):
# Valid outputs
for sq_inq_output in SG_INQ_OUTPUTS:
module = Mock()
module.run_command = Mock(return_value=(0, sq_inq_output, '')) # (rc, out, err)
lh = linux.LinuxHardware(module=module, load_on_init=False)
sg_inq_serial = lh._get_sg_inq_serial('/usr/bin/sg_inq', 'nvme0n1')
self.assertEqual(sg_inq_serial, 'vol0123456789')
# Invalid output
module = Mock()
module.run_command = Mock(return_value=(0, '', '')) # (rc, out, err)
lh = linux.LinuxHardware(module=module, load_on_init=False)
sg_inq_serial = lh._get_sg_inq_serial('/usr/bin/sg_inq', 'nvme0n1')
self.assertEqual(sg_inq_serial, None)
# Non zero rc
module = Mock()
module.run_command = Mock(return_value=(42, '', 'Error 42')) # (rc, out, err)
lh = linux.LinuxHardware(module=module, load_on_init=False)
sg_inq_serial = lh._get_sg_inq_serial('/usr/bin/sg_inq', 'nvme0n1')
self.assertEqual(sg_inq_serial, None)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,917 |
Document difference between ansible.builtin.systemd and ansible.builtin.systemd_service
|
### Summary
The [list of builtin modules](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/index.html) shows two modules with similar names and identical descriptions:
> [systemd module](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/systemd_module.html#ansible-collections-ansible-builtin-systemd-module) – Manage systemd units
> [systemd_service module](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/systemd_service_module.html#ansible-collections-ansible-builtin-systemd-service-module) – Manage systemd units
My guess was that `systemd_service` would manage services running with systemd, and `systemd` would manage `systemd` itself. But the description is identical. The synopsis is identical. The arguments and examples appear identical.
As a user, I expect that if one module has two names, the documentation would state that one is an alias.
Looking at [the code](https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/systemd.py) I see that `systemd` is symlinked to `systemd_service.py`.
Please modify the documentation to state that one is an alias of the other, and they are identical.
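As a quick way to verify the alias relationship locally, a minimal check might look like this (a sketch only; the relative path assumes a clone of ansible/ansible as the working directory):
```python
# Hedged sketch: the path below is an assumption about the checkout layout.
import os

path = "lib/ansible/modules/systemd.py"
if os.path.islink(path):
    # Expected to print '... -> systemd_service.py', confirming the alias.
    print(path, "->", os.readlink(path))
else:
    print(path, "is a regular file, not an alias")
```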
### Issue Type
Documentation Report
### Component Name
systemd_service
### Ansible Version
```console
N/A
```
### Configuration
```console
N/A
```
### OS / Environment
N/A
### Additional Information
For context, my objective is to create a service file, and then enable+start it. I was hoping that there would be one module to do that. But then I saw there are two similar ones, and I thought one would create the .service file, and the other would load+run it. But that doesn't appear to be the case either.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80917
|
https://github.com/ansible/ansible/pull/81803
|
cb8cb8936aee5049939fdac57f407a3a6c8b21bc
|
55f27a579ea36d8257398fec9ea1a9110816974d
| 2023-05-30T01:29:09Z |
python
| 2023-09-28T19:20:15Z |
lib/ansible/modules/systemd_service.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2016, Brian Coca <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
module: systemd_service
author:
- Ansible Core Team
version_added: "2.2"
short_description: Manage systemd units
description:
- Controls systemd units (services, timers, and so on) on remote hosts.
options:
name:
description:
- Name of the unit. This parameter takes the name of exactly one unit to work with.
    - When no extension is given, a C(.service) suffix is implied, matching systemd's own behavior.
- When using in a chroot environment you always need to specify the name of the unit with the extension. For example, C(crond.service).
type: str
aliases: [ service, unit ]
state:
description:
- V(started)/V(stopped) are idempotent actions that will not run commands unless necessary.
V(restarted) will always bounce the unit.
V(reloaded) will always reload and if the service is not running at the moment of the reload, it is started.
type: str
choices: [ reloaded, restarted, started, stopped ]
enabled:
description:
      - Whether the unit should start on boot. B(At least one of state and enabled is required.)
type: bool
force:
description:
- Whether to override existing symlinks.
type: bool
version_added: 2.6
masked:
description:
- Whether the unit should be masked or not, a masked unit is impossible to start.
type: bool
daemon_reload:
description:
- Run daemon-reload before doing any other operations, to make sure systemd has read any changes.
- When set to V(true), runs daemon-reload even if the module does not start or stop anything.
type: bool
default: no
aliases: [ daemon-reload ]
daemon_reexec:
description:
      - Run the daemon-reexec command before doing any other operations; the systemd manager will serialize its state.
type: bool
default: no
aliases: [ daemon-reexec ]
version_added: "2.8"
scope:
description:
- Run systemctl within a given service manager scope, either as the default system scope V(system),
the current user's scope V(user), or the scope of all users V(global).
- "For systemd to work with 'user', the executing user must have its own instance of dbus started and accessible (systemd requirement)."
- "The user dbus process is normally started during normal login, but not during the run of Ansible tasks.
Otherwise you will probably get a 'Failed to connect to bus: no such file or directory' error."
- The user must have access, normally given via setting the C(XDG_RUNTIME_DIR) variable, see example below.
type: str
choices: [ system, user, global ]
default: system
version_added: "2.7"
no_block:
description:
- Do not synchronously wait for the requested operation to finish.
Enqueued job will continue without Ansible blocking on its completion.
type: bool
default: no
version_added: "2.3"
extends_documentation_fragment: action_common_attributes
attributes:
check_mode:
support: full
diff_mode:
support: none
platform:
platforms: posix
notes:
  - Since 2.4, one of the following options is required: O(state), O(enabled), O(masked), O(daemon_reload), (O(daemon_reexec) since 2.8),
and all except O(daemon_reload) and (O(daemon_reexec) since 2.8) also require O(name).
- Before 2.4 you always required O(name).
  - Globs are not supported in unit names, for example C(postgres*.service).
  - Service names might vary by OS/distribution.
- The order of execution when having multiple properties is to first enable/disable, then mask/unmask and then deal with service state.
It has been reported that systemctl can behave differently depending on the order of operations if you do the same manually.
requirements:
- A system managed by systemd.
'''
EXAMPLES = '''
- name: Make sure a service unit is running
ansible.builtin.systemd_service:
state: started
name: httpd
- name: Stop service cron on debian, if running
ansible.builtin.systemd_service:
name: cron
state: stopped
- name: Restart service cron on centos, in all cases, also issue daemon-reload to pick up config changes
ansible.builtin.systemd_service:
state: restarted
daemon_reload: true
name: crond
- name: Reload service httpd, in all cases
ansible.builtin.systemd_service:
name: httpd.service
state: reloaded
- name: Enable service httpd and ensure it is not masked
ansible.builtin.systemd_service:
name: httpd
enabled: true
masked: no
- name: Enable a timer unit for dnf-automatic
ansible.builtin.systemd_service:
name: dnf-automatic.timer
state: started
enabled: true
- name: Just force systemd to reread configs (2.4 and above)
ansible.builtin.systemd_service:
daemon_reload: true
- name: Just force systemd to re-execute itself (2.8 and above)
ansible.builtin.systemd_service:
daemon_reexec: true
- name: Run a user service when XDG_RUNTIME_DIR is not set on remote login
ansible.builtin.systemd_service:
name: myservice
state: started
scope: user
environment:
XDG_RUNTIME_DIR: "/run/user/{{ myuid }}"
'''
RETURN = '''
status:
description: A dictionary with the key=value pairs returned from C(systemctl show).
returned: success
type: dict
sample: {
"ActiveEnterTimestamp": "Sun 2016-05-15 18:28:49 EDT",
"ActiveEnterTimestampMonotonic": "8135942",
"ActiveExitTimestampMonotonic": "0",
"ActiveState": "active",
"After": "auditd.service systemd-user-sessions.service time-sync.target systemd-journald.socket basic.target system.slice",
"AllowIsolate": "no",
"Before": "shutdown.target multi-user.target",
"BlockIOAccounting": "no",
"BlockIOWeight": "1000",
"CPUAccounting": "no",
"CPUSchedulingPolicy": "0",
"CPUSchedulingPriority": "0",
"CPUSchedulingResetOnFork": "no",
"CPUShares": "1024",
"CanIsolate": "no",
"CanReload": "yes",
"CanStart": "yes",
"CanStop": "yes",
"CapabilityBoundingSet": "18446744073709551615",
"ConditionResult": "yes",
"ConditionTimestamp": "Sun 2016-05-15 18:28:49 EDT",
"ConditionTimestampMonotonic": "7902742",
"Conflicts": "shutdown.target",
"ControlGroup": "/system.slice/crond.service",
"ControlPID": "0",
"DefaultDependencies": "yes",
"Delegate": "no",
"Description": "Command Scheduler",
"DevicePolicy": "auto",
"EnvironmentFile": "/etc/sysconfig/crond (ignore_errors=no)",
"ExecMainCode": "0",
"ExecMainExitTimestampMonotonic": "0",
"ExecMainPID": "595",
"ExecMainStartTimestamp": "Sun 2016-05-15 18:28:49 EDT",
"ExecMainStartTimestampMonotonic": "8134990",
"ExecMainStatus": "0",
"ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }",
"ExecStart": "{ path=/usr/sbin/crond ; argv[]=/usr/sbin/crond -n $CRONDARGS ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }",
"FragmentPath": "/usr/lib/systemd/system/crond.service",
"GuessMainPID": "yes",
"IOScheduling": "0",
"Id": "crond.service",
"IgnoreOnIsolate": "no",
"IgnoreOnSnapshot": "no",
"IgnoreSIGPIPE": "yes",
"InactiveEnterTimestampMonotonic": "0",
"InactiveExitTimestamp": "Sun 2016-05-15 18:28:49 EDT",
"InactiveExitTimestampMonotonic": "8135942",
"JobTimeoutUSec": "0",
"KillMode": "process",
"KillSignal": "15",
"LimitAS": "18446744073709551615",
"LimitCORE": "18446744073709551615",
"LimitCPU": "18446744073709551615",
"LimitDATA": "18446744073709551615",
"LimitFSIZE": "18446744073709551615",
"LimitLOCKS": "18446744073709551615",
"LimitMEMLOCK": "65536",
"LimitMSGQUEUE": "819200",
"LimitNICE": "0",
"LimitNOFILE": "4096",
"LimitNPROC": "3902",
"LimitRSS": "18446744073709551615",
"LimitRTPRIO": "0",
"LimitRTTIME": "18446744073709551615",
"LimitSIGPENDING": "3902",
"LimitSTACK": "18446744073709551615",
"LoadState": "loaded",
"MainPID": "595",
"MemoryAccounting": "no",
"MemoryLimit": "18446744073709551615",
"MountFlags": "0",
"Names": "crond.service",
"NeedDaemonReload": "no",
"Nice": "0",
"NoNewPrivileges": "no",
"NonBlocking": "no",
"NotifyAccess": "none",
"OOMScoreAdjust": "0",
"OnFailureIsolate": "no",
"PermissionsStartOnly": "no",
"PrivateNetwork": "no",
"PrivateTmp": "no",
"RefuseManualStart": "no",
"RefuseManualStop": "no",
"RemainAfterExit": "no",
"Requires": "basic.target",
"Restart": "no",
"RestartUSec": "100ms",
"Result": "success",
"RootDirectoryStartOnly": "no",
"SameProcessGroup": "no",
"SecureBits": "0",
"SendSIGHUP": "no",
"SendSIGKILL": "yes",
"Slice": "system.slice",
"StandardError": "inherit",
"StandardInput": "null",
"StandardOutput": "journal",
"StartLimitAction": "none",
"StartLimitBurst": "5",
"StartLimitInterval": "10000000",
"StatusErrno": "0",
"StopWhenUnneeded": "no",
"SubState": "running",
"SyslogLevelPrefix": "yes",
"SyslogPriority": "30",
"TTYReset": "no",
"TTYVHangup": "no",
"TTYVTDisallocate": "no",
"TimeoutStartUSec": "1min 30s",
"TimeoutStopUSec": "1min 30s",
"TimerSlackNSec": "50000",
"Transient": "no",
"Type": "simple",
"UMask": "0022",
"UnitFileState": "enabled",
"WantedBy": "multi-user.target",
"Wants": "system.slice",
"WatchdogTimestampMonotonic": "0",
"WatchdogUSec": "0",
}
''' # NOQA
import os
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.facts.system.chroot import is_chroot
from ansible.module_utils.service import sysv_exists, sysv_is_enabled, fail_if_missing
from ansible.module_utils.common.text.converters import to_native
def is_running_service(service_status):
return service_status['ActiveState'] in set(['active', 'activating'])
def is_deactivating_service(service_status):
return service_status['ActiveState'] in set(['deactivating'])
def request_was_ignored(out):
return '=' not in out and ('ignoring request' in out or 'ignoring command' in out)
def parse_systemctl_show(lines):
# The output of 'systemctl show' can contain values that span multiple lines. At first glance it
# appears that such values are always surrounded by {}, so the previous version of this code
# assumed that any value starting with { was a multi-line value; it would then consume lines
# until it saw a line that ended with }. However, it is possible to have a single-line value
# that starts with { but does not end with } (this could happen in the value for Description=,
# for example), and the previous version of this code would then consume all remaining lines as
# part of that value. Cryptically, this would lead to Ansible reporting that the service file
# couldn't be found.
#
# To avoid this issue, the following code only accepts multi-line values for keys whose names
# start with Exec (e.g., ExecStart=), since these are the only keys whose values are known to
# span multiple lines.
parsed = {}
multival = []
k = None
for line in lines:
if k is None:
if '=' in line:
k, v = line.split('=', 1)
if k.startswith('Exec') and v.lstrip().startswith('{'):
if not v.rstrip().endswith('}'):
multival.append(v)
continue
parsed[k] = v.strip()
k = None
else:
multival.append(line)
if line.rstrip().endswith('}'):
parsed[k] = '\n'.join(multival).strip()
multival = []
k = None
return parsed
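# Illustrative example (editor's note, not part of the original module):
#   parse_systemctl_show([
#       'Id=crond.service',
#       'ExecStart={ path=/usr/sbin/crond ; argv[]=/usr/sbin/crond -n $CRONDARGS ;',
#       'status=0/0 }',
#   ])
# returns {'Id': 'crond.service',
#          'ExecStart': '{ path=/usr/sbin/crond ; argv[]=... $CRONDARGS ;\nstatus=0/0 }'}
# i.e. only Exec* keys may span multiple lines, and those lines are joined with newlines.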
# ===========================================
# Main control flow
def main():
# initialize
module = AnsibleModule(
argument_spec=dict(
name=dict(type='str', aliases=['service', 'unit']),
state=dict(type='str', choices=['reloaded', 'restarted', 'started', 'stopped']),
enabled=dict(type='bool'),
force=dict(type='bool'),
masked=dict(type='bool'),
daemon_reload=dict(type='bool', default=False, aliases=['daemon-reload']),
daemon_reexec=dict(type='bool', default=False, aliases=['daemon-reexec']),
scope=dict(type='str', default='system', choices=['system', 'user', 'global']),
no_block=dict(type='bool', default=False),
),
supports_check_mode=True,
required_one_of=[['state', 'enabled', 'masked', 'daemon_reload', 'daemon_reexec']],
required_by=dict(
state=('name', ),
enabled=('name', ),
masked=('name', ),
),
)
unit = module.params['name']
if unit is not None:
for globpattern in (r"*", r"?", r"["):
if globpattern in unit:
module.fail_json(msg="This module does not currently support using glob patterns, found '%s' in service name: %s" % (globpattern, unit))
systemctl = module.get_bin_path('systemctl', True)
if os.getenv('XDG_RUNTIME_DIR') is None:
os.environ['XDG_RUNTIME_DIR'] = '/run/user/%s' % os.geteuid()
# Set CLI options depending on params
# if scope is 'system' or None, we can ignore as there is no extra switch.
# The other choices match the corresponding switch
if module.params['scope'] != 'system':
systemctl += " --%s" % module.params['scope']
if module.params['no_block']:
systemctl += " --no-block"
if module.params['force']:
systemctl += " --force"
rc = 0
out = err = ''
result = dict(
name=unit,
changed=False,
status=dict(),
)
# Run daemon-reload first, if requested
if module.params['daemon_reload'] and not module.check_mode:
(rc, out, err) = module.run_command("%s daemon-reload" % (systemctl))
if rc != 0:
if is_chroot(module) or os.environ.get('SYSTEMD_OFFLINE') == '1':
module.warn('daemon-reload failed, but target is a chroot or systemd is offline. Continuing. Error was: %d / %s' % (rc, err))
else:
module.fail_json(msg='failure %d during daemon-reload: %s' % (rc, err))
# Run daemon-reexec
if module.params['daemon_reexec'] and not module.check_mode:
(rc, out, err) = module.run_command("%s daemon-reexec" % (systemctl))
if rc != 0:
if is_chroot(module) or os.environ.get('SYSTEMD_OFFLINE') == '1':
module.warn('daemon-reexec failed, but target is a chroot or systemd is offline. Continuing. Error was: %d / %s' % (rc, err))
else:
module.fail_json(msg='failure %d during daemon-reexec: %s' % (rc, err))
if unit:
found = False
is_initd = sysv_exists(unit)
is_systemd = False
# check service data, cannot error out on rc as it changes across versions, assume not found
(rc, out, err) = module.run_command("%s show '%s'" % (systemctl, unit))
if rc == 0 and not (request_was_ignored(out) or request_was_ignored(err)):
# load return of systemctl show into dictionary for easy access and return
if out:
result['status'] = parse_systemctl_show(to_native(out).split('\n'))
is_systemd = 'LoadState' in result['status'] and result['status']['LoadState'] != 'not-found'
is_masked = 'LoadState' in result['status'] and result['status']['LoadState'] == 'masked'
# Check for loading error
if is_systemd and not is_masked and 'LoadError' in result['status']:
module.fail_json(msg="Error loading unit file '%s': %s" % (unit, result['status']['LoadError']))
# Workaround for https://github.com/ansible/ansible/issues/71528
elif err and rc == 1 and 'Failed to parse bus message' in err:
result['status'] = parse_systemctl_show(to_native(out).split('\n'))
unit_base, sep, suffix = unit.partition('@')
unit_search = '{unit_base}{sep}'.format(unit_base=unit_base, sep=sep)
(rc, out, err) = module.run_command("{systemctl} list-unit-files '{unit_search}*'".format(systemctl=systemctl, unit_search=unit_search))
is_systemd = unit_search in out
(rc, out, err) = module.run_command("{systemctl} is-active '{unit}'".format(systemctl=systemctl, unit=unit))
result['status']['ActiveState'] = out.rstrip('\n')
else:
# list taken from man systemctl(1) for systemd 244
valid_enabled_states = [
"enabled",
"enabled-runtime",
"linked",
"linked-runtime",
"masked",
"masked-runtime",
"static",
"indirect",
"disabled",
"generated",
"transient"]
(rc, out, err) = module.run_command("%s is-enabled '%s'" % (systemctl, unit))
if out.strip() in valid_enabled_states:
is_systemd = True
else:
# fallback list-unit-files as show does not work on some systems (chroot)
# not used as primary as it skips some services (like those using init.d) and requires .service/etc notation
(rc, out, err) = module.run_command("%s list-unit-files '%s'" % (systemctl, unit))
if rc == 0:
is_systemd = True
else:
# Check for systemctl command
module.run_command(systemctl, check_rc=True)
# Does service exist?
found = is_systemd or is_initd
if is_initd and not is_systemd:
module.warn('The service (%s) is actually an init script but the system is managed by systemd' % unit)
# mask/unmask the service, if requested, can operate on services before they are installed
if module.params['masked'] is not None:
# state is not masked unless systemd affirms otherwise
(rc, out, err) = module.run_command("%s is-enabled '%s'" % (systemctl, unit))
masked = out.strip() == "masked"
if masked != module.params['masked']:
result['changed'] = True
if module.params['masked']:
action = 'mask'
else:
action = 'unmask'
if not module.check_mode:
(rc, out, err) = module.run_command("%s %s '%s'" % (systemctl, action, unit))
if rc != 0:
                        # some versions of systemd CAN mask/unmask non-existent services; we only fail on a missing unit if they don't
fail_if_missing(module, found, unit, msg='host')
# Enable/disable service startup at boot if requested
if module.params['enabled'] is not None:
if module.params['enabled']:
action = 'enable'
else:
action = 'disable'
fail_if_missing(module, found, unit, msg='host')
# do we need to enable the service?
enabled = False
(rc, out, err) = module.run_command("%s is-enabled '%s' -l" % (systemctl, unit))
# check systemctl result or if it is a init script
if rc == 0:
enabled = True
                # 'indirect' and 'alias' units also report rc 0; if out is exactly one line reading 'indirect' or 'alias', the unit is effectively disabled
if out.splitlines() == ["indirect"] or out.splitlines() == ["alias"]:
enabled = False
elif rc == 1:
# if not a user or global user service and both init script and unit file exist stdout should have enabled/disabled, otherwise use rc entries
if module.params['scope'] == 'system' and \
is_initd and \
not out.strip().endswith('disabled') and \
sysv_is_enabled(unit):
enabled = True
# default to current state
result['enabled'] = enabled
# Change enable/disable if needed
if enabled != module.params['enabled']:
result['changed'] = True
if not module.check_mode:
(rc, out, err) = module.run_command("%s %s '%s'" % (systemctl, action, unit))
if rc != 0:
module.fail_json(msg="Unable to %s service %s: %s" % (action, unit, out + err))
result['enabled'] = not enabled
# set service state if requested
if module.params['state'] is not None:
fail_if_missing(module, found, unit, msg="host")
# default to desired state
result['state'] = module.params['state']
# What is current service state?
if 'ActiveState' in result['status']:
action = None
if module.params['state'] == 'started':
if not is_running_service(result['status']):
action = 'start'
elif module.params['state'] == 'stopped':
if is_running_service(result['status']) or is_deactivating_service(result['status']):
action = 'stop'
else:
if not is_running_service(result['status']):
action = 'start'
else:
action = module.params['state'][:-2] # remove 'ed' from restarted/reloaded
result['state'] = 'started'
if action:
result['changed'] = True
if not module.check_mode:
(rc, out, err) = module.run_command("%s %s '%s'" % (systemctl, action, unit))
if rc != 0:
module.fail_json(msg="Unable to %s service %s: %s" % (action, unit, err))
# check for chroot
elif is_chroot(module) or os.environ.get('SYSTEMD_OFFLINE') == '1':
module.warn("Target is a chroot or systemd is offline. This can lead to false positives or prevent the init system tools from working.")
else:
# this should not happen?
module.fail_json(msg="Service is in unknown state", status=result['status'])
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,561 |
Deprecation warning fails to state what is actually deprecated
|
### Summary
In my Ansible runs, I started seeing the following deprecation warning:
```
[DEPRECATION WARNING]: Use 'ansible.utils.ipmath' module instead. This feature
will be removed from ansible.netcommon in a release after 2024-01-01.
Deprecation warnings can be disabled by setting deprecation_warnings=False in
ansible.cfg.
```
The question is, _what_ should be replaced with "ansible.utils.ipmath"?
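To illustrate what a more helpful message could look like, here is a sketch only (names and signature are invented; this is not the actual ansible-core or netcommon code):
```python
# Hedged sketch: a deprecation warning is actionable when it names its subject.
def deprecation_message(deprecated_name, replacement, collection, removal_date):
    # Leading with the deprecated name tells the user *what* to replace.
    return (
        "'%s' is deprecated; use the '%s' module instead. This feature will be "
        "removed from %s in a release after %s."
        % (deprecated_name, replacement, collection, removal_date)
    )

print(deprecation_message("ipmath", "ansible.utils.ipmath",
                          "ansible.netcommon", "2024-01-01"))
```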
### Issue Type
Bug Report
### Component Name
deprecation warning
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.10]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/etc/ansible/library/usd']
ansible python module location = /var/home/ansiblectl/.local/lib/python3.9/site-packages/ansible
ansible collection location = /etc/ansible/collections:/var/home/ansiblectl/.ansible/collections:/usr/share/ansible/collections
executable location = /var/home/ansiblectl/.local/bin/ansible
python version = 3.9.13 (main, Nov 9 2022, 13:16:24) [GCC 8.5.0 20210514 (Red Hat 8.5.0-15)]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_NOCOWS(/etc/ansible/ansible.cfg) = True
CACHE_PLUGIN(/etc/ansible/ansible.cfg) = yaml
CACHE_PLUGIN_CONNECTION(env: ANSIBLE_CACHE_PLUGIN_CONNECTION) = /var/home/ansiblectl/.ansible_cache
CACHE_PLUGIN_PREFIX(/etc/ansible/ansible.cfg) = usd-
CACHE_PLUGIN_TIMEOUT(/etc/ansible/ansible.cfg) = 1800
COLLECTIONS_PATHS(/etc/ansible/ansible.cfg) = ['/etc/ansible/collections', '/var/home/ansiblectl/.ansible/collections', '/usr/share/ansible/collections']
DEFAULT_FORCE_HANDLERS(/etc/ansible/ansible.cfg) = True
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 100
DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = ['/etc/ansible/inventory/linux.yaml', '/etc/ansible/inventory/satellite.foreman.yml', '/etc/ansible/inventory/inventory.foreman.yml']
DEFAULT_JINJA2_EXTENSIONS(/etc/ansible/ansible.cfg) = jinja2.ext.do
DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible/ansible.log
DEFAULT_MANAGED_STR(/etc/ansible/ansible.cfg) = <redacted>
DEFAULT_MODULE_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/library/usd']
DEFAULT_REMOTE_USER(/etc/ansible/ansible.cfg) = svc-ansible
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/roles']
DEFAULT_TIMEOUT(/etc/ansible/ansible.cfg) = 300
DEFAULT_VAULT_PASSWORD_FILE(/etc/ansible/ansible.cfg) = <redacted>
DISPLAY_SKIPPED_HOSTS(/etc/ansible/ansible.cfg) = False
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
INTERPRETER_PYTHON(/etc/ansible/ansible.cfg) = auto
INVENTORY_ENABLED(/etc/ansible/ansible.cfg) = ['host_list', 'yaml', 'script', 'ini', 'theforeman.foreman.foreman']
BECOME:
======
sudo:
____
become_flags(/etc/ansible/ansible.cfg) = -H -E -S -n
CACHE:
=====
jsonfile:
________
_prefix(/etc/ansible/ansible.cfg) = usd-
_timeout(/etc/ansible/ansible.cfg) = 1800
_uri(env: ANSIBLE_CACHE_PLUGIN_CONNECTION) = /var/home/ansiblectl/.ansible_cache
CALLBACK:
========
default:
_______
display_skipped_hosts(/etc/ansible/ansible.cfg) = False
CLICONF:
=======
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/etc/ansible/ansible.cfg) = False
remote_user(/etc/ansible/ansible.cfg) = svc-ansible
ssh_args(/etc/ansible/ansible.cfg) = -C -o ControlMaster=auto -o ControlPersist=60s -o ForwardAgent=yes -o IdentityAgent=none
ssh:
___
host_key_checking(/etc/ansible/ansible.cfg) = False
pipelining(/etc/ansible/ansible.cfg) = True
reconnection_retries(/etc/ansible/ansible.cfg) = 3
remote_user(/etc/ansible/ansible.cfg) = svc-ansible
ssh_args(/etc/ansible/ansible.cfg) = -C -o ControlMaster=auto -o ControlPersist=60s -o ForwardAgent=yes -o IdentityAgent=none
timeout(/etc/ansible/ansible.cfg) = 300
HTTPAPI:
=======
INVENTORY:
=========
LOOKUP:
======
NETCONF:
=======
SHELL:
=====
VARS:
====
```
### OS / Environment
RHEL 8.7, Python 3.6.8, Ansible installed with pip.
### Steps to Reproduce
Unknown - that's the problem!
### Expected Results
I expect the deprecation warning to indicate what statement is being deprecated, similar to the modified deprecation warning below.
```
[DEPRECATION WARNING]: Use 'ansible.utils.ipmath' module instead **_of <INSERT MODULE OR CODE HERE>_**. This feature
will be removed from ansible.netcommon in a release after 2024-01-01.
Deprecation warnings can be disabled by setting deprecation_warnings=False in
ansible.cfg.
```
### Actual Results
```console
[DEPRECATION WARNING]: Use 'ansible.utils.ipmath' module instead. This feature
will be removed from ansible.netcommon in a release after 2024-01-01.
Deprecation warnings can be disabled by setting deprecation_warnings=False in
ansible.cfg.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80561
|
https://github.com/ansible/ansible/pull/81719
|
27bbff7c22663543bab0bf096f0b0a857ac4bcf7
|
4d4c50f856bf844ab47a08a2f64fc9697916b50f
| 2023-04-19T03:24:53Z |
python
| 2023-09-29T18:19:16Z |
changelogs/fragments/80561.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,561 |
Deprecation warning fails to state what is actually deprecated
|
### Summary
In my Ansible runs, I started seeing the following deprecation warning:
```
[DEPRECATION WARNING]: Use 'ansible.utils.ipmath' module instead. This feature
will be removed from ansible.netcommon in a release after 2024-01-01.
Deprecation warnings can be disabled by setting deprecation_warnings=False in
ansible.cfg.
```
The question is, _what_ should be replaced with "ansible.utils.ipmath"?
### Issue Type
Bug Report
### Component Name
deprecation warning
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.10]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/etc/ansible/library/usd']
ansible python module location = /var/home/ansiblectl/.local/lib/python3.9/site-packages/ansible
ansible collection location = /etc/ansible/collections:/var/home/ansiblectl/.ansible/collections:/usr/share/ansible/collections
executable location = /var/home/ansiblectl/.local/bin/ansible
python version = 3.9.13 (main, Nov 9 2022, 13:16:24) [GCC 8.5.0 20210514 (Red Hat 8.5.0-15)]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_NOCOWS(/etc/ansible/ansible.cfg) = True
CACHE_PLUGIN(/etc/ansible/ansible.cfg) = yaml
CACHE_PLUGIN_CONNECTION(env: ANSIBLE_CACHE_PLUGIN_CONNECTION) = /var/home/ansiblectl/.ansible_cache
CACHE_PLUGIN_PREFIX(/etc/ansible/ansible.cfg) = usd-
CACHE_PLUGIN_TIMEOUT(/etc/ansible/ansible.cfg) = 1800
COLLECTIONS_PATHS(/etc/ansible/ansible.cfg) = ['/etc/ansible/collections', '/var/home/ansiblectl/.ansible/collections', '/usr/share/ansible/collections']
DEFAULT_FORCE_HANDLERS(/etc/ansible/ansible.cfg) = True
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 100
DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = ['/etc/ansible/inventory/linux.yaml', '/etc/ansible/inventory/satellite.foreman.yml', '/etc/ansible/inventory/inventory.foreman.yml']
DEFAULT_JINJA2_EXTENSIONS(/etc/ansible/ansible.cfg) = jinja2.ext.do
DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible/ansible.log
DEFAULT_MANAGED_STR(/etc/ansible/ansible.cfg) = <redacted>
DEFAULT_MODULE_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/library/usd']
DEFAULT_REMOTE_USER(/etc/ansible/ansible.cfg) = svc-ansible
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/roles']
DEFAULT_TIMEOUT(/etc/ansible/ansible.cfg) = 300
DEFAULT_VAULT_PASSWORD_FILE(/etc/ansible/ansible.cfg) = <redacted>
DISPLAY_SKIPPED_HOSTS(/etc/ansible/ansible.cfg) = False
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
INTERPRETER_PYTHON(/etc/ansible/ansible.cfg) = auto
INVENTORY_ENABLED(/etc/ansible/ansible.cfg) = ['host_list', 'yaml', 'script', 'ini', 'theforeman.foreman.foreman']
BECOME:
======
sudo:
____
become_flags(/etc/ansible/ansible.cfg) = -H -E -S -n
CACHE:
=====
jsonfile:
________
_prefix(/etc/ansible/ansible.cfg) = usd-
_timeout(/etc/ansible/ansible.cfg) = 1800
_uri(env: ANSIBLE_CACHE_PLUGIN_CONNECTION) = /var/home/ansiblectl/.ansible_cache
CALLBACK:
========
default:
_______
display_skipped_hosts(/etc/ansible/ansible.cfg) = False
CLICONF:
=======
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/etc/ansible/ansible.cfg) = False
remote_user(/etc/ansible/ansible.cfg) = svc-ansible
ssh_args(/etc/ansible/ansible.cfg) = -C -o ControlMaster=auto -o ControlPersist=60s -o ForwardAgent=yes -o IdentityAgent=none
ssh:
___
host_key_checking(/etc/ansible/ansible.cfg) = False
pipelining(/etc/ansible/ansible.cfg) = True
reconnection_retries(/etc/ansible/ansible.cfg) = 3
remote_user(/etc/ansible/ansible.cfg) = svc-ansible
ssh_args(/etc/ansible/ansible.cfg) = -C -o ControlMaster=auto -o ControlPersist=60s -o ForwardAgent=yes -o IdentityAgent=none
timeout(/etc/ansible/ansible.cfg) = 300
HTTPAPI:
=======
INVENTORY:
=========
LOOKUP:
======
NETCONF:
=======
SHELL:
=====
VARS:
====
```
### OS / Environment
RHEL 8.7, Python 3.6.8, Ansible installed with pip.
### Steps to Reproduce
Unknown - that's the problem!
### Expected Results
I expect the deprecation warning to indicate what statement is being deprecated, similar to the modified deprecation warning below.
```
[DEPRECATION WARNING]: Use 'ansible.utils.ipmath' module instead **_of <INSERT MODULE OR CODE HERE>_**. This feature
will be removed from ansible.netcommon in a release after 2024-01-01.
Deprecation warnings can be disabled by setting deprecation_warnings=False in
ansible.cfg.
```
### Actual Results
```console
[DEPRECATION WARNING]: Use 'ansible.utils.ipmath' module instead. This feature
will be removed from ansible.netcommon in a release after 2024-01-01.
Deprecation warnings can be disabled by setting deprecation_warnings=False in
ansible.cfg.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80561
|
https://github.com/ansible/ansible/pull/81719
|
27bbff7c22663543bab0bf096f0b0a857ac4bcf7
|
4d4c50f856bf844ab47a08a2f64fc9697916b50f
| 2023-04-19T03:24:53Z |
python
| 2023-09-29T18:19:16Z |
lib/ansible/plugins/loader.py
|
# (c) 2012, Daniel Hokka Zakrisson <[email protected]>
# (c) 2012-2014, Michael DeHaan <[email protected]> and others
# (c) 2017, Toshio Kuratomi <[email protected]>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import glob
import os
import os.path
import pkgutil
import sys
import warnings
from collections import defaultdict, namedtuple
from traceback import format_exc
import ansible.module_utils.compat.typing as t
from .filter import AnsibleJinja2Filter
from .test import AnsibleJinja2Test
from ansible import __version__ as ansible_version
from ansible import constants as C
from ansible.errors import AnsibleError, AnsiblePluginCircularRedirect, AnsiblePluginRemovedError, AnsibleCollectionUnsupportedVersionError
from ansible.module_utils.common.text.converters import to_bytes, to_text, to_native
from ansible.module_utils.compat.importlib import import_module
from ansible.module_utils.six import string_types
from ansible.parsing.utils.yaml import from_yaml
from ansible.parsing.yaml.loader import AnsibleLoader
from ansible.plugins import get_plugin_class, MODULE_CACHE, PATH_CACHE, PLUGIN_PATH_CACHE
from ansible.utils.collection_loader import AnsibleCollectionConfig, AnsibleCollectionRef
from ansible.utils.collection_loader._collection_finder import _AnsibleCollectionFinder, _get_collection_metadata
from ansible.utils.display import Display
from ansible.utils.plugin_docs import add_fragments
# TODO: take the packaging dep, or vendor SpecifierSet?
try:
from packaging.specifiers import SpecifierSet
from packaging.version import Version
except ImportError:
SpecifierSet = None # type: ignore[misc]
Version = None # type: ignore[misc]
import importlib.util
_PLUGIN_FILTERS = defaultdict(frozenset) # type: t.DefaultDict[str, frozenset]
display = Display()
get_with_context_result = namedtuple('get_with_context_result', ['object', 'plugin_load_context'])
def get_all_plugin_loaders():
return [(name, obj) for (name, obj) in globals().items() if isinstance(obj, PluginLoader)]
def add_all_plugin_dirs(path):
''' add any existing plugin dirs in the path provided '''
b_path = os.path.expanduser(to_bytes(path, errors='surrogate_or_strict'))
if os.path.isdir(b_path):
for name, obj in get_all_plugin_loaders():
if obj.subdir:
plugin_path = os.path.join(b_path, to_bytes(obj.subdir))
if os.path.isdir(plugin_path):
obj.add_directory(to_text(plugin_path))
else:
display.warning("Ignoring invalid path provided to plugin path: '%s' is not a directory" % to_text(path))
def get_shell_plugin(shell_type=None, executable=None):
if not shell_type:
# default to sh
shell_type = 'sh'
# mostly for backwards compat
if executable:
if isinstance(executable, string_types):
shell_filename = os.path.basename(executable)
try:
shell = shell_loader.get(shell_filename)
except Exception:
shell = None
if shell is None:
for shell in shell_loader.all():
if shell_filename in shell.COMPATIBLE_SHELLS:
shell_type = shell.SHELL_FAMILY
break
else:
raise AnsibleError("Either a shell type or a shell executable must be provided ")
shell = shell_loader.get(shell_type)
if not shell:
raise AnsibleError("Could not find the shell plugin required (%s)." % shell_type)
if executable:
setattr(shell, 'executable', executable)
return shell
def add_dirs_to_loader(which_loader, paths):
loader = getattr(sys.modules[__name__], '%s_loader' % which_loader)
for path in paths:
loader.add_directory(path, with_subdir=True)
class PluginPathContext(object):
def __init__(self, path, internal):
self.path = path
self.internal = internal
class PluginLoadContext(object):
def __init__(self):
self.original_name = None
self.redirect_list = []
self.error_list = []
self.import_error_list = []
self.load_attempts = []
self.pending_redirect = None
self.exit_reason = None
self.plugin_resolved_path = None
self.plugin_resolved_name = None
self.plugin_resolved_collection = None # empty string for resolved plugins from user-supplied paths
self.deprecated = False
self.removal_date = None
self.removal_version = None
self.deprecation_warnings = []
self.resolved = False
self._resolved_fqcn = None
self.action_plugin = None
@property
def resolved_fqcn(self):
if not self.resolved:
return
if not self._resolved_fqcn:
final_plugin = self.redirect_list[-1]
if AnsibleCollectionRef.is_valid_fqcr(final_plugin) and final_plugin.startswith('ansible.legacy.'):
final_plugin = final_plugin.split('ansible.legacy.')[-1]
if self.plugin_resolved_collection and not AnsibleCollectionRef.is_valid_fqcr(final_plugin):
final_plugin = self.plugin_resolved_collection + '.' + final_plugin
self._resolved_fqcn = final_plugin
return self._resolved_fqcn
def record_deprecation(self, name, deprecation, collection_name):
if not deprecation:
return self
# The `or ''` instead of using `.get(..., '')` makes sure that even if the user explicitly
# sets `warning_text` to `~` (None) or `false`, we still get an empty string.
warning_text = deprecation.get('warning_text', None) or ''
removal_date = deprecation.get('removal_date', None)
removal_version = deprecation.get('removal_version', None)
# If both removal_date and removal_version are specified, use removal_date
if removal_date is not None:
removal_version = None
warning_text = '{0} has been deprecated.{1}{2}'.format(name, ' ' if warning_text else '', warning_text)
display.deprecated(warning_text, date=removal_date, version=removal_version, collection_name=collection_name)
self.deprecated = True
if removal_date:
self.removal_date = removal_date
if removal_version:
self.removal_version = removal_version
self.deprecation_warnings.append(warning_text)
return self
def resolve(self, resolved_name, resolved_path, resolved_collection, exit_reason, action_plugin):
self.pending_redirect = None
self.plugin_resolved_name = resolved_name
self.plugin_resolved_path = resolved_path
self.plugin_resolved_collection = resolved_collection
self.exit_reason = exit_reason
self.resolved = True
self.action_plugin = action_plugin
return self
def redirect(self, redirect_name):
self.pending_redirect = redirect_name
self.exit_reason = 'pending redirect resolution from {0} to {1}'.format(self.original_name, redirect_name)
self.resolved = False
return self
def nope(self, exit_reason):
self.pending_redirect = None
self.exit_reason = exit_reason
self.resolved = False
return self
class PluginLoader:
'''
PluginLoader loads plugins from the configured plugin directories.
It searches for plugins by iterating through the combined list of play basedirs, configured
paths, and the python path. The first match is used.
'''
def __init__(self, class_name, package, config, subdir, aliases=None, required_base_class=None):
aliases = {} if aliases is None else aliases
self.class_name = class_name
self.base_class = required_base_class
self.package = package
self.subdir = subdir
# FIXME: remove alias dict in favor of alias by symlink?
self.aliases = aliases
if config and not isinstance(config, list):
config = [config]
elif not config:
config = []
self.config = config
if class_name not in MODULE_CACHE:
MODULE_CACHE[class_name] = {}
if class_name not in PATH_CACHE:
PATH_CACHE[class_name] = None
if class_name not in PLUGIN_PATH_CACHE:
PLUGIN_PATH_CACHE[class_name] = defaultdict(dict)
# hold dirs added at runtime outside of config
self._extra_dirs = []
# caches
self._module_cache = MODULE_CACHE[class_name]
self._paths = PATH_CACHE[class_name]
self._plugin_path_cache = PLUGIN_PATH_CACHE[class_name]
self._searched_paths = set()
@property
def type(self):
return AnsibleCollectionRef.legacy_plugin_dir_to_plugin_type(self.subdir)
def __repr__(self):
return 'PluginLoader(type={0})'.format(self.type)
def _clear_caches(self):
if C.OLD_PLUGIN_CACHE_CLEARING:
self._paths = None
else:
# reset global caches
MODULE_CACHE[self.class_name] = {}
PATH_CACHE[self.class_name] = None
PLUGIN_PATH_CACHE[self.class_name] = defaultdict(dict)
# reset internal caches
self._module_cache = MODULE_CACHE[self.class_name]
self._paths = PATH_CACHE[self.class_name]
self._plugin_path_cache = PLUGIN_PATH_CACHE[self.class_name]
self._searched_paths = set()
def __setstate__(self, data):
'''
Deserializer.
'''
class_name = data.get('class_name')
package = data.get('package')
config = data.get('config')
subdir = data.get('subdir')
aliases = data.get('aliases')
base_class = data.get('base_class')
PATH_CACHE[class_name] = data.get('PATH_CACHE')
PLUGIN_PATH_CACHE[class_name] = data.get('PLUGIN_PATH_CACHE')
self.__init__(class_name, package, config, subdir, aliases, base_class)
self._extra_dirs = data.get('_extra_dirs', [])
self._searched_paths = data.get('_searched_paths', set())
def __getstate__(self):
'''
Serializer.
'''
return dict(
class_name=self.class_name,
base_class=self.base_class,
package=self.package,
config=self.config,
subdir=self.subdir,
aliases=self.aliases,
_extra_dirs=self._extra_dirs,
_searched_paths=self._searched_paths,
PATH_CACHE=PATH_CACHE[self.class_name],
PLUGIN_PATH_CACHE=PLUGIN_PATH_CACHE[self.class_name],
)
def format_paths(self, paths):
''' Returns a string suitable for printing of the search path '''
# Uses a list to get the order right
ret = []
for i in paths:
if i not in ret:
ret.append(i)
return os.pathsep.join(ret)
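    # Illustrative: duplicates are dropped while order is preserved, e.g. on POSIX
    #   format_paths(['/a/plugins', '/b/plugins', '/a/plugins']) -> '/a/plugins:/b/plugins'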
def print_paths(self):
return self.format_paths(self._get_paths(subdirs=False))
def _all_directories(self, dir):
results = []
results.append(dir)
for root, subdirs, files in os.walk(dir, followlinks=True):
if '__init__.py' in files:
for x in subdirs:
results.append(os.path.join(root, x))
return results
def _get_package_paths(self, subdirs=True):
''' Gets the path of a Python package '''
if not self.package:
return []
if not hasattr(self, 'package_path'):
m = __import__(self.package)
parts = self.package.split('.')[1:]
for parent_mod in parts:
m = getattr(m, parent_mod)
self.package_path = to_text(os.path.dirname(m.__file__), errors='surrogate_or_strict')
if subdirs:
return self._all_directories(self.package_path)
return [self.package_path]
def _get_paths_with_context(self, subdirs=True):
''' Return a list of PluginPathContext objects to search for plugins in '''
# FIXME: This is potentially buggy if subdirs is sometimes True and sometimes False.
# In current usage, everything calls this with subdirs=True except for module_utils_loader and ansible-doc
# which always calls it with subdirs=False. So there currently isn't a problem with this caching.
if self._paths is not None:
return self._paths
ret = [PluginPathContext(p, False) for p in self._extra_dirs]
# look in any configured plugin paths, allow one level deep for subcategories
if self.config is not None:
for path in self.config:
path = os.path.abspath(os.path.expanduser(path))
if subdirs:
contents = glob.glob("%s/*" % path) + glob.glob("%s/*/*" % path)
for c in contents:
c = to_text(c, errors='surrogate_or_strict')
if os.path.isdir(c) and c not in ret:
ret.append(PluginPathContext(c, False))
path = to_text(path, errors='surrogate_or_strict')
if path not in ret:
ret.append(PluginPathContext(path, False))
# look for any plugins installed in the package subtree
# Note package path always gets added last so that every other type of
# path is searched before it.
ret.extend([PluginPathContext(p, True) for p in self._get_package_paths(subdirs=subdirs)])
        # HACK: because powershell modules are in the same directory
        # hierarchy as other modules, we have to process them last. This is
        # because powershell only works on windows, but the other modules work
        # anywhere (possibly including windows if the correct language
        # interpreter is installed). The non-powershell modules can have any
        # file extension, so powershell modules would otherwise be picked up
        # by that search as well.
        # The non-hack way to fix this is to have powershell modules be
        # a different PluginLoader/ModuleLoader. But that requires changing
        # other things too (known things to change would be PATHS_CACHE,
        # PLUGIN_PATHS_CACHE, and MODULE_CACHE). Since those three dicts key
        # on the class_name, and neither regular modules nor powershell modules
        # would have class_names, they would not work as written.
#
# The expected sort order is paths in the order in 'ret' with paths ending in '/windows' at the end,
# also in the original order they were found in 'ret'.
# The .sort() method is guaranteed to be stable, so original order is preserved.
ret.sort(key=lambda p: p.path.endswith('/windows'))
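        # Illustrative ordering with hypothetical paths: the stable sort turns
        #   ['/a/modules/windows', '/a/modules', '/b/modules']
        # into
        #   ['/a/modules', '/b/modules', '/a/modules/windows']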
# cache and return the result
self._paths = ret
return ret
def _get_paths(self, subdirs=True):
''' Return a list of paths to search for plugins in '''
paths_with_context = self._get_paths_with_context(subdirs=subdirs)
return [path_with_context.path for path_with_context in paths_with_context]
def _load_config_defs(self, name, module, path):
''' Reads plugin docs to find configuration setting definitions, to push to config manager for later use '''
# plugins w/o class name don't support config
if self.class_name:
type_name = get_plugin_class(self.class_name)
            # if type_name != 'module_doc_fragment':
if type_name in C.CONFIGURABLE_PLUGINS and not C.config.has_configuration_definition(type_name, name):
dstring = AnsibleLoader(getattr(module, 'DOCUMENTATION', ''), file_name=path).get_single_data()
# TODO: allow configurable plugins to use sidecar
# if not dstring:
# filename, cn = find_plugin_docfile( name, type_name, self, [os.path.dirname(path)], C.YAML_DOC_EXTENSIONS)
# # TODO: dstring = AnsibleLoader(, file_name=path).get_single_data()
if dstring:
add_fragments(dstring, path, fragment_loader=fragment_loader, is_module=(type_name == 'module'))
if 'options' in dstring and isinstance(dstring['options'], dict):
C.config.initialize_plugin_configuration_definitions(type_name, name, dstring['options'])
display.debug('Loaded config def from plugin (%s/%s)' % (type_name, name))
def add_directory(self, directory, with_subdir=False):
''' Adds an additional directory to the search path '''
directory = os.path.realpath(directory)
if directory is not None:
if with_subdir:
directory = os.path.join(directory, self.subdir)
if directory not in self._extra_dirs:
# append the directory and invalidate the path cache
self._extra_dirs.append(directory)
self._clear_caches()
display.debug('Added %s to loader search path' % (directory))
def _query_collection_routing_meta(self, acr, plugin_type, extension=None):
collection_pkg = import_module(acr.n_python_collection_package_name)
if not collection_pkg:
return None
# FIXME: shouldn't need this...
try:
# force any type-specific metadata postprocessing to occur
import_module(acr.n_python_collection_package_name + '.plugins.{0}'.format(plugin_type))
except ImportError:
pass
# this will be created by the collection PEP302 loader
collection_meta = getattr(collection_pkg, '_collection_meta', None)
if not collection_meta:
return None
# TODO: add subdirs support
# check for extension-specific entry first (eg 'setup.ps1')
# TODO: str/bytes on extension/name munging
if acr.subdirs:
subdir_qualified_resource = '.'.join([acr.subdirs, acr.resource])
else:
subdir_qualified_resource = acr.resource
entry = collection_meta.get('plugin_routing', {}).get(plugin_type, {}).get(subdir_qualified_resource + extension, None)
if not entry:
# try for extension-agnostic entry
entry = collection_meta.get('plugin_routing', {}).get(plugin_type, {}).get(subdir_qualified_resource, None)
return entry
def _find_fq_plugin(self, fq_name, extension, plugin_load_context, ignore_deprecated=False):
"""Search builtin paths to find a plugin. No external paths are searched,
meaning plugins inside roles inside collections will be ignored.
"""
plugin_load_context.resolved = False
plugin_type = AnsibleCollectionRef.legacy_plugin_dir_to_plugin_type(self.subdir)
acr = AnsibleCollectionRef.from_fqcr(fq_name, plugin_type)
# check collection metadata to see if any special handling is required for this plugin
routing_metadata = self._query_collection_routing_meta(acr, plugin_type, extension=extension)
action_plugin = None
# TODO: factor this into a wrapper method
if routing_metadata:
deprecation = routing_metadata.get('deprecation', None)
# this will no-op if there's no deprecation metadata for this plugin
if not ignore_deprecated:
plugin_load_context.record_deprecation(fq_name, deprecation, acr.collection)
tombstone = routing_metadata.get('tombstone', None)
# FIXME: clean up text gen
if tombstone:
removal_date = tombstone.get('removal_date')
removal_version = tombstone.get('removal_version')
warning_text = tombstone.get('warning_text') or ''
warning_text = '{0} has been removed.{1}{2}'.format(fq_name, ' ' if warning_text else '', warning_text)
removed_msg = display.get_deprecation_message(msg=warning_text, version=removal_version,
date=removal_date, removed=True,
collection_name=acr.collection)
plugin_load_context.removal_date = removal_date
plugin_load_context.removal_version = removal_version
plugin_load_context.resolved = True
plugin_load_context.exit_reason = removed_msg
raise AnsiblePluginRemovedError(removed_msg, plugin_load_context=plugin_load_context)
redirect = routing_metadata.get('redirect', None)
if redirect:
# Prevent mystery redirects that would be determined by the collections keyword
if not AnsibleCollectionRef.is_valid_fqcr(redirect):
raise AnsibleError(
f"Collection {acr.collection} contains invalid redirect for {fq_name}: {redirect}. "
"Redirects must use fully qualified collection names."
)
# FIXME: remove once this is covered in debug or whatever
display.vv("redirecting (type: {0}) {1} to {2}".format(plugin_type, fq_name, redirect))
# The name doing the redirection is added at the beginning of _resolve_plugin_step,
# but if the unqualified name is used in conjunction with the collections keyword, only
# the unqualified name is in the redirect list.
if fq_name not in plugin_load_context.redirect_list:
plugin_load_context.redirect_list.append(fq_name)
return plugin_load_context.redirect(redirect)
# TODO: non-FQCN case, do we support `.` prefix for current collection, assume it with no dots, require it for subdirs in current, or ?
if self.type == 'modules':
action_plugin = routing_metadata.get('action_plugin')
n_resource = to_native(acr.resource, errors='strict')
# we want this before the extension is added
full_name = '{0}.{1}'.format(acr.n_python_package_name, n_resource)
if extension:
n_resource += extension
pkg = sys.modules.get(acr.n_python_package_name)
if not pkg:
            # FIXME: there must be a cheaper/safer way to do this
try:
pkg = import_module(acr.n_python_package_name)
except ImportError:
return plugin_load_context.nope('Python package {0} not found'.format(acr.n_python_package_name))
pkg_path = os.path.dirname(pkg.__file__)
n_resource_path = os.path.join(pkg_path, n_resource)
# FIXME: and is file or file link or ...
if os.path.exists(n_resource_path):
return plugin_load_context.resolve(
full_name, to_text(n_resource_path), acr.collection, 'found exact match for {0} in {1}'.format(full_name, acr.collection), action_plugin)
if extension:
# the request was extension-specific, don't try for an extensionless match
return plugin_load_context.nope('no match for {0} in {1}'.format(to_text(n_resource), acr.collection))
# look for any matching extension in the package location (sans filter)
found_files = [f
for f in glob.iglob(os.path.join(pkg_path, n_resource) + '.*')
if os.path.isfile(f) and not f.endswith(C.MODULE_IGNORE_EXTS)]
if not found_files:
return plugin_load_context.nope('failed fuzzy extension match for {0} in {1}'.format(full_name, acr.collection))
        found_files = sorted(found_files)  # sort lexicographically to ensure deterministic results
if len(found_files) > 1:
            display.debug('Found several possible candidates for the plugin; using the first: %s' % ','.join(found_files))
return plugin_load_context.resolve(
full_name, to_text(found_files[0]), acr.collection,
'found fuzzy extension match for {0} in {1}'.format(full_name, acr.collection), action_plugin)
def find_plugin(self, name, mod_type='', ignore_deprecated=False, check_aliases=False, collection_list=None):
''' Find a plugin named name '''
result = self.find_plugin_with_context(name, mod_type, ignore_deprecated, check_aliases, collection_list)
if result.resolved and result.plugin_resolved_path:
return result.plugin_resolved_path
return None
def find_plugin_with_context(self, name, mod_type='', ignore_deprecated=False, check_aliases=False, collection_list=None):
''' Find a plugin named name, returning contextual info about the load, recursively resolving redirection '''
plugin_load_context = PluginLoadContext()
plugin_load_context.original_name = name
while True:
result = self._resolve_plugin_step(name, mod_type, ignore_deprecated, check_aliases, collection_list, plugin_load_context=plugin_load_context)
if result.pending_redirect:
if result.pending_redirect in result.redirect_list:
raise AnsiblePluginCircularRedirect('plugin redirect loop resolving {0} (path: {1})'.format(result.original_name, result.redirect_list))
name = result.pending_redirect
result.pending_redirect = None
plugin_load_context = result
else:
break
# TODO: smuggle these to the controller when we're in a worker, reduce noise from normal things like missing plugin packages during collection search
if plugin_load_context.error_list:
display.warning("errors were encountered during the plugin load for {0}:\n{1}".format(name, plugin_load_context.error_list))
# TODO: display/return import_error_list? Only useful for forensics...
# FIXME: store structured deprecation data in PluginLoadContext and use display.deprecate
# if plugin_load_context.deprecated and C.config.get_config_value('DEPRECATION_WARNINGS'):
# for dw in plugin_load_context.deprecation_warnings:
# # TODO: need to smuggle these to the controller if we're in a worker context
# display.warning('[DEPRECATION WARNING] ' + dw)
return plugin_load_context
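    # Illustrative: resolving a routed name may walk several redirects; each hop is
    # recorded in redirect_list, e.g. ['ns.coll.old_name', 'ns.coll.new_name'] for a
    # hypothetical collection that routes old_name to new_name.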
# FIXME: name bikeshed
def _resolve_plugin_step(self, name, mod_type='', ignore_deprecated=False,
check_aliases=False, collection_list=None, plugin_load_context=PluginLoadContext()):
if not plugin_load_context:
raise ValueError('A PluginLoadContext is required')
plugin_load_context.redirect_list.append(name)
plugin_load_context.resolved = False
if name in _PLUGIN_FILTERS[self.package]:
plugin_load_context.exit_reason = '{0} matched a defined plugin filter'.format(name)
return plugin_load_context
if mod_type:
suffix = mod_type
elif self.class_name:
# Ansible plugins that run in the controller process (most plugins)
suffix = '.py'
else:
# Only Ansible Modules. Ansible modules can be any executable so
# they can have any suffix
suffix = ''
# FIXME: need this right now so we can still load shipped PS module_utils- come up with a more robust solution
if (AnsibleCollectionRef.is_valid_fqcr(name) or collection_list) and not name.startswith('Ansible'):
if '.' in name or not collection_list:
candidates = [name]
else:
candidates = ['{0}.{1}'.format(c, name) for c in collection_list]
for candidate_name in candidates:
try:
plugin_load_context.load_attempts.append(candidate_name)
# HACK: refactor this properly
if candidate_name.startswith('ansible.legacy'):
# 'ansible.legacy' refers to the plugin finding behavior used before collections existed.
# They need to search 'library' and the various '*_plugins' directories in order to find the file.
plugin_load_context = self._find_plugin_legacy(name.removeprefix('ansible.legacy.'),
plugin_load_context, ignore_deprecated, check_aliases, suffix)
else:
# 'ansible.builtin' should be handled here. This means only internal, or builtin, paths are searched.
plugin_load_context = self._find_fq_plugin(candidate_name, suffix, plugin_load_context=plugin_load_context,
ignore_deprecated=ignore_deprecated)
# Pending redirects are added to the redirect_list at the beginning of _resolve_plugin_step.
# Once redirects are resolved, ensure the final FQCN is added here.
# e.g. 'ns.coll.module' is included rather than only 'module' if a collections list is provided:
# - module:
# collections: ['ns.coll']
if plugin_load_context.resolved and candidate_name not in plugin_load_context.redirect_list:
plugin_load_context.redirect_list.append(candidate_name)
if plugin_load_context.resolved or plugin_load_context.pending_redirect: # if we got an answer or need to chase down a redirect, return
return plugin_load_context
except (AnsiblePluginRemovedError, AnsiblePluginCircularRedirect, AnsibleCollectionUnsupportedVersionError):
# these are generally fatal, let them fly
raise
except ImportError as ie:
plugin_load_context.import_error_list.append(ie)
except Exception as ex:
# FIXME: keep actual errors, not just assembled messages
plugin_load_context.error_list.append(to_native(ex))
if plugin_load_context.error_list:
display.debug(msg='plugin lookup for {0} failed; errors: {1}'.format(name, '; '.join(plugin_load_context.error_list)))
plugin_load_context.exit_reason = 'no matches found for {0}'.format(name)
return plugin_load_context
# if we got here, there's no collection list and it's not an FQ name, so do legacy lookup
return self._find_plugin_legacy(name, plugin_load_context, ignore_deprecated, check_aliases, suffix)
def _find_plugin_legacy(self, name, plugin_load_context, ignore_deprecated=False, check_aliases=False, suffix=None):
"""Search library and various *_plugins paths in order to find the file.
This was behavior prior to the existence of collections.
"""
plugin_load_context.resolved = False
if check_aliases:
name = self.aliases.get(name, name)
# The particular cache to look for modules within. This matches the
# requested mod_type
pull_cache = self._plugin_path_cache[suffix]
try:
path_with_context = pull_cache[name]
plugin_load_context.plugin_resolved_path = path_with_context.path
plugin_load_context.plugin_resolved_name = name
plugin_load_context.plugin_resolved_collection = 'ansible.builtin' if path_with_context.internal else ''
plugin_load_context._resolved_fqcn = ('ansible.builtin.' + name if path_with_context.internal else name)
plugin_load_context.resolved = True
return plugin_load_context
except KeyError:
# Cache miss. Now let's find the plugin
pass
# TODO: Instead of using the self._paths cache (PATH_CACHE) and
# self._searched_paths we could use an iterator. Before enabling that
# we need to make sure we don't want to add additional directories
# (add_directory()) once we start using the iterator.
# We can use _get_paths_with_context() since add_directory() forces a cache refresh.
for path_with_context in (p for p in self._get_paths_with_context() if p.path not in self._searched_paths and os.path.isdir(to_bytes(p.path))):
path = path_with_context.path
b_path = to_bytes(path)
display.debug('trying %s' % path)
plugin_load_context.load_attempts.append(path)
internal = path_with_context.internal
            try:
                full_paths = (os.path.join(b_path, f) for f in os.listdir(b_path))
            except OSError as e:
                display.warning("Error accessing plugin paths: %s" % to_text(e))
                # skip this path; otherwise full_paths would be unbound below
                continue
for full_path in (to_native(f) for f in full_paths if os.path.isfile(f) and not f.endswith(b'__init__.py')):
full_name = os.path.basename(full_path)
# HACK: We have no way of executing python byte compiled files as ansible modules so specifically exclude them
# FIXME: I believe this is only correct for modules and module_utils.
                # For all other plugins, .pyc and .pyo should be valid.
if any(full_path.endswith(x) for x in C.MODULE_IGNORE_EXTS):
continue
splitname = os.path.splitext(full_name)
base_name = splitname[0]
try:
extension = splitname[1]
except IndexError:
extension = ''
# everything downstream expects unicode
full_path = to_text(full_path, errors='surrogate_or_strict')
# Module found, now enter it into the caches that match this file
if base_name not in self._plugin_path_cache['']:
self._plugin_path_cache[''][base_name] = PluginPathContext(full_path, internal)
if full_name not in self._plugin_path_cache['']:
self._plugin_path_cache[''][full_name] = PluginPathContext(full_path, internal)
if base_name not in self._plugin_path_cache[extension]:
self._plugin_path_cache[extension][base_name] = PluginPathContext(full_path, internal)
if full_name not in self._plugin_path_cache[extension]:
self._plugin_path_cache[extension][full_name] = PluginPathContext(full_path, internal)
self._searched_paths.add(path)
try:
path_with_context = pull_cache[name]
plugin_load_context.plugin_resolved_path = path_with_context.path
plugin_load_context.plugin_resolved_name = name
plugin_load_context.plugin_resolved_collection = 'ansible.builtin' if path_with_context.internal else ''
plugin_load_context._resolved_fqcn = 'ansible.builtin.' + name if path_with_context.internal else name
plugin_load_context.resolved = True
return plugin_load_context
except KeyError:
# Didn't find the plugin in this directory. Load modules from the next one
pass
# if nothing is found, try finding alias/deprecated
if not name.startswith('_'):
alias_name = '_' + name
# We've already cached all the paths at this point
if alias_name in pull_cache:
path_with_context = pull_cache[alias_name]
if not ignore_deprecated and not os.path.islink(path_with_context.path):
# FIXME: this is not always the case, some are just aliases
display.deprecated('%s is kept for backwards compatibility but usage is discouraged. ' # pylint: disable=ansible-deprecated-no-version
'The module documentation details page may explain more about this rationale.' % name.lstrip('_'))
plugin_load_context.plugin_resolved_path = path_with_context.path
plugin_load_context.plugin_resolved_name = alias_name
plugin_load_context.plugin_resolved_collection = 'ansible.builtin' if path_with_context.internal else ''
plugin_load_context._resolved_fqcn = 'ansible.builtin.' + alias_name if path_with_context.internal else alias_name
plugin_load_context.resolved = True
return plugin_load_context
# last ditch, if it's something that can be redirected, look for a builtin redirect before giving up
candidate_fqcr = 'ansible.builtin.{0}'.format(name)
if '.' not in name and AnsibleCollectionRef.is_valid_fqcr(candidate_fqcr):
return self._find_fq_plugin(fq_name=candidate_fqcr, extension=suffix, plugin_load_context=plugin_load_context, ignore_deprecated=ignore_deprecated)
return plugin_load_context.nope('{0} is not eligible for last-chance resolution'.format(name))
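    # Illustrative: a bare legacy name such as 'ping' that misses every configured
    # path is retried above as 'ansible.builtin.ping' before resolution finally fails.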
def has_plugin(self, name, collection_list=None):
''' Checks if a plugin named name exists '''
try:
return self.find_plugin(name, collection_list=collection_list) is not None
except Exception as ex:
if isinstance(ex, AnsibleError):
raise
# log and continue, likely an innocuous type/package loading failure in collections import
display.debug('has_plugin error: {0}'.format(to_text(ex)))
__contains__ = has_plugin
def _load_module_source(self, name, path):
# avoid collisions across plugins
if name.startswith('ansible_collections.'):
full_name = name
else:
full_name = '.'.join([self.package, name])
if full_name in sys.modules:
# Avoids double loading, See https://github.com/ansible/ansible/issues/13110
return sys.modules[full_name]
with warnings.catch_warnings():
# FIXME: this still has issues if the module was previously imported but not "cached",
# we should bypass this entire codepath for things that are directly importable
warnings.simplefilter("ignore", RuntimeWarning)
spec = importlib.util.spec_from_file_location(to_native(full_name), to_native(path))
module = importlib.util.module_from_spec(spec)
# mimic import machinery; make the module-being-loaded available in sys.modules during import
# and remove if there's a failure...
sys.modules[full_name] = module
try:
spec.loader.exec_module(module)
except Exception:
del sys.modules[full_name]
raise
return module
def _update_object(self, obj, name, path, redirected_names=None, resolved=None):
# set extra info on the module, in case we want it later
setattr(obj, '_original_path', path)
setattr(obj, '_load_name', name)
setattr(obj, '_redirected_names', redirected_names or [])
names = []
if resolved:
names.append(resolved)
if redirected_names:
# reverse list so best name comes first
names.extend(redirected_names[::-1])
if not names:
raise AnsibleError(f"Missing FQCN for plugin source {name}")
setattr(obj, 'ansible_aliases', names)
setattr(obj, 'ansible_name', names[0])
def get(self, name, *args, **kwargs):
return self.get_with_context(name, *args, **kwargs).object
def get_with_context(self, name, *args, **kwargs):
''' instantiates a plugin of the given name using arguments '''
found_in_cache = True
class_only = kwargs.pop('class_only', False)
collection_list = kwargs.pop('collection_list', None)
if name in self.aliases:
name = self.aliases[name]
plugin_load_context = self.find_plugin_with_context(name, collection_list=collection_list)
if not plugin_load_context.resolved or not plugin_load_context.plugin_resolved_path:
# FIXME: this is probably an error (eg removed plugin)
return get_with_context_result(None, plugin_load_context)
fq_name = plugin_load_context.resolved_fqcn
if '.' not in fq_name:
fq_name = '.'.join((plugin_load_context.plugin_resolved_collection, fq_name))
name = plugin_load_context.plugin_resolved_name
path = plugin_load_context.plugin_resolved_path
redirected_names = plugin_load_context.redirect_list or []
if path not in self._module_cache:
self._module_cache[path] = self._load_module_source(name, path)
found_in_cache = False
self._load_config_defs(name, self._module_cache[path], path)
obj = getattr(self._module_cache[path], self.class_name)
if self.base_class:
# The import path is hardcoded and should be the right place,
# so we are not expecting an ImportError.
module = __import__(self.package, fromlist=[self.base_class])
# Check whether this obj has the required base class.
try:
plugin_class = getattr(module, self.base_class)
except AttributeError:
return get_with_context_result(None, plugin_load_context)
if not issubclass(obj, plugin_class):
return get_with_context_result(None, plugin_load_context)
# FIXME: update this to use the load context
self._display_plugin_load(self.class_name, name, self._searched_paths, path, found_in_cache=found_in_cache, class_only=class_only)
if not class_only:
try:
# A plugin may need to use its _load_name in __init__ (for example, to set
# or get options from config), so update the object before using the constructor
instance = object.__new__(obj)
self._update_object(instance, name, path, redirected_names, fq_name)
obj.__init__(instance, *args, **kwargs) # pylint: disable=unnecessary-dunder-call
obj = instance
except TypeError as e:
if "abstract" in e.args[0]:
# Abstract Base Class or incomplete plugin, don't load
display.v('Returning not found on "%s" as it has unimplemented abstract methods; %s' % (name, to_native(e)))
return get_with_context_result(None, plugin_load_context)
raise
self._update_object(obj, name, path, redirected_names, fq_name)
return get_with_context_result(obj, plugin_load_context)
def _display_plugin_load(self, class_name, name, searched_paths, path, found_in_cache=None, class_only=None):
''' formats data to display debug info for plugin loading, also avoids processing unless really needed '''
if C.DEFAULT_DEBUG:
msg = 'Loading %s \'%s\' from %s' % (class_name, os.path.basename(name), path)
if len(searched_paths) > 1:
msg = '%s (searched paths: %s)' % (msg, self.format_paths(searched_paths))
if found_in_cache or class_only:
msg = '%s (found_in_cache=%s, class_only=%s)' % (msg, found_in_cache, class_only)
display.debug(msg)
def all(self, *args, **kwargs):
'''
Iterate through all plugins of this type, in configured paths (no collections)
A plugin loader is initialized with a specific type. This function is an iterator returning
all of the plugins of that type to the caller.
:kwarg path_only: If this is set to True, then we return the paths to where the plugins reside
instead of an instance of the plugin. This conflicts with class_only and both should
not be set.
:kwarg class_only: If this is set to True then we return the python class which implements
a plugin rather than an instance of the plugin. This conflicts with path_only and both
should not be set.
:kwarg _dedupe: By default, we only return one plugin per plugin name. Deduplication happens
in the same way as the :meth:`get` and :meth:`find_plugin` methods resolve which plugin
should take precedence. If this is set to False, then we return all of the plugins
found, including those with duplicate names. In the case of duplicates, the order in
which they are returned is the one that would take precedence first, followed by the
others in decreasing precedence order. This should only be used by subclasses which
want to manage their own deduplication of the plugins.
:*args: Any extra arguments are passed to each plugin when it is instantiated.
:**kwargs: Any extra keyword arguments are passed to each plugin when it is instantiated.
'''
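        # Illustrative usage (assuming the module-level loaders defined below):
        #   for path in action_loader.all(path_only=True):
        #       print(path)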
# TODO: Change the signature of this method to:
# def all(return_type='instance', args=None, kwargs=None):
# if args is None: args = []
# if kwargs is None: kwargs = {}
# return_type can be instance, class, or path.
# These changes will mean that plugin parameters won't conflict with our params and
# will also make it impossible to request both a path and a class at the same time.
#
# Move _dedupe to be a class attribute, CUSTOM_DEDUPE, with subclasses for filters and
# tests setting it to True
dedupe = kwargs.pop('_dedupe', True)
path_only = kwargs.pop('path_only', False)
class_only = kwargs.pop('class_only', False)
# Having both path_only and class_only is a coding bug
if path_only and class_only:
raise AnsibleError('Do not set both path_only and class_only when calling PluginLoader.all()')
all_matches = []
found_in_cache = True
legacy_excluding_builtin = set()
for path_with_context in self._get_paths_with_context():
matches = glob.glob(to_native(os.path.join(path_with_context.path, "*.py")))
if not path_with_context.internal:
legacy_excluding_builtin.update(matches)
# we sort within each path, but keep path precedence from config
all_matches.extend(sorted(matches, key=os.path.basename))
loaded_modules = set()
for path in all_matches:
name = os.path.splitext(path)[0]
basename = os.path.basename(name)
if basename in _PLUGIN_FILTERS[self.package]:
display.debug("'%s' skipped due to a defined plugin filter" % basename)
continue
if basename == '__init__' or (basename == 'base' and self.package == 'ansible.plugins.cache'):
                # cache has legacy 'base.py' file, which is a wrapper for __init__.py
display.debug("'%s' skipped due to reserved name" % basename)
continue
if dedupe and basename in loaded_modules:
display.debug("'%s' skipped as duplicate" % basename)
continue
loaded_modules.add(basename)
if path_only:
yield path
continue
if path not in self._module_cache:
if self.type in ('filter', 'test'):
# filter and test plugin files can contain multiple plugins
# they must have a unique python module name to prevent them from shadowing each other
full_name = '{0}_{1}'.format(abs(hash(path)), basename)
else:
full_name = basename
try:
module = self._load_module_source(full_name, path)
except Exception as e:
display.warning("Skipping plugin (%s), cannot load: %s" % (path, to_text(e)))
continue
self._module_cache[path] = module
found_in_cache = False
else:
module = self._module_cache[path]
self._load_config_defs(basename, module, path)
try:
obj = getattr(module, self.class_name)
except AttributeError as e:
display.warning("Skipping plugin (%s) as it seems to be invalid: %s" % (path, to_text(e)))
continue
if self.base_class:
# The import path is hardcoded and should be the right place,
# so we are not expecting an ImportError.
module = __import__(self.package, fromlist=[self.base_class])
# Check whether this obj has the required base class.
try:
plugin_class = getattr(module, self.base_class)
except AttributeError:
continue
if not issubclass(obj, plugin_class):
continue
self._display_plugin_load(self.class_name, basename, self._searched_paths, path, found_in_cache=found_in_cache, class_only=class_only)
if not class_only:
try:
obj = obj(*args, **kwargs)
except TypeError as e:
display.warning("Skipping plugin (%s) as it seems to be incomplete: %s" % (path, to_text(e)))
if path in legacy_excluding_builtin:
fqcn = basename
else:
fqcn = f"ansible.builtin.{basename}"
self._update_object(obj, basename, path, resolved=fqcn)
yield obj
class Jinja2Loader(PluginLoader):
"""
PluginLoader optimized for Jinja2 plugins
The filter and test plugins are Jinja2 plugins encapsulated inside of our plugin format.
We need to do a few things differently in the base class because of file == plugin
assumptions and dedupe logic.
"""
def __init__(self, class_name, package, config, subdir, plugin_wrapper_type, aliases=None, required_base_class=None):
super(Jinja2Loader, self).__init__(class_name, package, config, subdir, aliases=aliases, required_base_class=required_base_class)
self._plugin_wrapper_type = plugin_wrapper_type
self._cached_non_collection_wrappers = {}
def _clear_caches(self):
super(Jinja2Loader, self)._clear_caches()
self._cached_non_collection_wrappers = {}
def find_plugin(self, name, mod_type='', ignore_deprecated=False, check_aliases=False, collection_list=None):
raise NotImplementedError('find_plugin is not supported on Jinja2Loader')
@property
def method_map_name(self):
return get_plugin_class(self.class_name) + 's'
def get_contained_plugins(self, collection, plugin_path, name):
plugins = []
full_name = '.'.join(['ansible_collections', collection, 'plugins', self.type, name])
try:
# use 'parent' loader class to find files, but cannot return this as it can contain multiple plugins per file
if plugin_path not in self._module_cache:
self._module_cache[plugin_path] = self._load_module_source(full_name, plugin_path)
module = self._module_cache[plugin_path]
obj = getattr(module, self.class_name)
except Exception as e:
raise KeyError('Failed to load %s for %s: %s' % (plugin_path, collection, to_native(e)))
plugin_impl = obj()
if plugin_impl is None:
raise KeyError('Could not find %s.%s' % (collection, name))
try:
method_map = getattr(plugin_impl, self.method_map_name)
plugin_map = method_map().items()
except Exception as e:
display.warning("Ignoring %s plugins in '%s' as it seems to be invalid: %r" % (self.type, to_text(plugin_path), e))
return plugins
for func_name, func in plugin_map:
fq_name = '.'.join((collection, func_name))
full = '.'.join((full_name, func_name))
plugin = self._plugin_wrapper_type(func)
if plugin in plugins:
continue
self._update_object(plugin, full, plugin_path, resolved=fq_name)
plugins.append(plugin)
return plugins
# FUTURE: now that the resulting plugins are closer, refactor base class method with some extra
# hooks so we can avoid all the duplicated plugin metadata logic, and also cache the collection results properly here
def get_with_context(self, name, *args, **kwargs):
# pop N/A kwargs to avoid passthrough to parent methods
kwargs.pop('class_only', False)
kwargs.pop('collection_list', None)
context = PluginLoadContext()
# avoid collection path for legacy
name = name.removeprefix('ansible.legacy.')
self._ensure_non_collection_wrappers(*args, **kwargs)
# check for stuff loaded via legacy/builtin paths first
if known_plugin := self._cached_non_collection_wrappers.get(name):
context.resolved = True
context.plugin_resolved_name = name
context.plugin_resolved_path = known_plugin._original_path
context.plugin_resolved_collection = 'ansible.builtin' if known_plugin.ansible_name.startswith('ansible.builtin.') else ''
context._resolved_fqcn = known_plugin.ansible_name
return get_with_context_result(known_plugin, context)
plugin = None
key, leaf_key = get_fqcr_and_name(name)
seen = set()
# follow the meta!
while True:
if key in seen:
raise AnsibleError('recursive collection redirect found for %r' % name, 0)
seen.add(key)
acr = AnsibleCollectionRef.try_parse_fqcr(key, self.type)
if not acr:
raise KeyError('invalid plugin name: {0}'.format(key))
try:
ts = _get_collection_metadata(acr.collection)
except ValueError as e:
# no collection
raise KeyError('Invalid plugin FQCN ({0}): {1}'.format(key, to_native(e)))
# TODO: implement cycle detection (unified across collection redir as well)
routing_entry = ts.get('plugin_routing', {}).get(self.type, {}).get(leaf_key, {})
# check deprecations
deprecation_entry = routing_entry.get('deprecation')
if deprecation_entry:
warning_text = deprecation_entry.get('warning_text')
removal_date = deprecation_entry.get('removal_date')
removal_version = deprecation_entry.get('removal_version')
if not warning_text:
warning_text = '{0} "{1}" is deprecated'.format(self.type, key)
display.deprecated(warning_text, version=removal_version, date=removal_date, collection_name=acr.collection)
# check removal
tombstone_entry = routing_entry.get('tombstone')
if tombstone_entry:
warning_text = tombstone_entry.get('warning_text')
removal_date = tombstone_entry.get('removal_date')
removal_version = tombstone_entry.get('removal_version')
if not warning_text:
warning_text = '{0} "{1}" has been removed'.format(self.type, key)
exc_msg = display.get_deprecation_message(warning_text, version=removal_version, date=removal_date,
collection_name=acr.collection, removed=True)
raise AnsiblePluginRemovedError(exc_msg)
# check redirects
redirect = routing_entry.get('redirect', None)
if redirect:
if not AnsibleCollectionRef.is_valid_fqcr(redirect):
raise AnsibleError(
f"Collection {acr.collection} contains invalid redirect for {acr.collection}.{acr.resource}: {redirect}. "
"Redirects must use fully qualified collection names."
)
next_key, leaf_key = get_fqcr_and_name(redirect, collection=acr.collection)
display.vvv('redirecting (type: {0}) {1}.{2} to {3}'.format(self.type, acr.collection, acr.resource, next_key))
key = next_key
else:
break
try:
pkg = import_module(acr.n_python_package_name)
except ImportError as e:
raise KeyError(to_native(e))
parent_prefix = acr.collection
if acr.subdirs:
parent_prefix = '{0}.{1}'.format(parent_prefix, acr.subdirs)
try:
for dummy, module_name, ispkg in pkgutil.iter_modules(pkg.__path__, prefix=parent_prefix + '.'):
if ispkg:
continue
try:
# use 'parent' loader class to find files, but cannot return this as it can contain
# multiple plugins per file
plugin_impl = super(Jinja2Loader, self).get_with_context(module_name, *args, **kwargs)
method_map = getattr(plugin_impl.object, self.method_map_name)
plugin_map = method_map().items()
except Exception as e:
display.warning(f"Skipping {self.type} plugins in {module_name}'; an error occurred while loading: {e}")
continue
for func_name, func in plugin_map:
fq_name = '.'.join((parent_prefix, func_name))
src_name = f"ansible_collections.{acr.collection}.plugins.{self.type}.{acr.subdirs}.{func_name}"
                    # TODO: load anyway into CACHE so we only match each at end of loop
                    # the files themselves should already be cached by base class caching of modules (python)
if key in (func_name, fq_name):
plugin = self._plugin_wrapper_type(func)
if plugin:
context = plugin_impl.plugin_load_context
self._update_object(plugin, src_name, plugin_impl.object._original_path, resolved=fq_name)
# FIXME: once we start caching these results, we'll be missing functions that would have loaded later
                            break  # go to next file as it can override if dupe (don't break both loops)
except AnsiblePluginRemovedError as apre:
raise AnsibleError(to_native(apre), 0, orig_exc=apre)
except (AnsibleError, KeyError):
raise
except Exception as ex:
display.warning('An unexpected error occurred during Jinja2 plugin loading: {0}'.format(to_native(ex)))
display.vvv('Unexpected error during Jinja2 plugin loading: {0}'.format(format_exc()))
raise AnsibleError(to_native(ex), 0, orig_exc=ex)
return get_with_context_result(plugin, context)
def all(self, *args, **kwargs):
kwargs.pop('_dedupe', None)
path_only = kwargs.pop('path_only', False)
class_only = kwargs.pop('class_only', False) # basically ignored for test/filters since they are functions
# Having both path_only and class_only is a coding bug
if path_only and class_only:
raise AnsibleError('Do not set both path_only and class_only when calling PluginLoader.all()')
self._ensure_non_collection_wrappers(*args, **kwargs)
if path_only:
yield from (w._original_path for w in self._cached_non_collection_wrappers.values())
else:
yield from (w for w in self._cached_non_collection_wrappers.values())
def _ensure_non_collection_wrappers(self, *args, **kwargs):
if self._cached_non_collection_wrappers:
return
# get plugins from files in configured paths (multiple in each)
for p_map in super(Jinja2Loader, self).all(*args, **kwargs):
is_builtin = p_map.ansible_name.startswith('ansible.builtin.')
# p_map is really object from file with class that holds multiple plugins
plugins_list = getattr(p_map, self.method_map_name)
try:
plugins = plugins_list()
except Exception as e:
display.vvvv("Skipping %s plugins in '%s' as it seems to be invalid: %r" % (self.type, to_text(p_map._original_path), e))
continue
for plugin_name in plugins.keys():
if '.' in plugin_name:
display.debug(f'{plugin_name} skipped in {p_map._original_path}; Jinja plugin short names may not contain "."')
continue
if plugin_name in _PLUGIN_FILTERS[self.package]:
display.debug("%s skipped due to a defined plugin filter" % plugin_name)
continue
# the plugin class returned by the loader may host multiple Jinja plugins, but we wrap each plugin in
# its own surrogate wrapper instance here to ease the bookkeeping...
wrapper = self._plugin_wrapper_type(plugins[plugin_name])
fqcn = plugin_name
collection = '.'.join(p_map.ansible_name.split('.')[:2]) if p_map.ansible_name.count('.') >= 2 else ''
if not plugin_name.startswith(collection):
fqcn = f"{collection}.{plugin_name}"
self._update_object(wrapper, plugin_name, p_map._original_path, resolved=fqcn)
target_names = {plugin_name, fqcn}
if is_builtin:
target_names.add(f'ansible.builtin.{plugin_name}')
for target_name in target_names:
if existing_plugin := self._cached_non_collection_wrappers.get(target_name):
                        display.debug(f'Jinja plugin {target_name} from {p_map._original_path} skipped; '
                                      f'shadowed by plugin from {existing_plugin._original_path}')
continue
self._cached_non_collection_wrappers[target_name] = wrapper
def get_fqcr_and_name(resource, collection='ansible.builtin'):
if '.' not in resource:
name = resource
fqcr = collection + '.' + resource
else:
name = resource.split('.')[-1]
fqcr = resource
return fqcr, name
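# Illustrative results of the defaulting logic above:
#   get_fqcr_and_name('ipmath') -> ('ansible.builtin.ipmath', 'ipmath')
#   get_fqcr_and_name('ansible.utils.ipmath') -> ('ansible.utils.ipmath', 'ipmath')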
def _load_plugin_filter():
filters = _PLUGIN_FILTERS
user_set = False
if C.PLUGIN_FILTERS_CFG is None:
filter_cfg = '/etc/ansible/plugin_filters.yml'
else:
filter_cfg = C.PLUGIN_FILTERS_CFG
user_set = True
if os.path.exists(filter_cfg):
with open(filter_cfg, 'rb') as f:
try:
filter_data = from_yaml(f.read())
except Exception as e:
display.warning(u'The plugin filter file, {0} was not parsable.'
u' Skipping: {1}'.format(filter_cfg, to_text(e)))
return filters
try:
version = filter_data['filter_version']
except KeyError:
display.warning(u'The plugin filter file, {0} was invalid.'
u' Skipping.'.format(filter_cfg))
return filters
# Try to convert for people specifying version as a float instead of string
version = to_text(version)
version = version.strip()
# Modules and action plugins share the same reject list since the difference between the
# two isn't visible to the users
if version == u'1.0':
if 'module_blacklist' in filter_data:
display.deprecated("'module_blacklist' is being removed in favor of 'module_rejectlist'", version='2.18')
if 'module_rejectlist' not in filter_data:
filter_data['module_rejectlist'] = filter_data['module_blacklist']
del filter_data['module_blacklist']
try:
filters['ansible.modules'] = frozenset(filter_data['module_rejectlist'])
except TypeError:
display.warning(u'Unable to parse the plugin filter file {0} as'
u' module_rejectlist is not a list.'
u' Skipping.'.format(filter_cfg))
return filters
filters['ansible.plugins.action'] = filters['ansible.modules']
else:
display.warning(u'The plugin filter file, {0} was a version not recognized by this'
u' version of Ansible. Skipping.'.format(filter_cfg))
else:
if user_set:
display.warning(u'The plugin filter file, {0} does not exist.'
u' Skipping.'.format(filter_cfg))
    # Special-case the stat module as Ansible can run very few things if stat is rejected
if 'stat' in filters['ansible.modules']:
raise AnsibleError('The stat module was specified in the module reject list file, {0}, but'
' Ansible will not function without the stat module. Please remove stat'
' from the reject list.'.format(to_native(filter_cfg)))
return filters
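# Illustrative /etc/ansible/plugin_filters.yml matching the version '1.0' schema
# parsed above (module names below are hypothetical):
#   filter_version: '1.0'
#   module_rejectlist:
#     - some_unwanted_module
#     - another_unwanted_module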
# since we don't want the actual collection loader understanding metadata, we'll do it in an event handler
def _on_collection_load_handler(collection_name, collection_path):
display.vvvv(to_text('Loading collection {0} from {1}'.format(collection_name, collection_path)))
collection_meta = _get_collection_metadata(collection_name)
try:
if not _does_collection_support_ansible_version(collection_meta.get('requires_ansible', ''), ansible_version):
mismatch_behavior = C.config.get_config_value('COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH')
message = 'Collection {0} does not support Ansible version {1}'.format(collection_name, ansible_version)
if mismatch_behavior == 'warning':
display.warning(message)
elif mismatch_behavior == 'error':
raise AnsibleCollectionUnsupportedVersionError(message)
except AnsibleError:
raise
except Exception as ex:
display.warning('Error parsing collection metadata requires_ansible value from collection {0}: {1}'.format(collection_name, ex))
def _does_collection_support_ansible_version(requirement_string, ansible_version):
if not requirement_string:
return True
if not SpecifierSet:
display.warning('packaging Python module unavailable; unable to validate collection Ansible version requirements')
return True
ss = SpecifierSet(requirement_string)
# ignore prerelease/postrelease/beta/dev flags for simplicity
base_ansible_version = Version(ansible_version).base_version
return ss.contains(base_ansible_version)
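# Illustrative: a collection declaring `requires_ansible: '>=2.9.10'` in its
# meta/runtime.yml yields SpecifierSet('>=2.9.10'), which contains a base
# version such as '2.15.4', so that collection would be accepted.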
def _configure_collection_loader(prefix_collections_path=None):
if AnsibleCollectionConfig.collection_finder:
# this must be a Python warning so that it can be filtered out by the import sanity test
warnings.warn('AnsibleCollectionFinder has already been configured')
return
if prefix_collections_path is None:
prefix_collections_path = []
paths = list(prefix_collections_path) + C.COLLECTIONS_PATHS
finder = _AnsibleCollectionFinder(paths, C.COLLECTIONS_SCAN_SYS_PATH)
finder._install()
# this should succeed now
AnsibleCollectionConfig.on_collection_load += _on_collection_load_handler
def init_plugin_loader(prefix_collections_path=None):
"""Initialize the plugin filters and the collection loaders
This method must be called to configure and insert the collection python loaders
into ``sys.meta_path`` and ``sys.path_hooks``.
This method is only called in ``CLI.run`` after CLI args have been parsed, so that
instantiation of the collection finder can utilize parsed CLI args, and to not cause
side effects.
"""
_load_plugin_filter()
_configure_collection_loader(prefix_collections_path)
# TODO: Evaluate making these class instantiations lazy, but keep them in the global scope
# doc fragments first
fragment_loader = PluginLoader(
'ModuleDocFragment',
'ansible.plugins.doc_fragments',
C.DOC_FRAGMENT_PLUGIN_PATH,
'doc_fragments',
)
action_loader = PluginLoader(
'ActionModule',
'ansible.plugins.action',
C.DEFAULT_ACTION_PLUGIN_PATH,
'action_plugins',
required_base_class='ActionBase',
)
cache_loader = PluginLoader(
'CacheModule',
'ansible.plugins.cache',
C.DEFAULT_CACHE_PLUGIN_PATH,
'cache_plugins',
)
callback_loader = PluginLoader(
'CallbackModule',
'ansible.plugins.callback',
C.DEFAULT_CALLBACK_PLUGIN_PATH,
'callback_plugins',
)
connection_loader = PluginLoader(
'Connection',
'ansible.plugins.connection',
C.DEFAULT_CONNECTION_PLUGIN_PATH,
'connection_plugins',
aliases={'paramiko': 'paramiko_ssh'},
required_base_class='ConnectionBase',
)
shell_loader = PluginLoader(
'ShellModule',
'ansible.plugins.shell',
'shell_plugins',
'shell_plugins',
)
module_loader = PluginLoader(
'',
'ansible.modules',
C.DEFAULT_MODULE_PATH,
'library',
)
module_utils_loader = PluginLoader(
'',
'ansible.module_utils',
C.DEFAULT_MODULE_UTILS_PATH,
'module_utils',
)
# NB: dedicated loader is currently necessary because PS module_utils expects "with subdir" lookup where
# regular module_utils doesn't. This can be revisited once we have more granular loaders.
ps_module_utils_loader = PluginLoader(
'',
'ansible.module_utils',
C.DEFAULT_MODULE_UTILS_PATH,
'module_utils',
)
lookup_loader = PluginLoader(
'LookupModule',
'ansible.plugins.lookup',
C.DEFAULT_LOOKUP_PLUGIN_PATH,
'lookup_plugins',
required_base_class='LookupBase',
)
filter_loader = Jinja2Loader(
'FilterModule',
'ansible.plugins.filter',
C.DEFAULT_FILTER_PLUGIN_PATH,
'filter_plugins',
AnsibleJinja2Filter
)
test_loader = Jinja2Loader(
'TestModule',
'ansible.plugins.test',
C.DEFAULT_TEST_PLUGIN_PATH,
'test_plugins',
AnsibleJinja2Test
)
strategy_loader = PluginLoader(
'StrategyModule',
'ansible.plugins.strategy',
C.DEFAULT_STRATEGY_PLUGIN_PATH,
'strategy_plugins',
required_base_class='StrategyBase',
)
terminal_loader = PluginLoader(
'TerminalModule',
'ansible.plugins.terminal',
C.DEFAULT_TERMINAL_PLUGIN_PATH,
'terminal_plugins',
required_base_class='TerminalBase'
)
vars_loader = PluginLoader(
'VarsModule',
'ansible.plugins.vars',
C.DEFAULT_VARS_PLUGIN_PATH,
'vars_plugins',
)
cliconf_loader = PluginLoader(
'Cliconf',
'ansible.plugins.cliconf',
C.DEFAULT_CLICONF_PLUGIN_PATH,
'cliconf_plugins',
required_base_class='CliconfBase'
)
netconf_loader = PluginLoader(
'Netconf',
'ansible.plugins.netconf',
C.DEFAULT_NETCONF_PLUGIN_PATH,
'netconf_plugins',
required_base_class='NetconfBase'
)
inventory_loader = PluginLoader(
'InventoryModule',
'ansible.plugins.inventory',
C.DEFAULT_INVENTORY_PLUGIN_PATH,
'inventory_plugins'
)
httpapi_loader = PluginLoader(
'HttpApi',
'ansible.plugins.httpapi',
C.DEFAULT_HTTPAPI_PLUGIN_PATH,
'httpapi_plugins',
required_base_class='HttpApiBase',
)
become_loader = PluginLoader(
'BecomeModule',
'ansible.plugins.become',
C.BECOME_PLUGIN_PATH,
'become_plugins'
)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,561 |
Deprecation warning fails to state what is actually deprecated
|
### Summary
In my Ansible runs, I started seeing the following deprecation warning:
```
[DEPRECATION WARNING]: Use 'ansible.utils.ipmath' module instead. This feature
will be removed from ansible.netcommon in a release after 2024-01-01.
Deprecation warnings can be disabled by setting deprecation_warnings=False in
ansible.cfg.
```
The question is, _what_ should be replaced with "ansible.utils.ipmath"?
### Issue Type
Bug Report
### Component Name
deprecation warning
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.10]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/etc/ansible/library/usd']
ansible python module location = /var/home/ansiblectl/.local/lib/python3.9/site-packages/ansible
ansible collection location = /etc/ansible/collections:/var/home/ansiblectl/.ansible/collections:/usr/share/ansible/collections
executable location = /var/home/ansiblectl/.local/bin/ansible
python version = 3.9.13 (main, Nov 9 2022, 13:16:24) [GCC 8.5.0 20210514 (Red Hat 8.5.0-15)]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_NOCOWS(/etc/ansible/ansible.cfg) = True
CACHE_PLUGIN(/etc/ansible/ansible.cfg) = yaml
CACHE_PLUGIN_CONNECTION(env: ANSIBLE_CACHE_PLUGIN_CONNECTION) = /var/home/ansiblectl/.ansible_cache
CACHE_PLUGIN_PREFIX(/etc/ansible/ansible.cfg) = usd-
CACHE_PLUGIN_TIMEOUT(/etc/ansible/ansible.cfg) = 1800
COLLECTIONS_PATHS(/etc/ansible/ansible.cfg) = ['/etc/ansible/collections', '/var/home/ansiblectl/.ansible/collections', '/usr/share/ansible/collections']
DEFAULT_FORCE_HANDLERS(/etc/ansible/ansible.cfg) = True
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 100
DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = ['/etc/ansible/inventory/linux.yaml', '/etc/ansible/inventory/satellite.foreman.yml', '/etc/ansible/inventory/inventory.foreman.yml']
DEFAULT_JINJA2_EXTENSIONS(/etc/ansible/ansible.cfg) = jinja2.ext.do
DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible/ansible.log
DEFAULT_MANAGED_STR(/etc/ansible/ansible.cfg) = <redacted>
DEFAULT_MODULE_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/library/usd']
DEFAULT_REMOTE_USER(/etc/ansible/ansible.cfg) = svc-ansible
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/roles']
DEFAULT_TIMEOUT(/etc/ansible/ansible.cfg) = 300
DEFAULT_VAULT_PASSWORD_FILE(/etc/ansible/ansible.cfg) = <redacted>
DISPLAY_SKIPPED_HOSTS(/etc/ansible/ansible.cfg) = False
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
INTERPRETER_PYTHON(/etc/ansible/ansible.cfg) = auto
INVENTORY_ENABLED(/etc/ansible/ansible.cfg) = ['host_list', 'yaml', 'script', 'ini', 'theforeman.foreman.foreman']
BECOME:
======
sudo:
____
become_flags(/etc/ansible/ansible.cfg) = -H -E -S -n
CACHE:
=====
jsonfile:
________
_prefix(/etc/ansible/ansible.cfg) = usd-
_timeout(/etc/ansible/ansible.cfg) = 1800
_uri(env: ANSIBLE_CACHE_PLUGIN_CONNECTION) = /var/home/ansiblectl/.ansible_cache
CALLBACK:
========
default:
_______
display_skipped_hosts(/etc/ansible/ansible.cfg) = False
CLICONF:
=======
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/etc/ansible/ansible.cfg) = False
remote_user(/etc/ansible/ansible.cfg) = svc-ansible
ssh_args(/etc/ansible/ansible.cfg) = -C -o ControlMaster=auto -o ControlPersist=60s -o ForwardAgent=yes -o IdentityAgent=none
ssh:
___
host_key_checking(/etc/ansible/ansible.cfg) = False
pipelining(/etc/ansible/ansible.cfg) = True
reconnection_retries(/etc/ansible/ansible.cfg) = 3
remote_user(/etc/ansible/ansible.cfg) = svc-ansible
ssh_args(/etc/ansible/ansible.cfg) = -C -o ControlMaster=auto -o ControlPersist=60s -o ForwardAgent=yes -o IdentityAgent=none
timeout(/etc/ansible/ansible.cfg) = 300
HTTPAPI:
=======
INVENTORY:
=========
LOOKUP:
======
NETCONF:
=======
SHELL:
=====
VARS:
====
```
### OS / Environment
RHEL 8.7, Python 3.6.8, Ansible installed with pip.
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
Unknown - that's the problem!
### Expected Results
I expect the deprecation warning to indicate what statement is being deprecated similar to the modified deprecation warning below.
```
[DEPRECATION WARNING]: Use 'ansible.utils.ipmath' module instead **_of <INSERT MODULE OR CODE HERE>_**. This feature
will be removed from ansible.netcommon in a release after 2024-01-01.
Deprecation warnings can be disabled by setting deprecation_warnings=False in
ansible.cfg.
```
### Actual Results
```console
[DEPRECATION WARNING]: Use 'ansible.utils.ipmath' module instead. This feature
will be removed from ansible.netcommon in a release after 2024-01-01.
Deprecation warnings can be disabled by setting deprecation_warnings=False in
ansible.cfg.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80561
|
https://github.com/ansible/ansible/pull/81719
|
27bbff7c22663543bab0bf096f0b0a857ac4bcf7
|
4d4c50f856bf844ab47a08a2f64fc9697916b50f
| 2023-04-19T03:24:53Z |
python
| 2023-09-29T18:19:16Z |
test/integration/targets/collections/runme.sh
|
#!/usr/bin/env bash
set -eux
export ANSIBLE_COLLECTIONS_PATH=$PWD/collection_root_user:$PWD/collection_root_sys
export ANSIBLE_GATHERING=explicit
export ANSIBLE_GATHER_SUBSET=minimal
export ANSIBLE_HOST_PATTERN_MISMATCH=error
unset ANSIBLE_COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH
# ensure we can call collection module
ansible localhost -m testns.testcoll.testmodule
# ensure we can call collection module with ansible_collections in path
ANSIBLE_COLLECTIONS_PATH=$PWD/collection_root_sys/ansible_collections ansible localhost -m testns.testcoll.testmodule
echo "--- validating callbacks"
# validate FQ callbacks in ansible-playbook
ANSIBLE_CALLBACKS_ENABLED=testns.testcoll.usercallback ansible-playbook noop.yml | grep "usercallback says ok"
# use adhoc for the rest of these tests, must force it to load other callbacks
export ANSIBLE_LOAD_CALLBACK_PLUGINS=1
# validate redirected callback
ANSIBLE_CALLBACKS_ENABLED=formerly_core_callback ansible localhost -m debug 2>&1 | grep -- "usercallback says ok"
## validate missing redirected callback
ANSIBLE_CALLBACKS_ENABLED=formerly_core_missing_callback ansible localhost -m debug 2>&1 | grep -- "Skipping callback plugin 'formerly_core_missing_callback'"
## validate redirected + removed callback (fatal)
ANSIBLE_CALLBACKS_ENABLED=formerly_core_removed_callback ansible localhost -m debug 2>&1 | grep -- "testns.testcoll.removedcallback has been removed"
# validate avoiding duplicate loading of callback, even if using diff names
[ "$(ANSIBLE_CALLBACKS_ENABLED=testns.testcoll.usercallback,formerly_core_callback ansible localhost -m debug 2>&1 | grep -c 'usercallback says ok')" = "1" ]
# ensure non existing callback does not crash ansible
ANSIBLE_CALLBACKS_ENABLED=charlie.gomez.notme ansible localhost -m debug 2>&1 | grep -- "Skipping callback plugin 'charlie.gomez.notme'"
unset ANSIBLE_LOAD_CALLBACK_PLUGINS
# adhoc normally shouldn't load non-default plugins- let's be sure
output=$(ANSIBLE_CALLBACKS_ENABLED=testns.testcoll.usercallback ansible localhost -m debug)
if [[ "${output}" =~ "usercallback says ok" ]]; then echo fail; exit 1; fi
echo "--- validating docs"
# test documentation
ansible-doc testns.testcoll.testmodule -vvv | grep -- "- normal_doc_frag"
# same with symlink
ln -s "${PWD}/testcoll2" ./collection_root_sys/ansible_collections/testns/testcoll2
ansible-doc testns.testcoll2.testmodule2 -vvv | grep "Test module"
# now test we can list with symlink
ansible-doc -l -vvv| grep "testns.testcoll2.testmodule2"
echo "testing bad doc_fragments (expected ERROR message follows)"
# test documentation failure
ansible-doc testns.testcoll.testmodule_bad_docfrags -vvv 2>&1 | grep -- "unknown doc_fragment"
echo "--- validating default collection"
# test adhoc default collection resolution (use unqualified collection module with playbook dir under its collection)
echo "testing adhoc default collection support with explicit playbook dir"
ANSIBLE_PLAYBOOK_DIR=./collection_root_user/ansible_collections/testns/testcoll ansible localhost -m testmodule
# we need multiple plays, and conditional import_playbook is noisy and causes problems, so choose here which one to use...
if [[ ${INVENTORY_PATH} == *.winrm ]]; then
export TEST_PLAYBOOK=windows.yml
else
export TEST_PLAYBOOK=posix.yml
echo "testing default collection support"
ansible-playbook -i "${INVENTORY_PATH}" collection_root_user/ansible_collections/testns/testcoll/playbooks/default_collection_playbook.yml "$@"
fi
# test redirects and warnings for filter redirects
echo "testing redirect and deprecation display"
ANSIBLE_DEPRECATION_WARNINGS=yes ansible localhost -m debug -a msg='{{ "data" | testns.testredirect.multi_redirect_filter }}' -vvvvv 2>&1 | tee out.txt
cat out.txt
test "$(grep out.txt -ce 'deprecation1' -ce 'deprecation2' -ce 'deprecation3')" == 3
grep out.txt -e 'redirecting (type: filter) testns.testredirect.multi_redirect_filter to testns.testredirect.redirect_filter1'
grep out.txt -e 'redirecting (type: filter) testns.testredirect.redirect_filter1 to testns.testredirect.redirect_filter2'
grep out.txt -e 'redirecting (type: filter) testns.testredirect.redirect_filter2 to testns.testcoll.testfilter'
echo "--- validating collections support in playbooks/roles"
# run test playbooks
ansible-playbook -i "${INVENTORY_PATH}" -v "${TEST_PLAYBOOK}" "$@"
if [[ ${INVENTORY_PATH} != *.winrm ]]; then
ansible-playbook -i "${INVENTORY_PATH}" -v invocation_tests.yml "$@"
fi
echo "--- validating bypass_host_loop with collection search"
ansible-playbook -i host1,host2, -v test_bypass_host_loop.yml "$@"
echo "--- validating inventory"
# test collection inventories
ansible-playbook inventory_test.yml -i a.statichost.yml -i redirected.statichost.yml "$@"
if [[ ${INVENTORY_PATH} != *.winrm ]]; then
# run playbook from collection, test default again, but with FQCN
ansible-playbook -i "${INVENTORY_PATH}" testns.testcoll.default_collection_playbook.yml "$@"
# run playbook from collection, test default again, but with FQCN and no extension
ansible-playbook -i "${INVENTORY_PATH}" testns.testcoll.default_collection_playbook "$@"
# run playbook that imports from collection
ansible-playbook -i "${INVENTORY_PATH}" import_collection_pb.yml "$@"
fi
# test plugin loader redirect_list
ansible-playbook test_redirect_list.yml -v "$@"
# test ansiballz cache dupe
ansible-playbook ansiballz_dupe/test_ansiballz_cache_dupe_shortname.yml -v "$@"
# test adjacent with --playbook-dir
export ANSIBLE_COLLECTIONS_PATH=''
ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED=1 ansible-inventory --list --export --playbook-dir=. -v "$@"
# use an inventory source with caching enabled
ansible-playbook -i a.statichost.yml -i ./cache.statichost.yml -v check_populated_inventory.yml
# Check that the inventory source with caching enabled was stored
if [[ "$(find ./inventory_cache -type f ! -path "./inventory_cache/.keep" | wc -l)" -ne "1" ]]; then
echo "Failed to find the expected single cache"
exit 1
fi
CACHEFILE="$(find ./inventory_cache -type f ! -path './inventory_cache/.keep')"
if [[ $CACHEFILE != ./inventory_cache/prefix_* ]]; then
echo "Unexpected cache file"
exit 1
fi
# Check the cache for the expected hosts
if [[ "$(grep -wc "cache_host_a" "$CACHEFILE")" -ne "1" ]]; then
echo "Failed to cache host as expected"
exit 1
fi
if [[ "$(grep -wc "dynamic_host_a" "$CACHEFILE")" -ne "0" ]]; then
echo "Cached an incorrect source"
exit 1
fi
./vars_plugin_tests.sh
./test_task_resolved_plugin.sh
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,714 |
Remove deprecated JINJA2_NATIVE_WARNING.env.0
|
### Summary
The config option `JINJA2_NATIVE_WARNING.env.0` should be removed from `lib/ansible/config/base.yml`. It was scheduled for removal in 2.17.
### Issue Type
Bug Report
### Component Name
`lib/ansible/config/base.yml`
### Ansible Version
2.17
### Configuration
N/A
### OS / Environment
N/A
### Steps to Reproduce
N/A
### Expected Results
N/A
### Actual Results
N/A
|
https://github.com/ansible/ansible/issues/81714
|
https://github.com/ansible/ansible/pull/81720
|
ab6a544e8626eb6767e9578d63b41313f287c796
|
e756e359e0c1946fe5a6e9059a3108d20e32440d
| 2023-09-18T20:49:14Z |
python
| 2023-10-02T19:57:17Z |
changelogs/fragments/81714-remove-deprecated-jinja2_native_warning.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,714 |
Remove deprecated JINJA2_NATIVE_WARNING.env.0
|
### Summary
The config option `JINJA2_NATIVE_WARNING.env.0` should be removed from `lib/ansible/config/base.yml`. It was scheduled for removal in 2.17.
### Issue Type
Bug Report
### Component Name
`lib/ansible/config/base.yml`
### Ansible Version
2.17
### Configuration
N/A
### OS / Environment
N/A
### Steps to Reproduce
N/A
### Expected Results
N/A
### Actual Results
N/A
|
https://github.com/ansible/ansible/issues/81714
|
https://github.com/ansible/ansible/pull/81720
|
ab6a544e8626eb6767e9578d63b41313f287c796
|
e756e359e0c1946fe5a6e9059a3108d20e32440d
| 2023-09-18T20:49:14Z |
python
| 2023-10-02T19:57:17Z |
lib/ansible/config/base.yml
|
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
---
ANSIBLE_HOME:
name: The Ansible home path
description:
- The default root path for Ansible config files on the controller.
default: ~/.ansible
env:
- name: ANSIBLE_HOME
ini:
- key: home
section: defaults
type: path
version_added: '2.14'
ANSIBLE_CONNECTION_PATH:
name: Path of ansible-connection script
default: null
description:
- Specify where to look for the ansible-connection script. This location will be checked before searching $PATH.
- If null, ansible will start with the same directory as the ansible script.
type: path
env: [{name: ANSIBLE_CONNECTION_PATH}]
ini:
- {key: ansible_connection_path, section: persistent_connection}
yaml: {key: persistent_connection.ansible_connection_path}
version_added: "2.8"
ANSIBLE_COW_SELECTION:
name: Cowsay filter selection
default: default
description: This allows you to choose a specific cowsay stencil for the banners or use 'random' to cycle through them.
env: [{name: ANSIBLE_COW_SELECTION}]
ini:
- {key: cow_selection, section: defaults}
ANSIBLE_COW_ACCEPTLIST:
name: Cowsay filter acceptance list
default: ['bud-frogs', 'bunny', 'cheese', 'daemon', 'default', 'dragon', 'elephant-in-snake', 'elephant', 'eyes', 'hellokitty', 'kitty', 'luke-koala', 'meow', 'milk', 'moofasa', 'moose', 'ren', 'sheep', 'small', 'stegosaurus', 'stimpy', 'supermilker', 'three-eyes', 'turkey', 'turtle', 'tux', 'udder', 'vader-koala', 'vader', 'www']
description: Accept a list of cowsay templates that are 'safe' to use, set to an empty list if you want to enable all installed templates.
env:
- name: ANSIBLE_COW_ACCEPTLIST
version_added: '2.11'
ini:
- key: cowsay_enabled_stencils
section: defaults
version_added: '2.11'
type: list
ANSIBLE_FORCE_COLOR:
name: Force color output
default: False
description: This option forces color mode even when running without a TTY or the "nocolor" setting is True.
env: [{name: ANSIBLE_FORCE_COLOR}]
ini:
- {key: force_color, section: defaults}
type: boolean
yaml: {key: display.force_color}
ANSIBLE_NOCOLOR:
name: Suppress color output
default: False
description: This setting allows suppressing colorizing output, which is used to give a better indication of failure and status information.
env:
- name: ANSIBLE_NOCOLOR
# this is generic convention for CLI programs
- name: NO_COLOR
version_added: '2.11'
ini:
- {key: nocolor, section: defaults}
type: boolean
yaml: {key: display.nocolor}
ANSIBLE_NOCOWS:
name: Suppress cowsay output
default: False
description: If you have cowsay installed but want to avoid the 'cows' (why????), use this.
env: [{name: ANSIBLE_NOCOWS}]
ini:
- {key: nocows, section: defaults}
type: boolean
yaml: {key: display.i_am_no_fun}
ANSIBLE_COW_PATH:
name: Set path to cowsay command
default: null
description: Specify a custom cowsay path or swap in your cowsay implementation of choice.
env: [{name: ANSIBLE_COW_PATH}]
ini:
- {key: cowpath, section: defaults}
type: string
yaml: {key: display.cowpath}
ANSIBLE_PIPELINING:
name: Connection pipelining
default: False
description:
- This is a global option, each connection plugin can override either by having more specific options or not supporting pipelining at all.
- Pipelining, if supported by the connection plugin, reduces the number of network operations required to execute a module on the remote server,
by executing many Ansible modules without actual file transfer.
- It can result in a very significant performance improvement when enabled.
- "However this conflicts with privilege escalation (become). For example, when using 'sudo:' operations you must first
disable 'requiretty' in /etc/sudoers on all managed hosts, which is why it is disabled by default."
- This setting will be disabled if ``ANSIBLE_KEEP_REMOTE_FILES`` is enabled.
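  # Illustrative ansible.cfg snippet (an assumption, not part of this file's schema):
  #   [connection]
  #   pipelining = True
  # Note: sudo-based become on hosts enforcing 'requiretty' will fail with
  # pipelining until 'Defaults !requiretty' (or equivalent) is set in sudoers.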
env:
- name: ANSIBLE_PIPELINING
ini:
- section: defaults
key: pipelining
- section: connection
key: pipelining
type: boolean
ANY_ERRORS_FATAL:
name: Make Task failures fatal
default: False
description: Sets the default value for the any_errors_fatal keyword, if True, Task failures will be considered fatal errors.
env:
- name: ANSIBLE_ANY_ERRORS_FATAL
ini:
- section: defaults
key: any_errors_fatal
type: boolean
yaml: {key: errors.any_task_errors_fatal}
version_added: "2.4"
BECOME_ALLOW_SAME_USER:
name: Allow becoming the same user
default: False
description:
    - This setting controls if become is skipped when the remote user and become user are the same. In other words, root sudo to root.
env: [{name: ANSIBLE_BECOME_ALLOW_SAME_USER}]
ini:
- {key: become_allow_same_user, section: privilege_escalation}
type: boolean
yaml: {key: privilege_escalation.become_allow_same_user}
BECOME_PASSWORD_FILE:
name: Become password file
default: ~
description:
- 'The password file to use for the become plugin. ``--become-password-file``.'
- If executable, it will be run and the resulting stdout will be used as the password.
env: [{name: ANSIBLE_BECOME_PASSWORD_FILE}]
ini:
- {key: become_password_file, section: defaults}
type: path
version_added: '2.12'
AGNOSTIC_BECOME_PROMPT:
name: Display an agnostic become prompt
default: True
type: boolean
description: Display an agnostic become prompt instead of displaying a prompt containing the command line supplied become method.
env: [{name: ANSIBLE_AGNOSTIC_BECOME_PROMPT}]
ini:
- {key: agnostic_become_prompt, section: privilege_escalation}
yaml: {key: privilege_escalation.agnostic_become_prompt}
version_added: "2.5"
CACHE_PLUGIN:
name: Persistent Cache plugin
default: memory
description: Chooses which cache plugin to use, the default 'memory' is ephemeral.
env: [{name: ANSIBLE_CACHE_PLUGIN}]
ini:
- {key: fact_caching, section: defaults}
yaml: {key: facts.cache.plugin}
CACHE_PLUGIN_CONNECTION:
name: Cache Plugin URI
default: ~
description: Defines connection or path information for the cache plugin.
env: [{name: ANSIBLE_CACHE_PLUGIN_CONNECTION}]
ini:
- {key: fact_caching_connection, section: defaults}
yaml: {key: facts.cache.uri}
CACHE_PLUGIN_PREFIX:
name: Cache Plugin table prefix
default: ansible_facts
description: Prefix to use for cache plugin files/tables.
env: [{name: ANSIBLE_CACHE_PLUGIN_PREFIX}]
ini:
- {key: fact_caching_prefix, section: defaults}
yaml: {key: facts.cache.prefix}
CACHE_PLUGIN_TIMEOUT:
name: Cache Plugin expiration timeout
default: 86400
description: Expiration timeout for the cache plugin data.
env: [{name: ANSIBLE_CACHE_PLUGIN_TIMEOUT}]
ini:
- {key: fact_caching_timeout, section: defaults}
type: integer
yaml: {key: facts.cache.timeout}
COLLECTIONS_SCAN_SYS_PATH:
name: Scan PYTHONPATH for installed collections
description: A boolean to enable or disable scanning the sys.path for installed collections.
default: true
type: boolean
env:
- {name: ANSIBLE_COLLECTIONS_SCAN_SYS_PATH}
ini:
- {key: collections_scan_sys_path, section: defaults}
COLLECTIONS_PATHS:
name: An ordered list of root paths for loading installed Ansible collections content.
description: >
Colon-separated paths in which Ansible will search for collections content.
Collections must be in nested *subdirectories*, not directly in these directories.
For example, if ``COLLECTIONS_PATHS`` includes ``'{{ ANSIBLE_HOME ~ "/collections" }}'``,
and you want to add ``my.collection`` to that directory, it must be saved as
    ``'{{ ANSIBLE_HOME ~ "/collections/ansible_collections/my/collection" }}'``.
default: '{{ ANSIBLE_HOME ~ "/collections:/usr/share/ansible/collections" }}'
type: pathspec
env:
- name: ANSIBLE_COLLECTIONS_PATHS
deprecated:
why: does not fit var naming standard, use the singular form ANSIBLE_COLLECTIONS_PATH instead
version: "2.19"
- name: ANSIBLE_COLLECTIONS_PATH
version_added: '2.10'
ini:
- key: collections_paths
section: defaults
deprecated:
why: does not fit var naming standard, use the singular form collections_path instead
version: "2.19"
- key: collections_path
section: defaults
version_added: '2.10'
COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH:
name: Defines behavior when loading a collection that does not support the current Ansible version
description:
- When a collection is loaded that does not support the running Ansible version (with the collection metadata key `requires_ansible`).
env: [{name: ANSIBLE_COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH}]
ini: [{key: collections_on_ansible_version_mismatch, section: defaults}]
choices: &basic_error
error: issue a 'fatal' error and stop the play
warning: issue a warning but continue
ignore: just continue silently
default: warning
COLOR_CHANGED:
name: Color for 'changed' task status
default: yellow
description: Defines the color to use on 'Changed' task status.
env: [{name: ANSIBLE_COLOR_CHANGED}]
ini:
- {key: changed, section: colors}
COLOR_CONSOLE_PROMPT:
name: "Color for ansible-console's prompt task status"
default: white
description: Defines the default color to use for ansible-console.
env: [{name: ANSIBLE_COLOR_CONSOLE_PROMPT}]
ini:
- {key: console_prompt, section: colors}
version_added: "2.7"
COLOR_DEBUG:
name: Color for debug statements
default: dark gray
description: Defines the color to use when emitting debug messages.
env: [{name: ANSIBLE_COLOR_DEBUG}]
ini:
- {key: debug, section: colors}
COLOR_DEPRECATE:
name: Color for deprecation messages
default: purple
description: Defines the color to use when emitting deprecation messages.
env: [{name: ANSIBLE_COLOR_DEPRECATE}]
ini:
- {key: deprecate, section: colors}
COLOR_DIFF_ADD:
name: Color for diff added display
default: green
description: Defines the color to use when showing added lines in diffs.
env: [{name: ANSIBLE_COLOR_DIFF_ADD}]
ini:
- {key: diff_add, section: colors}
yaml: {key: display.colors.diff.add}
COLOR_DIFF_LINES:
name: Color for diff lines display
default: cyan
description: Defines the color to use when showing diffs.
env: [{name: ANSIBLE_COLOR_DIFF_LINES}]
ini:
- {key: diff_lines, section: colors}
COLOR_DIFF_REMOVE:
name: Color for diff removed display
default: red
description: Defines the color to use when showing removed lines in diffs.
env: [{name: ANSIBLE_COLOR_DIFF_REMOVE}]
ini:
- {key: diff_remove, section: colors}
COLOR_ERROR:
name: Color for error messages
default: red
description: Defines the color to use when emitting error messages.
env: [{name: ANSIBLE_COLOR_ERROR}]
ini:
- {key: error, section: colors}
yaml: {key: colors.error}
COLOR_HIGHLIGHT:
name: Color for highlighting
default: white
description: Defines the color to use for highlighting.
env: [{name: ANSIBLE_COLOR_HIGHLIGHT}]
ini:
- {key: highlight, section: colors}
COLOR_OK:
name: Color for 'ok' task status
default: green
description: Defines the color to use when showing 'OK' task status.
env: [{name: ANSIBLE_COLOR_OK}]
ini:
- {key: ok, section: colors}
COLOR_SKIP:
name: Color for 'skip' task status
default: cyan
description: Defines the color to use when showing 'Skipped' task status.
env: [{name: ANSIBLE_COLOR_SKIP}]
ini:
- {key: skip, section: colors}
COLOR_UNREACHABLE:
name: Color for 'unreachable' host state
default: bright red
description: Defines the color to use on 'Unreachable' status.
env: [{name: ANSIBLE_COLOR_UNREACHABLE}]
ini:
- {key: unreachable, section: colors}
COLOR_VERBOSE:
name: Color for verbose messages
default: blue
description: Defines the color to use when emitting verbose messages. In other words, those that show with '-v's.
env: [{name: ANSIBLE_COLOR_VERBOSE}]
ini:
- {key: verbose, section: colors}
COLOR_WARN:
name: Color for warning messages
default: bright purple
description: Defines the color to use when emitting warning messages.
env: [{name: ANSIBLE_COLOR_WARN}]
ini:
- {key: warn, section: colors}
CONNECTION_PASSWORD_FILE:
name: Connection password file
default: ~
description: 'The password file to use for the connection plugin. ``--connection-password-file``.'
env: [{name: ANSIBLE_CONNECTION_PASSWORD_FILE}]
ini:
- {key: connection_password_file, section: defaults}
type: path
version_added: '2.12'
COVERAGE_REMOTE_OUTPUT:
name: Sets the output directory and filename prefix to generate coverage run info.
description:
- Sets the output directory on the remote host to generate coverage reports into.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_OUTPUT}
vars:
- {name: _ansible_coverage_remote_output}
type: str
version_added: '2.9'
COVERAGE_REMOTE_PATHS:
name: Sets the list of paths to run coverage for.
description:
- A list of paths for files on the Ansible controller to run coverage for when executing on the remote host.
- Only files that match the path glob will have their coverage collected.
- Multiple path globs can be specified and are separated by ``:``.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
default: '*'
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_PATH_FILTER}
type: str
version_added: '2.9'
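  # Example value (hypothetical): '*/module_utils/*:*/modules/*' would limit
  # coverage collection to files matching either glob, using the ':'-separated
  # format described above.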
ACTION_WARNINGS:
name: Toggle action warnings
default: True
description:
    - By default, Ansible will issue a warning when it receives one from a task action (module or action plugin).
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_ACTION_WARNINGS}]
ini:
- {key: action_warnings, section: defaults}
type: boolean
version_added: "2.5"
LOCALHOST_WARNING:
name: Warning when using implicit inventory with only localhost
default: True
description:
- By default, Ansible will issue a warning when there are no hosts in the
inventory.
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_LOCALHOST_WARNING}]
ini:
- {key: localhost_warning, section: defaults}
type: boolean
version_added: "2.6"
INVENTORY_UNPARSED_WARNING:
name: Warning when no inventory files can be parsed, resulting in an implicit inventory with only localhost
default: True
description:
- By default, Ansible will issue a warning when no inventory was loaded and notes that
it will use an implicit localhost-only inventory.
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_INVENTORY_UNPARSED_WARNING}]
ini:
- {key: inventory_unparsed_warning, section: inventory}
type: boolean
version_added: "2.14"
DOC_FRAGMENT_PLUGIN_PATH:
name: documentation fragment plugins path
default: '{{ ANSIBLE_HOME ~ "/plugins/doc_fragments:/usr/share/ansible/plugins/doc_fragments" }}'
description: Colon-separated paths in which Ansible will search for Documentation Fragments Plugins.
env: [{name: ANSIBLE_DOC_FRAGMENT_PLUGINS}]
ini:
- {key: doc_fragment_plugins, section: defaults}
type: pathspec
DEFAULT_ACTION_PLUGIN_PATH:
name: Action plugins path
default: '{{ ANSIBLE_HOME ~ "/plugins/action:/usr/share/ansible/plugins/action" }}'
description: Colon-separated paths in which Ansible will search for Action Plugins.
env: [{name: ANSIBLE_ACTION_PLUGINS}]
ini:
- {key: action_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.action.path}
DEFAULT_ALLOW_UNSAFE_LOOKUPS:
name: Allow unsafe lookups
default: False
description:
- "When enabled, this option allows lookup plugins (whether used in variables as ``{{lookup('foo')}}`` or as a loop as with_foo)
to return data that is not marked 'unsafe'."
- By default, such data is marked as unsafe to prevent the templating engine from evaluating any jinja2 templating language,
as this could represent a security risk. This option is provided to allow for backward compatibility,
however, users should first consider adding allow_unsafe=True to any lookups that may be expected to contain data that may be run
through the templating engine late.
env: []
ini:
- {key: allow_unsafe_lookups, section: defaults}
type: boolean
version_added: "2.2.3"
DEFAULT_ASK_PASS:
name: Ask for the login password
default: False
description:
- This controls whether an Ansible playbook should prompt for a login password.
If using SSH keys for authentication, you probably do not need to change this setting.
env: [{name: ANSIBLE_ASK_PASS}]
ini:
- {key: ask_pass, section: defaults}
type: boolean
yaml: {key: defaults.ask_pass}
DEFAULT_ASK_VAULT_PASS:
name: Ask for the vault password(s)
default: False
description:
- This controls whether an Ansible playbook should prompt for a vault password.
env: [{name: ANSIBLE_ASK_VAULT_PASS}]
ini:
- {key: ask_vault_pass, section: defaults}
type: boolean
DEFAULT_BECOME:
name: Enable privilege escalation (become)
default: False
description: Toggles the use of privilege escalation, allowing you to 'become' another user after login.
env: [{name: ANSIBLE_BECOME}]
ini:
- {key: become, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_ASK_PASS:
name: Ask for the privilege escalation (become) password
default: False
description: Toggle to prompt for privilege escalation password.
env: [{name: ANSIBLE_BECOME_ASK_PASS}]
ini:
- {key: become_ask_pass, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_METHOD:
name: Choose privilege escalation method
default: 'sudo'
description: Privilege escalation method to use when `become` is enabled.
env: [{name: ANSIBLE_BECOME_METHOD}]
ini:
- {section: privilege_escalation, key: become_method}
DEFAULT_BECOME_EXE:
name: Choose 'become' executable
default: ~
description: 'executable to use for privilege escalation, otherwise Ansible will depend on PATH.'
env: [{name: ANSIBLE_BECOME_EXE}]
ini:
- {key: become_exe, section: privilege_escalation}
DEFAULT_BECOME_FLAGS:
name: Set 'become' executable options
default: ''
description: Flags to pass to the privilege escalation executable.
env: [{name: ANSIBLE_BECOME_FLAGS}]
ini:
- {key: become_flags, section: privilege_escalation}
BECOME_PLUGIN_PATH:
name: Become plugins path
default: '{{ ANSIBLE_HOME ~ "/plugins/become:/usr/share/ansible/plugins/become" }}'
description: Colon-separated paths in which Ansible will search for Become Plugins.
env: [{name: ANSIBLE_BECOME_PLUGINS}]
ini:
- {key: become_plugins, section: defaults}
type: pathspec
version_added: "2.8"
DEFAULT_BECOME_USER:
# FIXME: should really be blank and make -u passing optional depending on it
name: Set the user you 'become' via privilege escalation
default: root
description: The user your login/remote user 'becomes' when using privilege escalation, most systems will use 'root' when no user is specified.
env: [{name: ANSIBLE_BECOME_USER}]
ini:
- {key: become_user, section: privilege_escalation}
yaml: {key: become.user}
DEFAULT_CACHE_PLUGIN_PATH:
name: Cache Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/cache:/usr/share/ansible/plugins/cache" }}'
description: Colon-separated paths in which Ansible will search for Cache Plugins.
env: [{name: ANSIBLE_CACHE_PLUGINS}]
ini:
- {key: cache_plugins, section: defaults}
type: pathspec
DEFAULT_CALLBACK_PLUGIN_PATH:
name: Callback Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/callback:/usr/share/ansible/plugins/callback" }}'
description: Colon-separated paths in which Ansible will search for Callback Plugins.
env: [{name: ANSIBLE_CALLBACK_PLUGINS}]
ini:
- {key: callback_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.callback.path}
CALLBACKS_ENABLED:
name: Enable callback plugins that require it.
default: []
description:
- "List of enabled callbacks, not all callbacks need enabling,
but many of those shipped with Ansible do as we don't want them activated by default."
env:
- name: ANSIBLE_CALLBACKS_ENABLED
version_added: '2.11'
ini:
- key: callbacks_enabled
section: defaults
version_added: '2.11'
type: list
DEFAULT_CLICONF_PLUGIN_PATH:
name: Cliconf Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/cliconf:/usr/share/ansible/plugins/cliconf" }}'
description: Colon-separated paths in which Ansible will search for Cliconf Plugins.
env: [{name: ANSIBLE_CLICONF_PLUGINS}]
ini:
- {key: cliconf_plugins, section: defaults}
type: pathspec
DEFAULT_CONNECTION_PLUGIN_PATH:
name: Connection Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/connection:/usr/share/ansible/plugins/connection" }}'
description: Colon-separated paths in which Ansible will search for Connection Plugins.
env: [{name: ANSIBLE_CONNECTION_PLUGINS}]
ini:
- {key: connection_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.connection.path}
DEFAULT_DEBUG:
name: Debug mode
default: False
description:
- "Toggles debug output in Ansible. This is *very* verbose and can hinder
multiprocessing. Debug output can also include secret information
despite no_log settings being enabled, which means debug mode should not be used in
production."
env: [{name: ANSIBLE_DEBUG}]
ini:
- {key: debug, section: defaults}
type: boolean
DEFAULT_EXECUTABLE:
name: Target shell executable
default: /bin/sh
description:
- "This indicates the command to use to spawn a shell under, which is required for Ansible's execution needs on a target.
Users may need to change this in rare instances when shell usage is constrained, but in most cases, it may be left as is."
env: [{name: ANSIBLE_EXECUTABLE}]
ini:
- {key: executable, section: defaults}
DEFAULT_FACT_PATH:
name: local fact path
description:
- "This option allows you to globally configure a custom path for 'local_facts' for the implied :ref:`ansible_collections.ansible.builtin.setup_module` task when using fact gathering."
- "If not set, it will fall back to the default from the ``ansible.builtin.setup`` module: ``/etc/ansible/facts.d``."
- "This does **not** affect user defined tasks that use the ``ansible.builtin.setup`` module."
- The real action being created by the implicit task is currently ``ansible.legacy.gather_facts`` module, which then calls the configured fact modules,
by default this will be ``ansible.builtin.setup`` for POSIX systems but other platforms might have different defaults.
env: [{name: ANSIBLE_FACT_PATH}]
ini:
- {key: fact_path, section: defaults}
type: string
deprecated:
# TODO: when removing set playbook/play.py to default=None
why: the module_defaults keyword is a more generic version and can apply to all calls to the
M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
version: "2.18"
alternatives: module_defaults
DEFAULT_FILTER_PLUGIN_PATH:
name: Jinja2 Filter Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/filter:/usr/share/ansible/plugins/filter" }}'
description: Colon-separated paths in which Ansible will search for Jinja2 Filter Plugins.
env: [{name: ANSIBLE_FILTER_PLUGINS}]
ini:
- {key: filter_plugins, section: defaults}
type: pathspec
DEFAULT_FORCE_HANDLERS:
name: Force handlers to run after failure
default: False
description:
- This option controls if notified handlers run on a host even if a failure occurs on that host.
- When false, the handlers will not run if a failure has occurred on a host.
- This can also be set per play or on the command line. See Handlers and Failure for more details.
env: [{name: ANSIBLE_FORCE_HANDLERS}]
ini:
- {key: force_handlers, section: defaults}
type: boolean
version_added: "1.9.1"
DEFAULT_FORKS:
name: Number of task forks
default: 5
description: Maximum number of forks Ansible will use to execute tasks on target hosts.
env: [{name: ANSIBLE_FORKS}]
ini:
- {key: forks, section: defaults}
type: integer
DEFAULT_GATHERING:
name: Gathering behaviour
default: 'implicit'
description:
- This setting controls the default policy of fact gathering (facts discovered about remote systems).
- "This option can be useful for those wishing to save fact gathering time. Both 'smart' and 'explicit' will use the cache plugin."
env: [{name: ANSIBLE_GATHERING}]
ini:
- key: gathering
section: defaults
version_added: "1.6"
choices:
implicit: "the cache plugin will be ignored and facts will be gathered per play unless 'gather_facts: False' is set."
explicit: facts will not be gathered unless directly requested in the play.
smart: each new host that has no facts discovered will be scanned, but if the same host is addressed in multiple plays it will not be contacted again in the run.
DEFAULT_GATHER_SUBSET:
name: Gather facts subset
description:
- Set the `gather_subset` option for the :ref:`ansible_collections.ansible.builtin.setup_module` task in the implicit fact gathering.
See the module documentation for specifics.
- "It does **not** apply to user defined ``ansible.builtin.setup`` tasks."
env: [{name: ANSIBLE_GATHER_SUBSET}]
ini:
- key: gather_subset
section: defaults
version_added: "2.1"
type: list
deprecated:
# TODO: when removing set playbook/play.py to default=None
why: the module_defaults keyword is a more generic version and can apply to all calls to the
M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
version: "2.18"
alternatives: module_defaults
DEFAULT_GATHER_TIMEOUT:
name: Gather facts timeout
description:
- Set the timeout in seconds for the implicit fact gathering, see the module documentation for specifics.
- "It does **not** apply to user defined :ref:`ansible_collections.ansible.builtin.setup_module` tasks."
env: [{name: ANSIBLE_GATHER_TIMEOUT}]
ini:
- {key: gather_timeout, section: defaults}
type: integer
deprecated:
# TODO: when removing set playbook/play.py to default=None
why: the module_defaults keyword is a more generic version and can apply to all calls to the
M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
version: "2.18"
alternatives: module_defaults
DEFAULT_HASH_BEHAVIOUR:
name: Hash merge behaviour
default: replace
type: string
choices:
replace: Any variable that is defined more than once is overwritten using the order from variable precedence rules (highest wins).
merge: Any dictionary variable will be recursively merged with new definitions across the different variable definition sources.
description:
- This setting controls how duplicate definitions of dictionary variables (aka hash, map, associative array) are handled in Ansible.
- This does not affect variables whose values are scalars (integers, strings) or arrays.
- "**WARNING**, changing this setting is not recommended as this is fragile and makes your content (plays, roles, collections) nonportable,
leading to continual confusion and misuse. Don't change this setting unless you think you have an absolute need for it."
- We recommend avoiding reusing variable names and relying on the ``combine`` filter and ``vars`` and ``varnames`` lookups
to create merged versions of the individual variables. In our experience, this is rarely needed and is a sign that too much
complexity has been introduced into the data structures and plays.
- For some uses you can also look into custom vars_plugins to merge on input, even substituting the default ``host_group_vars``
that is in charge of parsing the ``host_vars/`` and ``group_vars/`` directories. Most users of this setting are only interested in inventory scope,
but the setting itself affects all sources and makes debugging even harder.
- All playbooks and roles in the official examples repos assume the default for this setting.
- Changing the setting to ``merge`` applies across variable sources, but many sources will internally still overwrite the variables.
For example ``include_vars`` will dedupe variables internally before updating Ansible, with 'last defined' overwriting previous definitions in same file.
- The Ansible project recommends you **avoid ``merge`` for new projects.**
    - It is the intention of the Ansible developers to eventually deprecate and remove this setting, but it is being kept as some users do heavily rely on it.
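  # Worked example (hypothetical variables): if group_vars define
  #   my_conf: {a: 1, b: 2}
  # and host_vars define
  #   my_conf: {b: 3}
  # then 'replace' resolves my_conf to {b: 3} (host_vars win outright), while
  # 'merge' recursively combines the definitions into {a: 1, b: 3}.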
env: [{name: ANSIBLE_HASH_BEHAVIOUR}]
ini:
- {key: hash_behaviour, section: defaults}
DEFAULT_HOST_LIST:
name: Inventory Source
default: /etc/ansible/hosts
description: Comma-separated list of Ansible inventory sources
env:
- name: ANSIBLE_INVENTORY
expand_relative_paths: True
ini:
- key: inventory
section: defaults
type: pathlist
yaml: {key: defaults.inventory}
DEFAULT_HTTPAPI_PLUGIN_PATH:
name: HttpApi Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/httpapi:/usr/share/ansible/plugins/httpapi" }}'
description: Colon-separated paths in which Ansible will search for HttpApi Plugins.
env: [{name: ANSIBLE_HTTPAPI_PLUGINS}]
ini:
- {key: httpapi_plugins, section: defaults}
type: pathspec
DEFAULT_INTERNAL_POLL_INTERVAL:
name: Internal poll interval
default: 0.001
env: []
ini:
- {key: internal_poll_interval, section: defaults}
type: float
version_added: "2.2"
description:
- This sets the interval (in seconds) of Ansible internal processes polling each other.
Lower values improve performance with large playbooks at the expense of extra CPU load.
Higher values are more suitable for Ansible usage in automation scenarios
when UI responsiveness is not required but CPU usage might be a concern.
- "The default corresponds to the value hardcoded in Ansible <= 2.1"
DEFAULT_INVENTORY_PLUGIN_PATH:
name: Inventory Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/inventory:/usr/share/ansible/plugins/inventory" }}'
description: Colon-separated paths in which Ansible will search for Inventory Plugins.
env: [{name: ANSIBLE_INVENTORY_PLUGINS}]
ini:
- {key: inventory_plugins, section: defaults}
type: pathspec
DEFAULT_JINJA2_EXTENSIONS:
name: Enabled Jinja2 extensions
default: []
description:
- This is a developer-specific feature that allows enabling additional Jinja2 extensions.
- "See the Jinja2 documentation for details. If you do not know what these do, you probably don't need to change this setting :)"
env: [{name: ANSIBLE_JINJA2_EXTENSIONS}]
ini:
- {key: jinja2_extensions, section: defaults}
DEFAULT_JINJA2_NATIVE:
name: Use Jinja2's NativeEnvironment for templating
default: False
description: This option preserves variable types during template operations.
env: [{name: ANSIBLE_JINJA2_NATIVE}]
ini:
- {key: jinja2_native, section: defaults}
type: boolean
yaml: {key: jinja2_native}
version_added: 2.7
DEFAULT_KEEP_REMOTE_FILES:
name: Keep remote files
default: False
description:
- Enables/disables the cleaning up of the temporary files Ansible used to execute the tasks on the remote.
- If this option is enabled it will disable ``ANSIBLE_PIPELINING``.
env: [{name: ANSIBLE_KEEP_REMOTE_FILES}]
ini:
- {key: keep_remote_files, section: defaults}
type: boolean
DEFAULT_LIBVIRT_LXC_NOSECLABEL:
# TODO: move to plugin
name: No security label on Lxc
default: False
description:
- "This setting causes libvirt to connect to LXC containers by passing ``--noseclabel`` parameter to ``virsh`` command.
This is necessary when running on systems which do not have SELinux."
env:
- name: ANSIBLE_LIBVIRT_LXC_NOSECLABEL
ini:
- {key: libvirt_lxc_noseclabel, section: selinux}
type: boolean
version_added: "2.1"
DEFAULT_LOAD_CALLBACK_PLUGINS:
name: Load callbacks for adhoc
default: False
description:
- Controls whether callback plugins are loaded when running /usr/bin/ansible.
This may be used to log activity from the command line, send notifications, and so on.
Callback plugins are always loaded for ``ansible-playbook``.
env: [{name: ANSIBLE_LOAD_CALLBACK_PLUGINS}]
ini:
- {key: bin_ansible_callbacks, section: defaults}
type: boolean
version_added: "1.8"
DEFAULT_LOCAL_TMP:
name: Controller temporary directory
default: '{{ ANSIBLE_HOME ~ "/tmp" }}'
description: Temporary directory for Ansible to use on the controller.
env: [{name: ANSIBLE_LOCAL_TEMP}]
ini:
- {key: local_tmp, section: defaults}
type: tmppath
DEFAULT_LOG_PATH:
name: Ansible log file path
default: ~
  description: File to which Ansible will log on the controller. When empty, logging is disabled.
env: [{name: ANSIBLE_LOG_PATH}]
ini:
- {key: log_path, section: defaults}
type: path
DEFAULT_LOG_FILTER:
name: Name filters for python logger
default: []
description: List of logger names to filter out of the log file.
env: [{name: ANSIBLE_LOG_FILTER}]
ini:
- {key: log_filter, section: defaults}
type: list
DEFAULT_LOOKUP_PLUGIN_PATH:
name: Lookup Plugins Path
description: Colon-separated paths in which Ansible will search for Lookup Plugins.
default: '{{ ANSIBLE_HOME ~ "/plugins/lookup:/usr/share/ansible/plugins/lookup" }}'
env: [{name: ANSIBLE_LOOKUP_PLUGINS}]
ini:
- {key: lookup_plugins, section: defaults}
type: pathspec
yaml: {key: defaults.lookup_plugins}
DEFAULT_MANAGED_STR:
name: Ansible managed
default: 'Ansible managed'
description: Sets the macro for the 'ansible_managed' variable available for :ref:`ansible_collections.ansible.builtin.template_module` and :ref:`ansible_collections.ansible.windows.win_template_module`. This is only relevant to those two modules.
env: []
ini:
- {key: ansible_managed, section: defaults}
yaml: {key: defaults.ansible_managed}
DEFAULT_MODULE_ARGS:
name: Adhoc default arguments
default: ~
description:
- This sets the default arguments to pass to the ``ansible`` adhoc binary if no ``-a`` is specified.
env: [{name: ANSIBLE_MODULE_ARGS}]
ini:
- {key: module_args, section: defaults}
DEFAULT_MODULE_COMPRESSION:
name: Python module compression
default: ZIP_DEFLATED
description: Compression scheme to use when transferring Python modules to the target.
env: []
ini:
- {key: module_compression, section: defaults}
# vars:
# - name: ansible_module_compression
DEFAULT_MODULE_NAME:
name: Default adhoc module
default: command
description: "Module to use with the ``ansible`` AdHoc command, if none is specified via ``-m``."
env: []
ini:
- {key: module_name, section: defaults}
DEFAULT_MODULE_PATH:
name: Modules Path
description: Colon-separated paths in which Ansible will search for Modules.
default: '{{ ANSIBLE_HOME ~ "/plugins/modules:/usr/share/ansible/plugins/modules" }}'
env: [{name: ANSIBLE_LIBRARY}]
ini:
- {key: library, section: defaults}
type: pathspec
DEFAULT_MODULE_UTILS_PATH:
name: Module Utils Path
description: Colon-separated paths in which Ansible will search for Module utils files, which are shared by modules.
default: '{{ ANSIBLE_HOME ~ "/plugins/module_utils:/usr/share/ansible/plugins/module_utils" }}'
env: [{name: ANSIBLE_MODULE_UTILS}]
ini:
- {key: module_utils, section: defaults}
type: pathspec
DEFAULT_NETCONF_PLUGIN_PATH:
name: Netconf Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/netconf:/usr/share/ansible/plugins/netconf" }}'
description: Colon-separated paths in which Ansible will search for Netconf Plugins.
env: [{name: ANSIBLE_NETCONF_PLUGINS}]
ini:
- {key: netconf_plugins, section: defaults}
type: pathspec
DEFAULT_NO_LOG:
name: No log
default: False
description: "Toggle Ansible's display and logging of task details, mainly used to avoid security disclosures."
env: [{name: ANSIBLE_NO_LOG}]
ini:
- {key: no_log, section: defaults}
type: boolean
DEFAULT_NO_TARGET_SYSLOG:
name: No syslog on target
default: False
description:
    - Toggle Ansible logging to syslog on the target when it executes tasks. On Windows hosts, this will prevent
      newer-style PowerShell modules from writing to the event log.
env: [{name: ANSIBLE_NO_TARGET_SYSLOG}]
ini:
- {key: no_target_syslog, section: defaults}
vars:
- name: ansible_no_target_syslog
version_added: '2.10'
type: boolean
yaml: {key: defaults.no_target_syslog}
DEFAULT_NULL_REPRESENTATION:
name: Represent a null
default: ~
description: What templating should return as a 'null' value. When not set it will let Jinja2 decide.
env: [{name: ANSIBLE_NULL_REPRESENTATION}]
ini:
- {key: null_representation, section: defaults}
type: raw
DEFAULT_POLL_INTERVAL:
name: Async poll interval
default: 15
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how often to check back on the status of those tasks when an explicit poll interval is not supplied.
The default is a reasonably moderate 15 seconds which is a tradeoff between checking in frequently and
providing a quick turnaround when something may have completed.
env: [{name: ANSIBLE_POLL_INTERVAL}]
ini:
- {key: poll_interval, section: defaults}
type: integer
DEFAULT_PRIVATE_KEY_FILE:
name: Private key file
default: ~
description:
- Option for connections using a certificate or key file to authenticate, rather than an agent or passwords,
you can set the default value here to avoid re-specifying ``--private-key`` with every invocation.
env: [{name: ANSIBLE_PRIVATE_KEY_FILE}]
ini:
- {key: private_key_file, section: defaults}
type: path
DEFAULT_PRIVATE_ROLE_VARS:
name: Private role variables
default: False
description:
- Makes role variables inaccessible from other roles.
- This was introduced as a way to reset role variables to default values if
a role is used more than once in a playbook.
    - Starting in version '2.17', M(ansible.builtin.include_role) and M(ansible.builtin.import_role) can override this via the C(public) parameter.
env: [{name: ANSIBLE_PRIVATE_ROLE_VARS}]
ini:
- {key: private_role_vars, section: defaults}
type: boolean
yaml: {key: defaults.private_role_vars}
DEFAULT_REMOTE_PORT:
name: Remote port
default: ~
description: Port to use in remote connections, when blank it will use the connection plugin default.
env: [{name: ANSIBLE_REMOTE_PORT}]
ini:
- {key: remote_port, section: defaults}
type: integer
yaml: {key: defaults.remote_port}
DEFAULT_REMOTE_USER:
name: Login/Remote User
description:
    - Sets the login user for the target machines.
- "When blank it uses the connection plugin's default, normally the user currently executing Ansible."
env: [{name: ANSIBLE_REMOTE_USER}]
ini:
- {key: remote_user, section: defaults}
DEFAULT_ROLES_PATH:
name: Roles path
default: '{{ ANSIBLE_HOME ~ "/roles:/usr/share/ansible/roles:/etc/ansible/roles" }}'
description: Colon-separated paths in which Ansible will search for Roles.
env: [{name: ANSIBLE_ROLES_PATH}]
expand_relative_paths: True
ini:
- {key: roles_path, section: defaults}
type: pathspec
yaml: {key: defaults.roles_path}
DEFAULT_SELINUX_SPECIAL_FS:
name: Problematic file systems
default: fuse, nfs, vboxsf, ramfs, 9p, vfat
description:
- "Some filesystems do not support safe operations and/or return inconsistent errors,
this setting makes Ansible 'tolerate' those in the list without causing fatal errors."
- Data corruption may occur and writes are not always verified when a filesystem is in the list.
env:
- name: ANSIBLE_SELINUX_SPECIAL_FS
version_added: "2.9"
ini:
- {key: special_context_filesystems, section: selinux}
type: list
DEFAULT_STDOUT_CALLBACK:
name: Main display callback plugin
default: default
description:
- "Set the main callback used to display Ansible output. You can only have one at a time."
- You can have many other callbacks, but just one can be in charge of stdout.
- See :ref:`callback_plugins` for a list of available options.
env: [{name: ANSIBLE_STDOUT_CALLBACK}]
ini:
- {key: stdout_callback, section: defaults}
EDITOR:
name: editor application to use
default: vi
  description:
    - For the cases in which Ansible needs to open a file in an editor, this chooses the application to use.
ini:
- section: defaults
key: editor
version_added: '2.15'
env:
- name: ANSIBLE_EDITOR
version_added: '2.15'
- name: EDITOR
ENABLE_TASK_DEBUGGER:
name: Whether to enable the task debugger
default: False
description:
    - Whether or not to enable the task debugger; this previously was done as a strategy plugin.
    - Now all strategy plugins can inherit this behavior. The debugger defaults to activating when
      a task is failed or unreachable. Use the debugger keyword for more flexibility.
type: boolean
env: [{name: ANSIBLE_ENABLE_TASK_DEBUGGER}]
ini:
- {key: enable_task_debugger, section: defaults}
version_added: "2.5"
TASK_DEBUGGER_IGNORE_ERRORS:
name: Whether a failed task with ignore_errors=True will still invoke the debugger
default: True
description:
- This option defines whether the task debugger will be invoked on a failed task when ignore_errors=True
is specified.
- True specifies that the debugger will honor ignore_errors, and False will not honor ignore_errors.
type: boolean
env: [{name: ANSIBLE_TASK_DEBUGGER_IGNORE_ERRORS}]
ini:
- {key: task_debugger_ignore_errors, section: defaults}
version_added: "2.7"
DEFAULT_STRATEGY:
name: Implied strategy
default: 'linear'
description: Set the default strategy used for plays.
env: [{name: ANSIBLE_STRATEGY}]
ini:
- {key: strategy, section: defaults}
version_added: "2.3"
DEFAULT_STRATEGY_PLUGIN_PATH:
name: Strategy Plugins Path
description: Colon-separated paths in which Ansible will search for Strategy Plugins.
default: '{{ ANSIBLE_HOME ~ "/plugins/strategy:/usr/share/ansible/plugins/strategy" }}'
env: [{name: ANSIBLE_STRATEGY_PLUGINS}]
ini:
- {key: strategy_plugins, section: defaults}
type: pathspec
DEFAULT_SU:
default: False
description: 'Toggle the use of "su" for tasks.'
env: [{name: ANSIBLE_SU}]
ini:
- {key: su, section: defaults}
type: boolean
yaml: {key: defaults.su}
DEFAULT_SYSLOG_FACILITY:
name: syslog facility
default: LOG_USER
description: Syslog facility to use when Ansible logs to the remote target.
env: [{name: ANSIBLE_SYSLOG_FACILITY}]
ini:
- {key: syslog_facility, section: defaults}
DEFAULT_TERMINAL_PLUGIN_PATH:
name: Terminal Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/terminal:/usr/share/ansible/plugins/terminal" }}'
description: Colon-separated paths in which Ansible will search for Terminal Plugins.
env: [{name: ANSIBLE_TERMINAL_PLUGINS}]
ini:
- {key: terminal_plugins, section: defaults}
type: pathspec
DEFAULT_TEST_PLUGIN_PATH:
name: Jinja2 Test Plugins Path
description: Colon-separated paths in which Ansible will search for Jinja2 Test Plugins.
default: '{{ ANSIBLE_HOME ~ "/plugins/test:/usr/share/ansible/plugins/test" }}'
env: [{name: ANSIBLE_TEST_PLUGINS}]
ini:
- {key: test_plugins, section: defaults}
type: pathspec
DEFAULT_TIMEOUT:
name: Connection timeout
default: 10
description: This is the default timeout for connection plugins to use.
env: [{name: ANSIBLE_TIMEOUT}]
ini:
- {key: timeout, section: defaults}
type: integer
DEFAULT_TRANSPORT:
name: Connection plugin
default: ssh
description:
- Can be any connection plugin available to your ansible installation.
    - There is also a (DEPRECATED) special 'smart' option that will toggle between 'ssh' and 'paramiko' depending on controller OS and ssh versions.
env: [{name: ANSIBLE_TRANSPORT}]
ini:
- {key: transport, section: defaults}
DEFAULT_UNDEFINED_VAR_BEHAVIOR:
name: Jinja2 fail on undefined
default: True
version_added: "1.3"
description:
- When True, this causes ansible templating to fail steps that reference variable names that are likely typoed.
- "Otherwise, any '{{ template_expression }}' that contains undefined variables will be rendered in a template or ansible action line exactly as written."
env: [{name: ANSIBLE_ERROR_ON_UNDEFINED_VARS}]
ini:
- {key: error_on_undefined_vars, section: defaults}
type: boolean
DEFAULT_VARS_PLUGIN_PATH:
name: Vars Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/vars:/usr/share/ansible/plugins/vars" }}'
description: Colon-separated paths in which Ansible will search for Vars Plugins.
env: [{name: ANSIBLE_VARS_PLUGINS}]
ini:
- {key: vars_plugins, section: defaults}
type: pathspec
# TODO: unused?
#DEFAULT_VAR_COMPRESSION_LEVEL:
# default: 0
# description: 'TODO: write it'
# env: [{name: ANSIBLE_VAR_COMPRESSION_LEVEL}]
# ini:
# - {key: var_compression_level, section: defaults}
# type: integer
# yaml: {key: defaults.var_compression_level}
DEFAULT_VAULT_ID_MATCH:
name: Force vault id match
default: False
description: 'If true, decrypting vaults with a vault id will only try the password from the matching vault-id.'
env: [{name: ANSIBLE_VAULT_ID_MATCH}]
ini:
- {key: vault_id_match, section: defaults}
yaml: {key: defaults.vault_id_match}
DEFAULT_VAULT_IDENTITY:
name: Vault id label
default: default
description: 'The label to use for the default vault id label in cases where a vault id label is not provided.'
env: [{name: ANSIBLE_VAULT_IDENTITY}]
ini:
- {key: vault_identity, section: defaults}
yaml: {key: defaults.vault_identity}
VAULT_ENCRYPT_SALT:
name: Vault salt to use for encryption
default: ~
description: 'The salt to use for the vault encryption. If it is not provided, a random salt will be used.'
env: [{name: ANSIBLE_VAULT_ENCRYPT_SALT}]
ini:
- {key: vault_encrypt_salt, section: defaults}
version_added: '2.15'
DEFAULT_VAULT_ENCRYPT_IDENTITY:
name: Vault id to use for encryption
description: 'The vault_id to use for encrypting by default. If multiple vault_ids are provided, this specifies which to use for encryption. The ``--encrypt-vault-id`` CLI option overrides the configured value.'
env: [{name: ANSIBLE_VAULT_ENCRYPT_IDENTITY}]
ini:
- {key: vault_encrypt_identity, section: defaults}
yaml: {key: defaults.vault_encrypt_identity}
DEFAULT_VAULT_IDENTITY_LIST:
name: Default vault ids
default: []
description: 'A list of vault-ids to use by default. Equivalent to multiple ``--vault-id`` args. Vault-ids are tried in order.'
env: [{name: ANSIBLE_VAULT_IDENTITY_LIST}]
ini:
- {key: vault_identity_list, section: defaults}
type: list
yaml: {key: defaults.vault_identity_list}
DEFAULT_VAULT_PASSWORD_FILE:
name: Vault password file
default: ~
description:
- 'The vault password file to use. Equivalent to ``--vault-password-file`` or ``--vault-id``.'
- If executable, it will be run and the resulting stdout will be used as the password.
env: [{name: ANSIBLE_VAULT_PASSWORD_FILE}]
ini:
- {key: vault_password_file, section: defaults}
type: path
yaml: {key: defaults.vault_password_file}
DEFAULT_VERBOSITY:
name: Verbosity
default: 0
description: Sets the default verbosity, equivalent to the number of ``-v`` passed in the command line.
env: [{name: ANSIBLE_VERBOSITY}]
ini:
- {key: verbosity, section: defaults}
type: integer
DEPRECATION_WARNINGS:
name: Deprecation messages
default: True
description: "Toggle to control the showing of deprecation warnings"
env: [{name: ANSIBLE_DEPRECATION_WARNINGS}]
ini:
- {key: deprecation_warnings, section: defaults}
type: boolean
DEVEL_WARNING:
name: Running devel warning
default: True
description: Toggle to control showing warnings related to running devel.
env: [{name: ANSIBLE_DEVEL_WARNING}]
ini:
- {key: devel_warning, section: defaults}
type: boolean
DIFF_ALWAYS:
name: Show differences
default: False
description: Configuration toggle to tell modules to show differences when in 'changed' status, equivalent to ``--diff``.
env: [{name: ANSIBLE_DIFF_ALWAYS}]
ini:
- {key: always, section: diff}
type: bool
DIFF_CONTEXT:
name: Difference context
default: 3
description: Number of lines of context to show when displaying the differences between files.
env: [{name: ANSIBLE_DIFF_CONTEXT}]
ini:
- {key: context, section: diff}
type: integer
DISPLAY_ARGS_TO_STDOUT:
name: Show task arguments
default: False
description:
- "Normally ``ansible-playbook`` will print a header for each task that is run.
These headers will contain the name: field from the task if you specified one.
If you didn't then ``ansible-playbook`` uses the task's action to help you tell which task is presently running.
Sometimes you run many of the same action and so you want more information about the task to differentiate it from others of the same action.
If you set this variable to True in the config then ``ansible-playbook`` will also include the task's arguments in the header."
- "This setting defaults to False because there is a chance that you have sensitive values in your parameters and
you do not want those to be printed."
- "If you set this to True you should be sure that you have secured your environment's stdout
(no one can shoulder surf your screen and you aren't saving stdout to an insecure file) or
made sure that all of your playbooks explicitly added the ``no_log: True`` parameter to tasks that have sensitive values.
See :ref:`keep_secret_data` for more information."
env: [{name: ANSIBLE_DISPLAY_ARGS_TO_STDOUT}]
ini:
- {key: display_args_to_stdout, section: defaults}
type: boolean
version_added: "2.1"
DISPLAY_SKIPPED_HOSTS:
name: Show skipped results
default: True
description: "Toggle to control displaying skipped task/host entries in a task in the default callback."
env:
- name: ANSIBLE_DISPLAY_SKIPPED_HOSTS
ini:
- {key: display_skipped_hosts, section: defaults}
type: boolean
DOCSITE_ROOT_URL:
name: Root docsite URL
default: https://docs.ansible.com/ansible-core/
description: Root docsite URL used to generate docs URLs in warning/error text;
must be an absolute URL with a valid scheme and trailing slash.
ini:
- {key: docsite_root_url, section: defaults}
version_added: "2.8"
DUPLICATE_YAML_DICT_KEY:
name: Controls ansible behaviour when finding duplicate keys in YAML.
default: warn
description:
- By default, Ansible will issue a warning when a duplicate dict key is encountered in YAML.
- These warnings can be silenced by setting this option to 'ignore'.
env: [{name: ANSIBLE_DUPLICATE_YAML_DICT_KEY}]
ini:
- {key: duplicate_dict_key, section: defaults}
type: string
choices: &basic_error2
error: issue a 'fatal' error and stop the play
warn: issue a warning but continue
ignore: just continue silently
version_added: "2.9"
ERROR_ON_MISSING_HANDLER:
name: Missing handler error
default: True
description: "Toggle to allow missing handlers to become a warning instead of an error when notifying."
env: [{name: ANSIBLE_ERROR_ON_MISSING_HANDLER}]
ini:
- {key: error_on_missing_handler, section: defaults}
type: boolean
CONNECTION_FACTS_MODULES:
name: Map of connections to fact modules
default:
# use ansible.legacy names on unqualified facts modules to allow library/ overrides
asa: ansible.legacy.asa_facts
cisco.asa.asa: cisco.asa.asa_facts
eos: ansible.legacy.eos_facts
arista.eos.eos: arista.eos.eos_facts
frr: ansible.legacy.frr_facts
frr.frr.frr: frr.frr.frr_facts
ios: ansible.legacy.ios_facts
cisco.ios.ios: cisco.ios.ios_facts
iosxr: ansible.legacy.iosxr_facts
cisco.iosxr.iosxr: cisco.iosxr.iosxr_facts
junos: ansible.legacy.junos_facts
junipernetworks.junos.junos: junipernetworks.junos.junos_facts
nxos: ansible.legacy.nxos_facts
cisco.nxos.nxos: cisco.nxos.nxos_facts
vyos: ansible.legacy.vyos_facts
vyos.vyos.vyos: vyos.vyos.vyos_facts
exos: ansible.legacy.exos_facts
extreme.exos.exos: extreme.exos.exos_facts
slxos: ansible.legacy.slxos_facts
extreme.slxos.slxos: extreme.slxos.slxos_facts
voss: ansible.legacy.voss_facts
extreme.voss.voss: extreme.voss.voss_facts
ironware: ansible.legacy.ironware_facts
community.network.ironware: community.network.ironware_facts
description: "Which modules to run during a play's fact gathering stage based on connection"
type: dict
FACTS_MODULES:
name: Gather Facts Modules
default:
- smart
description:
- "Which modules to run during a play's fact gathering stage, using the default of 'smart' will try to figure it out based on connection type."
- "If adding your own modules but you still want to use the default Ansible facts, you will want to include 'setup'
or corresponding network module to the list (if you add 'smart', Ansible will also figure it out)."
- "This does not affect explicit calls to the 'setup' module, but does always affect the 'gather_facts' action (implicit or explicit)."
env: [{name: ANSIBLE_FACTS_MODULES}]
ini:
- {key: facts_modules, section: defaults}
type: list
vars:
- name: ansible_facts_modules
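# Sketch of an ansible.cfg entry that keeps default fact gathering while adding
# a custom facts module ('my_custom_facts' is a hypothetical name):
#
#   [defaults]
#   facts_modules = smart, my_custom_facts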
GALAXY_IGNORE_CERTS:
name: Galaxy validate certs
description:
- If set to yes, ansible-galaxy will not validate TLS certificates.
This can be useful for testing against a server with a self-signed certificate.
env: [{name: ANSIBLE_GALAXY_IGNORE}]
ini:
- {key: ignore_certs, section: galaxy}
type: boolean
GALAXY_SERVER_TIMEOUT:
name: Default timeout to use for API calls
description:
- The default timeout for Galaxy API calls. Galaxy servers that don't configure a specific timeout will fall back to this value.
env: [{name: ANSIBLE_GALAXY_SERVER_TIMEOUT}]
default: 60
ini:
- {key: server_timeout, section: galaxy}
type: int
GALAXY_ROLE_SKELETON:
name: Galaxy role skeleton directory
description: Role skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy``/``ansible-galaxy role``, same as ``--role-skeleton``.
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON}]
ini:
- {key: role_skeleton, section: galaxy}
type: path
GALAXY_ROLE_SKELETON_IGNORE:
name: Galaxy role skeleton ignore
default: ["^.git$", "^.*/.git_keep$"]
description: patterns of files to ignore inside a Galaxy role or collection skeleton directory.
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON_IGNORE}]
ini:
- {key: role_skeleton_ignore, section: galaxy}
type: list
GALAXY_COLLECTION_SKELETON:
name: Galaxy collection skeleton directory
description: Collection skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy collection``, same as ``--collection-skeleton``.
env: [{name: ANSIBLE_GALAXY_COLLECTION_SKELETON}]
ini:
- {key: collection_skeleton, section: galaxy}
type: path
GALAXY_COLLECTION_SKELETON_IGNORE:
name: Galaxy collection skeleton ignore
default: ["^.git$", "^.*/.git_keep$"]
description: patterns of files to ignore inside a Galaxy collection skeleton directory.
env: [{name: ANSIBLE_GALAXY_COLLECTION_SKELETON_IGNORE}]
ini:
- {key: collection_skeleton_ignore, section: galaxy}
type: list
GALAXY_COLLECTIONS_PATH_WARNING:
name: "ansible-galaxy collection install collections path warnings"
description: "whether ``ansible-galaxy collection install`` should warn about ``--collections-path`` missing from configured :ref:`collections_paths`."
default: true
type: bool
env: [{name: ANSIBLE_GALAXY_COLLECTIONS_PATH_WARNING}]
ini:
- {key: collections_path_warning, section: galaxy}
version_added: "2.16"
# TODO: unused?
#GALAXY_SCMS:
# name: Galaxy SCMS
# default: git, hg
# description: Available galaxy source control management systems.
# env: [{name: ANSIBLE_GALAXY_SCMS}]
# ini:
# - {key: scms, section: galaxy}
# type: list
GALAXY_SERVER:
default: https://galaxy.ansible.com
description: "URL to prepend when roles don't specify the full URI, assume they are referencing this server as the source."
env: [{name: ANSIBLE_GALAXY_SERVER}]
ini:
- {key: server, section: galaxy}
yaml: {key: galaxy.server}
GALAXY_SERVER_LIST:
description:
- A list of Galaxy servers to use when installing a collection.
- The value corresponds to the config ini header ``[galaxy_server.{{item}}]`` which defines the server details.
- 'See :ref:`galaxy_server_config` for more details on how to define a Galaxy server.'
- The order of servers in this list is used as the order in which a collection is resolved.
- Setting this config option will ignore the :ref:`galaxy_server` config option.
env: [{name: ANSIBLE_GALAXY_SERVER_LIST}]
ini:
- {key: server_list, section: galaxy}
type: list
version_added: "2.9"
GALAXY_TOKEN_PATH:
default: '{{ ANSIBLE_HOME ~ "/galaxy_token" }}'
description: "Local path to galaxy access token file"
env: [{name: ANSIBLE_GALAXY_TOKEN_PATH}]
ini:
- {key: token_path, section: galaxy}
type: path
version_added: "2.9"
GALAXY_DISPLAY_PROGRESS:
default: ~
description:
- Some steps in ``ansible-galaxy`` display a progress wheel which can cause issues on certain displays or when
outputting the stdout to a file.
- This config option controls whether the display wheel is shown or not.
- The default is to show the display wheel if stdout has a tty.
env: [{name: ANSIBLE_GALAXY_DISPLAY_PROGRESS}]
ini:
- {key: display_progress, section: galaxy}
type: bool
version_added: "2.10"
GALAXY_CACHE_DIR:
default: '{{ ANSIBLE_HOME ~ "/galaxy_cache" }}'
description:
- The directory that stores cached responses from a Galaxy server.
- This is only used by the ``ansible-galaxy collection install`` and ``download`` commands.
- Cache files inside this dir will be ignored if they are world writable.
env:
- name: ANSIBLE_GALAXY_CACHE_DIR
ini:
- section: galaxy
key: cache_dir
type: path
version_added: '2.11'
GALAXY_DISABLE_GPG_VERIFY:
default: false
type: bool
env:
- name: ANSIBLE_GALAXY_DISABLE_GPG_VERIFY
ini:
- section: galaxy
key: disable_gpg_verify
description:
- Disable GPG signature verification during collection installation.
version_added: '2.13'
GALAXY_GPG_KEYRING:
type: path
env:
- name: ANSIBLE_GALAXY_GPG_KEYRING
ini:
- section: galaxy
key: gpg_keyring
description:
- Configure the keyring used for GPG signature verification during collection installation and verification.
version_added: '2.13'
GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES:
type: list
env:
- name: ANSIBLE_GALAXY_IGNORE_SIGNATURE_STATUS_CODES
ini:
- section: galaxy
key: ignore_signature_status_codes
description:
- A list of GPG status codes to ignore during GPG signature verification.
See L(https://github.com/gpg/gnupg/blob/master/doc/DETAILS#general-status-codes) for status code descriptions.
- If fewer signatures successfully verify the collection than `GALAXY_REQUIRED_VALID_SIGNATURE_COUNT`,
signature verification will fail even if all error codes are ignored.
choices:
- EXPSIG
- EXPKEYSIG
- REVKEYSIG
- BADSIG
- ERRSIG
- NO_PUBKEY
- MISSING_PASSPHRASE
- BAD_PASSPHRASE
- NODATA
- UNEXPECTED
- ERROR
- FAILURE
- BADARMOR
- KEYEXPIRED
- KEYREVOKED
- NO_SECKEY
GALAXY_REQUIRED_VALID_SIGNATURE_COUNT:
type: str
default: 1
env:
- name: ANSIBLE_GALAXY_REQUIRED_VALID_SIGNATURE_COUNT
ini:
- section: galaxy
key: required_valid_signature_count
description:
- The number of signatures that must be successful during GPG signature verification while installing or verifying collections.
- This should be a positive integer or ``all`` to indicate that all signatures must successfully validate the collection.
- Prepend ``+`` to the value to fail if no valid signatures are found for the collection.
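# Illustrative ansible.cfg values covering the accepted forms described above
# (each shown on its own; pick one):
#
#   [galaxy]
#   required_valid_signature_count = 2    (at least two signatures must verify)
#   required_valid_signature_count = all  (every provided signature must verify)
#   required_valid_signature_count = +1   (like 1, but fail when no valid signatures are found)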
HOST_KEY_CHECKING:
# note: constant not in use by ssh plugin anymore
# TODO: check non ssh connection plugins for use/migration
name: Check host keys
default: True
description: 'Set this to "False" if you want to avoid host key checking by the underlying tools Ansible uses to connect to the host'
env: [{name: ANSIBLE_HOST_KEY_CHECKING}]
ini:
- {key: host_key_checking, section: defaults}
type: boolean
HOST_PATTERN_MISMATCH:
name: Control host pattern mismatch behaviour
default: 'warning'
description: This setting changes the behaviour of mismatched host patterns; it allows you to force a fatal error, a warning, or to just ignore it.
env: [{name: ANSIBLE_HOST_PATTERN_MISMATCH}]
ini:
- {key: host_pattern_mismatch, section: inventory}
choices:
<<: *basic_error
version_added: "2.8"
INTERPRETER_PYTHON:
name: Python interpreter path (or automatic discovery behavior) used for module execution
default: auto
env: [{name: ANSIBLE_PYTHON_INTERPRETER}]
ini:
- {key: interpreter_python, section: defaults}
vars:
- {name: ansible_python_interpreter}
version_added: "2.8"
description:
- Path to the Python interpreter to be used for module execution on remote targets, or an automatic discovery mode.
Supported discovery modes are ``auto`` (the default), ``auto_silent``, ``auto_legacy``, and ``auto_legacy_silent``.
All discovery modes employ a lookup table to use the included system Python (on distributions known to include one),
falling back to a fixed ordered list of well-known Python interpreter locations if a platform-specific default is not
available. The fallback behavior will issue a warning that the interpreter should be set explicitly (since interpreters
installed later may change which one is used). This warning behavior can be disabled by setting ``auto_silent`` or
``auto_legacy_silent``. The value of ``auto_legacy`` provides all the same behavior, but for backward-compatibility
with older Ansible releases that always defaulted to ``/usr/bin/python``, will use that interpreter if present.
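# Hedged examples of the usual override points (host name and path are hypothetical):
#
#   # ansible.cfg
#   [defaults]
#   interpreter_python = auto_silent
#
#   # or per host in an ini inventory
#   web1 ansible_python_interpreter=/usr/bin/python3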
_INTERPRETER_PYTHON_DISTRO_MAP:
name: Mapping of known included platform pythons for various Linux distros
default:
redhat:
'6': /usr/bin/python
'8': /usr/libexec/platform-python
'9': /usr/bin/python3
debian:
'8': /usr/bin/python
'10': /usr/bin/python3
fedora:
'23': /usr/bin/python3
ubuntu:
'14': /usr/bin/python
'16': /usr/bin/python3
version_added: "2.8"
# FUTURE: add inventory override once we're sure it can't be abused by a rogue target
# FUTURE: add a platform layer to the map so we could use for, eg, freebsd/macos/etc?
INTERPRETER_PYTHON_FALLBACK:
name: Ordered list of Python interpreters to check for in discovery
default:
- python3.12
- python3.11
- python3.10
- python3.9
- python3.8
- python3.7
- python3.6
- /usr/bin/python3
- /usr/libexec/platform-python
- python2.7
- /usr/bin/python
- python
vars:
- name: ansible_interpreter_python_fallback
type: list
version_added: "2.8"
TRANSFORM_INVALID_GROUP_CHARS:
name: Transform invalid characters in group names
default: 'never'
description:
- Make ansible transform invalid characters in group names supplied by inventory sources.
env: [{name: ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS}]
ini:
- {key: force_valid_group_names, section: defaults}
type: string
choices:
always: it will replace any invalid characters with '_' (underscore) and warn the user
never: it will allow for the group name but warn about the issue
ignore: it does the same as 'never', without issuing a warning
silently: it does the same as 'always', without issuing a warning
version_added: '2.8'
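# For illustration (group name is made up): with 'always', an inventory group
# named "web-servers.prod" would be rewritten to "web_servers_prod" with a
# warning; with 'never' the original name is kept and only the warning is shown.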
INVALID_TASK_ATTRIBUTE_FAILED:
name: Controls whether invalid attributes for a task result in errors instead of warnings
default: True
description: If 'false', invalid attributes for a task will result in warnings instead of errors.
type: boolean
env:
- name: ANSIBLE_INVALID_TASK_ATTRIBUTE_FAILED
ini:
- key: invalid_task_attribute_failed
section: defaults
version_added: "2.7"
INVENTORY_ANY_UNPARSED_IS_FAILED:
name: Controls whether any unparsable inventory source is a fatal error
default: False
description: >
If 'true', it is a fatal error when any given inventory source
cannot be successfully parsed by any available inventory plugin;
otherwise, this situation only attracts a warning.
type: boolean
env: [{name: ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED}]
ini:
- {key: any_unparsed_is_failed, section: inventory}
version_added: "2.7"
INVENTORY_CACHE_ENABLED:
name: Inventory caching enabled
default: False
description:
- Toggle to turn on inventory caching.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE}]
ini:
- {key: cache, section: inventory}
type: bool
INVENTORY_CACHE_PLUGIN:
name: Inventory cache plugin
description:
- The plugin for caching inventory.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN}]
ini:
- {key: cache_plugin, section: inventory}
INVENTORY_CACHE_PLUGIN_CONNECTION:
name: Inventory cache plugin URI to override the defaults section
description:
- The inventory cache connection.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE_CONNECTION}]
ini:
- {key: cache_connection, section: inventory}
INVENTORY_CACHE_PLUGIN_PREFIX:
name: Inventory cache plugin table prefix
description:
- The table prefix for the cache plugin.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN_PREFIX}]
default: ansible_inventory_
ini:
- {key: cache_prefix, section: inventory}
INVENTORY_CACHE_TIMEOUT:
name: Inventory cache plugin expiration timeout
description:
- Expiration timeout for the inventory cache plugin data.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
default: 3600
env: [{name: ANSIBLE_INVENTORY_CACHE_TIMEOUT}]
ini:
- {key: cache_timeout, section: inventory}
INVENTORY_ENABLED:
name: Active Inventory plugins
default: ['host_list', 'script', 'auto', 'yaml', 'ini', 'toml']
description: List of enabled inventory plugins; it also determines the order in which they are used.
env: [{name: ANSIBLE_INVENTORY_ENABLED}]
ini:
- {key: enable_plugins, section: inventory}
type: list
INVENTORY_EXPORT:
name: Set ansible-inventory into export mode
default: False
description: Controls if ansible-inventory will accurately reflect Ansible's view into inventory or if it is optimized for exporting.
env: [{name: ANSIBLE_INVENTORY_EXPORT}]
ini:
- {key: export, section: inventory}
type: bool
INVENTORY_IGNORE_EXTS:
name: Inventory ignore extensions
default: "{{(REJECT_EXTS + ('.orig', '.ini', '.cfg', '.retry'))}}"
description: List of extensions to ignore when using a directory as an inventory source.
env: [{name: ANSIBLE_INVENTORY_IGNORE}]
ini:
- {key: inventory_ignore_extensions, section: defaults}
- {key: ignore_extensions, section: inventory}
type: list
INVENTORY_IGNORE_PATTERNS:
name: Inventory ignore patterns
default: []
description: List of patterns to ignore when using a directory as an inventory source.
env: [{name: ANSIBLE_INVENTORY_IGNORE_REGEX}]
ini:
- {key: inventory_ignore_patterns, section: defaults}
- {key: ignore_patterns, section: inventory}
type: list
INVENTORY_UNPARSED_IS_FAILED:
name: Unparsed Inventory failure
default: False
description: >
If 'true' it is a fatal error if every single potential inventory
source fails to parse, otherwise, this situation will only attract a
warning.
env: [{name: ANSIBLE_INVENTORY_UNPARSED_FAILED}]
ini:
- {key: unparsed_is_failed, section: inventory}
type: bool
JINJA2_NATIVE_WARNING:
name: Running older than required Jinja version for jinja2_native warning
default: True
description: Toggle to control showing warnings related to running a Jinja version
older than required for jinja2_native.
env:
- name: ANSIBLE_JINJA2_NATIVE_WARNING
deprecated:
why: This option is no longer used in the Ansible Core code base.
version: "2.17"
ini:
- {key: jinja2_native_warning, section: defaults}
type: boolean
MAX_FILE_SIZE_FOR_DIFF:
name: Diff maximum file size
default: 104448
description: Maximum size of files to be considered for diff display.
env: [{name: ANSIBLE_MAX_DIFF_SIZE}]
ini:
- {key: max_diff_size, section: defaults}
type: int
NETWORK_GROUP_MODULES:
name: Network module families
default: [eos, nxos, ios, iosxr, junos, enos, ce, vyos, sros, dellos9, dellos10, dellos6, asa, aruba, aireos, bigip, ironware, onyx, netconf, exos, voss, slxos]
description: 'TODO: write it'
env:
- name: ANSIBLE_NETWORK_GROUP_MODULES
ini:
- {key: network_group_modules, section: defaults}
type: list
yaml: {key: defaults.network_group_modules}
INJECT_FACTS_AS_VARS:
default: True
description:
- Facts are available inside the `ansible_facts` variable; this setting also pushes them as their own vars in the main namespace.
- Unlike inside the `ansible_facts` dictionary, these will have an `ansible_` prefix.
env: [{name: ANSIBLE_INJECT_FACT_VARS}]
ini:
- {key: inject_facts_as_vars, section: defaults}
type: boolean
version_added: "2.5"
MODULE_IGNORE_EXTS:
name: Module ignore extensions
default: "{{(REJECT_EXTS + ('.yaml', '.yml', '.ini'))}}"
description:
- List of extensions to ignore when looking for modules to load.
- This is for rejecting script and binary module fallback extensions.
env: [{name: ANSIBLE_MODULE_IGNORE_EXTS}]
ini:
- {key: module_ignore_exts, section: defaults}
type: list
MODULE_STRICT_UTF8_RESPONSE:
name: Module strict UTF-8 response
description:
- Enables whether module responses are evaluated for containing non-UTF-8 data.
- Disabling this may result in unexpected behavior.
- Only ansible-core should evaluate this configuration.
env: [{name: ANSIBLE_MODULE_STRICT_UTF8_RESPONSE}]
ini:
- {key: module_strict_utf8_response, section: defaults}
type: bool
default: True
OLD_PLUGIN_CACHE_CLEARING:
description: Previously Ansible would only clear some of the plugin loading caches when loading new roles; this led to some behaviors in which a plugin loaded in previous plays would be unexpectedly 'sticky'. This setting allows the user to return to that behavior.
env: [{name: ANSIBLE_OLD_PLUGIN_CACHE_CLEAR}]
ini:
- {key: old_plugin_cache_clear, section: defaults}
type: boolean
default: False
version_added: "2.8"
PAGER:
name: pager application to use
default: less
description:
- for the cases in which Ansible needs to return output in a pageable fashion, this chooses the application to use.
ini:
- section: defaults
key: pager
version_added: '2.15'
env:
- name: ANSIBLE_PAGER
version_added: '2.15'
- name: PAGER
PARAMIKO_HOST_KEY_AUTO_ADD:
default: False
description: 'TODO: write it'
env: [{name: ANSIBLE_PARAMIKO_HOST_KEY_AUTO_ADD}]
ini:
- {key: host_key_auto_add, section: paramiko_connection}
type: boolean
deprecated:
why: This option was moved to the plugin itself
version: "2.20"
alternatives: Use the option from the plugin itself.
PARAMIKO_LOOK_FOR_KEYS:
name: look for keys
default: True
description: 'TODO: write it'
env: [{name: ANSIBLE_PARAMIKO_LOOK_FOR_KEYS}]
ini:
- {key: look_for_keys, section: paramiko_connection}
type: boolean
deprecated:
why: This option was moved to the plugin itself
version: "2.20"
alternatives: Use the option from the plugin itself.
PERSISTENT_CONTROL_PATH_DIR:
name: Persistence socket path
default: '{{ ANSIBLE_HOME ~ "/pc" }}'
description: Path to the socket to be used by the connection persistence system.
env: [{name: ANSIBLE_PERSISTENT_CONTROL_PATH_DIR}]
ini:
- {key: control_path_dir, section: persistent_connection}
type: path
PERSISTENT_CONNECT_TIMEOUT:
name: Persistence timeout
default: 30
description: This controls how long the persistent connection will remain idle before it is destroyed.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_TIMEOUT}]
ini:
- {key: connect_timeout, section: persistent_connection}
type: integer
PERSISTENT_CONNECT_RETRY_TIMEOUT:
name: Persistence connection retry timeout
default: 15
description: This controls the retry timeout for the persistent connection to connect to the local domain socket.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_RETRY_TIMEOUT}]
ini:
- {key: connect_retry_timeout, section: persistent_connection}
type: integer
PERSISTENT_COMMAND_TIMEOUT:
name: Persistence command timeout
default: 30
description: This controls the amount of time to wait for a response from a remote device before timing out a persistent connection.
env: [{name: ANSIBLE_PERSISTENT_COMMAND_TIMEOUT}]
ini:
- {key: command_timeout, section: persistent_connection}
type: int
PLAYBOOK_DIR:
name: playbook dir override for non-playbook CLIs (ala --playbook-dir)
version_added: "2.9"
description:
- A number of non-playbook CLIs have a ``--playbook-dir`` argument; this sets the default value for it.
env: [{name: ANSIBLE_PLAYBOOK_DIR}]
ini: [{key: playbook_dir, section: defaults}]
type: path
PLAYBOOK_VARS_ROOT:
name: playbook vars files root
default: top
version_added: "2.4.1"
description:
- This sets which playbook dirs will be used as a root to process vars plugins, which includes finding host_vars/group_vars.
env: [{name: ANSIBLE_PLAYBOOK_VARS_ROOT}]
ini:
- {key: playbook_vars_root, section: defaults}
choices:
top: follows the traditional behavior of using the top playbook in the chain to find the root directory.
bottom: follows the 2.4.0 behavior of using the current playbook to find the root directory.
all: examines from the first parent to the current playbook.
PLUGIN_FILTERS_CFG:
name: Config file for limiting valid plugins
default: null
version_added: "2.5.0"
description:
- "A path to configuration for filtering which plugins installed on the system are allowed to be used."
- "See :ref:`plugin_filtering_config` for details of the filter file's format."
- " The default is /etc/ansible/plugin_filters.yml"
ini:
- key: plugin_filters_cfg
section: defaults
type: path
PYTHON_MODULE_RLIMIT_NOFILE:
name: Adjust maximum file descriptor soft limit during Python module execution
description:
- Attempts to set RLIMIT_NOFILE soft limit to the specified value when executing Python modules (can speed up subprocess usage on
Python 2.x. See https://bugs.python.org/issue11284). The value will be limited by the existing hard limit. Default
value of 0 does not attempt to adjust existing system-defined limits.
default: 0
env:
- {name: ANSIBLE_PYTHON_MODULE_RLIMIT_NOFILE}
ini:
- {key: python_module_rlimit_nofile, section: defaults}
vars:
- {name: ansible_python_module_rlimit_nofile}
version_added: '2.8'
RETRY_FILES_ENABLED:
name: Retry files
default: False
description: This controls whether a failed Ansible playbook should create a .retry file.
env: [{name: ANSIBLE_RETRY_FILES_ENABLED}]
ini:
- {key: retry_files_enabled, section: defaults}
type: bool
RETRY_FILES_SAVE_PATH:
name: Retry files path
default: ~
description:
- This sets the path in which Ansible will save .retry files when a playbook fails and retry files are enabled.
- This file will be overwritten after each run with the list of failed hosts from all plays.
env: [{name: ANSIBLE_RETRY_FILES_SAVE_PATH}]
ini:
- {key: retry_files_save_path, section: defaults}
type: path
RUN_VARS_PLUGINS:
name: When should vars plugins run relative to inventory
default: demand
description:
- This setting can be used to optimize vars_plugin usage depending on the user's inventory size and play selection.
env: [{name: ANSIBLE_RUN_VARS_PLUGINS}]
ini:
- {key: run_vars_plugins, section: defaults}
type: str
choices:
demand: will run vars_plugins relative to inventory sources anytime vars are 'demanded' by tasks.
start: will run vars_plugins relative to inventory sources after importing that inventory source.
version_added: "2.10"
SHOW_CUSTOM_STATS:
name: Display custom stats
default: False
description: 'This adds the custom stats set via the set_stats plugin to the default output.'
env: [{name: ANSIBLE_SHOW_CUSTOM_STATS}]
ini:
- {key: show_custom_stats, section: defaults}
type: bool
STRING_TYPE_FILTERS:
name: Filters to preserve strings
default: [string, to_json, to_nice_json, to_yaml, to_nice_yaml, ppretty, json]
description:
- "This list of filters avoids 'type conversion' when templating variables."
- Useful when you want to avoid conversion into lists or dictionaries for JSON strings, for example.
env: [{name: ANSIBLE_STRING_TYPE_FILTERS}]
ini:
- {key: dont_type_filters, section: jinja2}
type: list
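# Sketch of the problem this avoids (variable name is hypothetical): without
# 'to_json' in this list, a template such as
#
#   msg: "{{ mydict | to_json }}"
#
# could have its JSON string re-interpreted back into a dictionary during
# templating; listing the filter preserves the string result.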
SYSTEM_WARNINGS:
name: System warnings
default: True
description:
- Allows disabling of warnings related to potential issues on the system running Ansible itself (not on the managed hosts).
- These may include warnings about third-party packages or other conditions that should be resolved if possible.
env: [{name: ANSIBLE_SYSTEM_WARNINGS}]
ini:
- {key: system_warnings, section: defaults}
type: boolean
TAGS_RUN:
name: Run Tags
default: []
type: list
description: Default list of tags to run in your plays. Skip Tags has precedence.
env: [{name: ANSIBLE_RUN_TAGS}]
ini:
- {key: run, section: tags}
version_added: "2.5"
TAGS_SKIP:
name: Skip Tags
default: []
type: list
description: Default list of tags to skip in your plays; has precedence over Run Tags.
env: [{name: ANSIBLE_SKIP_TAGS}]
ini:
- {key: skip, section: tags}
version_added: "2.5"
TASK_TIMEOUT:
name: Task Timeout
default: 0
description:
- Set the maximum time (in seconds) that a task can run for.
- If set to 0 (the default) there is no timeout.
env: [{name: ANSIBLE_TASK_TIMEOUT}]
ini:
- {key: task_timeout, section: defaults}
type: integer
version_added: '2.10'
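# Example invocation (playbook name is hypothetical) that fails any task
# running longer than five minutes:
#
#   ANSIBLE_TASK_TIMEOUT=300 ansible-playbook site.yml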
WORKER_SHUTDOWN_POLL_COUNT:
name: Worker Shutdown Poll Count
default: 0
description:
- The maximum number of times to check Task Queue Manager worker processes to verify they have exited cleanly.
- After this limit is reached any worker processes still running will be terminated.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_COUNT}]
type: integer
version_added: '2.10'
WORKER_SHUTDOWN_POLL_DELAY:
name: Worker Shutdown Poll Delay
default: 0.1
description:
- The number of seconds to sleep between polling loops when checking Task Queue Manager worker processes to verify they have exited cleanly.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_DELAY}]
type: float
version_added: '2.10'
USE_PERSISTENT_CONNECTIONS:
name: Persistence
default: False
description: Toggles the use of persistence for connections.
env: [{name: ANSIBLE_USE_PERSISTENT_CONNECTIONS}]
ini:
- {key: use_persistent_connections, section: defaults}
type: boolean
VARIABLE_PLUGINS_ENABLED:
name: Vars plugin enabled list
default: ['host_group_vars']
description: Accept list for variable plugins that require it.
env: [{name: ANSIBLE_VARS_ENABLED}]
ini:
- {key: vars_plugins_enabled, section: defaults}
type: list
version_added: "2.10"
VARIABLE_PRECEDENCE:
name: Group variable precedence
default: ['all_inventory', 'groups_inventory', 'all_plugins_inventory', 'all_plugins_play', 'groups_plugins_inventory', 'groups_plugins_play']
description: Allows changing the group variable precedence merge order.
env: [{name: ANSIBLE_PRECEDENCE}]
ini:
- {key: precedence, section: defaults}
type: list
version_added: "2.4"
WIN_ASYNC_STARTUP_TIMEOUT:
name: Windows Async Startup Timeout
default: 5
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how long, in seconds, to wait for the task spawned by Ansible to connect back to the named pipe used
on Windows systems. The default is 5 seconds. This can be too low on slower systems, or systems under heavy load.
- This is not the total time an async command can run for, but is a separate timeout to wait for an async command to
start. The task will only start to be timed against its async_timeout once it has connected to the pipe, so the
overall maximum duration the task can take will be extended by the amount specified here.
env: [{name: ANSIBLE_WIN_ASYNC_STARTUP_TIMEOUT}]
ini:
- {key: win_async_startup_timeout, section: defaults}
type: integer
vars:
- {name: ansible_win_async_startup_timeout}
version_added: '2.10'
YAML_FILENAME_EXTENSIONS:
name: Valid YAML extensions
default: [".yml", ".yaml", ".json"]
description:
- "Check all of these extensions when looking for 'variable' files which should be YAML or JSON or vaulted versions of these."
- 'This affects vars_files, include_vars, inventory and vars plugins among others.'
env:
- name: ANSIBLE_YAML_FILENAME_EXT
ini:
- section: defaults
key: yaml_valid_extensions
type: list
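# Sketch: restrict variable-file lookups to a single extension via ansible.cfg:
#
#   [defaults]
#   yaml_valid_extensions = .yml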
NETCONF_SSH_CONFIG:
description: This variable is used to enable a bastion/jump host with the netconf connection. If set to True, the bastion/jump
host ssh settings should be present in the ~/.ssh/config file; alternatively, it can be set
to a custom ssh configuration file path from which to read the bastion/jump host settings.
env: [{name: ANSIBLE_NETCONF_SSH_CONFIG}]
ini:
- {key: ssh_config, section: netconf_connection}
yaml: {key: netconf_connection.ssh_config}
default: null
STRING_CONVERSION_ACTION:
version_added: '2.8'
description:
- Action to take when a module parameter value is converted to a string (this does not affect variables).
For string parameters, values such as '1.00', "['a', 'b',]", and 'yes', 'y', etc.
will be converted by the YAML parser unless fully quoted.
- Valid options are 'error', 'warn', and 'ignore'.
- Since 2.8, this option defaults to 'warn' but will change to 'error' in 2.12.
default: 'warn'
env:
- name: ANSIBLE_STRING_CONVERSION_ACTION
ini:
- section: defaults
key: string_conversion_action
type: string
deprecated:
why: This option is no longer used in the Ansible Core code base.
version: "2.19"
alternatives: There is no alternative at the moment. A different mechanism would have to be implemented in the current code base.
VALIDATE_ACTION_GROUP_METADATA:
version_added: '2.12'
description:
- A toggle to disable validating a collection's 'metadata' entry for a module_defaults action group.
Metadata containing unexpected fields or value types will produce a warning when this is True.
default: True
env: [{name: ANSIBLE_VALIDATE_ACTION_GROUP_METADATA}]
ini:
- section: defaults
key: validate_action_group_metadata
type: bool
VERBOSE_TO_STDERR:
version_added: '2.8'
description:
- Force 'verbose' option to use stderr instead of stdout
default: False
env:
- name: ANSIBLE_VERBOSE_TO_STDERR
ini:
- section: defaults
key: verbose_to_stderr
type: bool
...
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,666 |
Handler that should run once is played twice
|
### Summary
When calling a handler that is a task that should be run only once, the task is called twice.
### Issue Type
Bug Report
### Component Name
core
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.6]
config file = /MY_PROJECT/ansible/ansible.cfg
configured module search path = ['/MY_PROJECT/ansible/library']
ansible python module location = /home/naja/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/naja/.ansible/collections:/usr/share/ansible/collections
executable location = /home/naja/.local/bin/ansible
python version = 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /MY_PROJECT/ansible/ansible.cfg
DEFAULT_BECOME_METHOD(/MY_PROJECT/ansible/ansible.cfg) = sudo
DEFAULT_BECOME_USER(/MY_PROJECT/ansible/ansible.cfg) = root
DEFAULT_FILTER_PLUGIN_PATH(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ans>
DEFAULT_FORKS(/MY_PROJECT/ansible/ansible.cfg) = 10
DEFAULT_GATHERING(/MY_PROJECT/ansible/ansible.cfg) = explicit
DEFAULT_HASH_BEHAVIOUR(/MY_PROJECT/ansible/ansible.cfg) = merge
DEFAULT_HOST_LIST(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ansible/inve>
DEFAULT_JINJA2_NATIVE(/MY_PROJECT/ansible/ansible.cfg) = True
DEFAULT_LOAD_CALLBACK_PLUGINS(/MY_PROJECT/ansible/ansible.cfg) = True
DEFAULT_LOG_PATH(/MY_PROJECT/ansible/ansible.cfg) = /MY_PROJECT/ansible/logs/an>
DEFAULT_MODULE_PATH(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ansible/li>
DEFAULT_MODULE_UTILS_PATH(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ansi>
DEFAULT_ROLES_PATH(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ansible/rol>
DEFAULT_STDOUT_CALLBACK(/MY_PROJECT/ansible/ansible.cfg) = default.py
DEFAULT_VAULT_IDENTITY_LIST(/MY_PROJECT/ansible/ansible.cfg) = ['dts@vault/keyring-client.py', 'prod@vault/keyring-c>
HOST_KEY_CHECKING(/MY_PROJECT/ansible/ansible.cfg) = False
BECOME:
======
runas:
_____
become_user(/MY_PROJECT/ansible/ansible.cfg) = root
su:
__
become_user(/MY_PROJECT/ansible/ansible.cfg) = root
sudo:
____
become_user(/MY_PROJECT/ansible/ansible.cfg) = root
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/MY_PROJECT/ansible/ansible.cfg) = False
ssh:
___
host_key_checking(/MY_PROJECT/ansible/ansible.cfg) = False
scp_if_ssh(/MY_PROJECT/ansible/ansible.cfg) = True
timeout(/MY_PROJECT/ansible/ansible.cfg) = 60
```
### OS / Environment
Ubuntu 22.04
### Steps to Reproduce
Playbook 1 (pause will be prompted once with flush_handlers, then 2 more times at the end of the play)
```yaml
- hosts: all
handlers:
- name: Ping again
pause:
prompt: Please ping
tasks:
- name: Update something
notify: Ping again
changed_when: yes
ping:
- name: Flush handlers
meta: flush_handlers
```
Playbook 2 (ping handler will be executed twice)
```yaml
- hosts: all
handlers:
- name: Ping again
run_once: yes
ping:
tasks:
- name: Update something
notify: Ping again
changed_when: yes
ping:
```
Playbook 3 (ping handler will be executed once with flush_handlers, then 2 more times at the end of the play)
```yaml
- hosts: all
handlers:
- name: Ping again
run_once: yes
ping:
tasks:
- name: Update something
notify: Ping again
changed_when: yes
ping:
- name: Flush handlers
meta: flush_handlers
```
### Expected Results
I expect the handler to be run once, no matter the number of hosts
```
PLAY [all] ***********************************************************************************************************************************************
TASK [Update something] **********************************************************************************************************************************
changed: [host4]
changed: [host2]
changed: [host1]
changed: [host3]
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
RUNNING HANDLER [Ping again] *****************************************************************************************************************************
[Ping again]
Please ping:
^Mok: [hostX]
PLAY RECAP ***********************************************************************************************************************************************
host1 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host2 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host3 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host4 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
```console
PLAY [all] ***********************************************************************************************************************************************
TASK [Update something] **********************************************************************************************************************************
changed: [host4]
changed: [host1]
changed: [host2]
changed: [host3]
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
RUNNING HANDLER [Ping again] *****************************************************************************************************************************
[Ping again]
Please ping:
^Mok: [host2]
RUNNING HANDLER [Ping again] *****************************************************************************************************************************
[Ping again]
Please ping:
^Mok: [host4]
RUNNING HANDLER [Ping again] *****************************************************************************************************************************
[Ping again]
Please ping:
^Mok: [host3]
PLAY RECAP ***********************************************************************************************************************************************
host1 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host2 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host3 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host4 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81666
|
https://github.com/ansible/ansible/pull/81667
|
000cf1dd468a1b8db2f7db723377bd8efa909b95
|
2d5861c185fb24441e3d3919749866a6fc5c12d7
| 2023-09-08T07:56:43Z |
python
| 2023-10-03T18:43:46Z |
changelogs/fragments/81666-handlers-run_once.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,666 |
Handler that should run once is played twice
|
### Summary
When calling a handler that is a task that should be run only once, the task is called twice.
### Issue Type
Bug Report
### Component Name
core
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.6]
config file = /MY_PROJECT/ansible/ansible.cfg
configured module search path = ['/MY_PROJECT/ansible/library']
ansible python module location = /home/naja/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/naja/.ansible/collections:/usr/share/ansible/collections
executable location = /home/naja/.local/bin/ansible
python version = 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /MY_PROJECT/ansible/ansible.cfg
DEFAULT_BECOME_METHOD(/MY_PROJECT/ansible/ansible.cfg) = sudo
DEFAULT_BECOME_USER(/MY_PROJECT/ansible/ansible.cfg) = root
DEFAULT_FILTER_PLUGIN_PATH(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ans>
DEFAULT_FORKS(/MY_PROJECT/ansible/ansible.cfg) = 10
DEFAULT_GATHERING(/MY_PROJECT/ansible/ansible.cfg) = explicit
DEFAULT_HASH_BEHAVIOUR(/MY_PROJECT/ansible/ansible.cfg) = merge
DEFAULT_HOST_LIST(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ansible/inve>
DEFAULT_JINJA2_NATIVE(/MY_PROJECT/ansible/ansible.cfg) = True
DEFAULT_LOAD_CALLBACK_PLUGINS(/MY_PROJECT/ansible/ansible.cfg) = True
DEFAULT_LOG_PATH(/MY_PROJECT/ansible/ansible.cfg) = /MY_PROJECT/ansible/logs/an>
DEFAULT_MODULE_PATH(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ansible/li>
DEFAULT_MODULE_UTILS_PATH(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ansi>
DEFAULT_ROLES_PATH(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ansible/rol>
DEFAULT_STDOUT_CALLBACK(/MY_PROJECT/ansible/ansible.cfg) = default.py
DEFAULT_VAULT_IDENTITY_LIST(/MY_PROJECT/ansible/ansible.cfg) = ['dts@vault/keyring-client.py', 'prod@vault/keyring-c>
HOST_KEY_CHECKING(/MY_PROJECT/ansible/ansible.cfg) = False
BECOME:
======
runas:
_____
become_user(/MY_PROJECT/ansible/ansible.cfg) = root
su:
__
become_user(/MY_PROJECT/ansible/ansible.cfg) = root
sudo:
____
become_user(/MY_PROJECT/ansible/ansible.cfg) = root
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/MY_PROJECT/ansible/ansible.cfg) = False
ssh:
___
host_key_checking(/MY_PROJECT/ansible/ansible.cfg) = False
scp_if_ssh(/MY_PROJECT/ansible/ansible.cfg) = True
timeout(/MY_PROJECT/ansible/ansible.cfg) = 60
```
### OS / Environment
Ubuntu 22.04
### Steps to Reproduce
Playbook 1 (pause will be prompted once with flush_handlers, then 2 more times at the end of the play)
```yaml
- hosts: all
handlers:
- name: Ping again
pause:
prompt: Please ping
tasks:
- name: Update something
notify: Ping again
changed_when: yes
ping:
- name: Flush handlers
meta: flush_handlers
```
Playbook 2 (ping handler will be executed twice)
```yaml
- hosts: all
handlers:
- name: Ping again
run_once: yes
ping:
tasks:
- name: Update something
notify: Ping again
changed_when: yes
ping:
```
Playbook 3 (ping handler will be executed once with flush_handlers, then 2 more times at the end of the play)
```yaml
- hosts: all
handlers:
- name: Ping again
run_once: yes
ping:
tasks:
- name: Update something
notify: Ping again
changed_when: yes
ping:
- name: Flush handlers
meta: flush_handlers
```
### Expected Results
I expect the handler to be run once, no matter the number of hosts
```
PLAY [all] ***********************************************************************************************************************************************
TASK [Update something] **********************************************************************************************************************************
changed: [host4]
changed: [host2]
changed: [host1]
changed: [host3]
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
RUNNING HANDLER [Ping again] *****************************************************************************************************************************
[Ping again]
Please ping:
^Mok: [hostX]
PLAY RECAP ***********************************************************************************************************************************************
host1 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host2 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host3 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host4 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
```console
PLAY [all] ***********************************************************************************************************************************************
TASK [Update something] **********************************************************************************************************************************
changed: [host4]
changed: [host1]
changed: [host2]
changed: [host3]
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
RUNNING HANDLER [Ping again] *****************************************************************************************************************************
[Ping again]
Please ping:
^Mok: [host2]
RUNNING HANDLER [Ping again] *****************************************************************************************************************************
[Ping again]
Please ping:
^Mok: [host4]
RUNNING HANDLER [Ping again] *****************************************************************************************************************************
[Ping again]
Please ping:
^Mok: [host3]
PLAY RECAP ***********************************************************************************************************************************************
host1 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host2 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host3 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host4 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81666
|
https://github.com/ansible/ansible/pull/81667
|
000cf1dd468a1b8db2f7db723377bd8efa909b95
|
2d5861c185fb24441e3d3919749866a6fc5c12d7
| 2023-09-08T07:56:43Z |
python
| 2023-10-03T18:43:46Z |
lib/ansible/playbook/handler.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

from ansible.playbook.attribute import NonInheritableFieldAttribute
from ansible.playbook.task import Task
from ansible.module_utils.six import string_types


class Handler(Task):

    listen = NonInheritableFieldAttribute(isa='list', default=list, listof=string_types, static=True)

    def __init__(self, block=None, role=None, task_include=None):
        self.notified_hosts = []
        self.cached_name = False
        super(Handler, self).__init__(block=block, role=role, task_include=task_include)

    def __repr__(self):
        ''' returns a human readable representation of the handler '''
        return "HANDLER: %s" % self.get_name()

    @staticmethod
    def load(data, block=None, role=None, task_include=None, variable_manager=None, loader=None):
        t = Handler(block=block, role=role, task_include=task_include)
        return t.load_data(data, variable_manager=variable_manager, loader=loader)

    def notify_host(self, host):
        # register the notification; returns True only the first time a host is notified
        if not self.is_host_notified(host):
            self.notified_hosts.append(host)
            return True
        return False

    def remove_host(self, host):
        self.notified_hosts = [h for h in self.notified_hosts if h != host]

    def is_host_notified(self, host):
        return host in self.notified_hosts

    def serialize(self):
        # mark serialized tasks as handlers so they can be reconstructed correctly
        result = super(Handler, self).serialize()
        result['is_handler'] = True
        return result
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,666 |
Handler that should run once is played twice
|
### Summary
When calling a handler that is a task that should be run only once, the task is called twice.
### Issue Type
Bug Report
### Component Name
core
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.6]
config file = /MY_PROJECT/ansible/ansible.cfg
configured module search path = ['/MY_PROJECT/ansible/library']
ansible python module location = /home/naja/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/naja/.ansible/collections:/usr/share/ansible/collections
executable location = /home/naja/.local/bin/ansible
python version = 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /MY_PROJECT/ansible/ansible.cfg
DEFAULT_BECOME_METHOD(/MY_PROJECT/ansible/ansible.cfg) = sudo
DEFAULT_BECOME_USER(/MY_PROJECT/ansible/ansible.cfg) = root
DEFAULT_FILTER_PLUGIN_PATH(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ans>
DEFAULT_FORKS(/MY_PROJECT/ansible/ansible.cfg) = 10
DEFAULT_GATHERING(/MY_PROJECT/ansible/ansible.cfg) = explicit
DEFAULT_HASH_BEHAVIOUR(/MY_PROJECT/ansible/ansible.cfg) = merge
DEFAULT_HOST_LIST(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ansible/inve>
DEFAULT_JINJA2_NATIVE(/MY_PROJECT/ansible/ansible.cfg) = True
DEFAULT_LOAD_CALLBACK_PLUGINS(/MY_PROJECT/ansible/ansible.cfg) = True
DEFAULT_LOG_PATH(/MY_PROJECT/ansible/ansible.cfg) = /MY_PROJECT/ansible/logs/an>
DEFAULT_MODULE_PATH(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ansible/li>
DEFAULT_MODULE_UTILS_PATH(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ansi>
DEFAULT_ROLES_PATH(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ansible/rol>
DEFAULT_STDOUT_CALLBACK(/MY_PROJECT/ansible/ansible.cfg) = default.py
DEFAULT_VAULT_IDENTITY_LIST(/MY_PROJECT/ansible/ansible.cfg) = ['dts@vault/keyring-client.py', 'prod@vault/keyring-c>
HOST_KEY_CHECKING(/MY_PROJECT/ansible/ansible.cfg) = False
BECOME:
======
runas:
_____
become_user(/MY_PROJECT/ansible/ansible.cfg) = root
su:
__
become_user(/MY_PROJECT/ansible/ansible.cfg) = root
sudo:
____
become_user(/MY_PROJECT/ansible/ansible.cfg) = root
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/MY_PROJECT/ansible/ansible.cfg) = False
ssh:
___
host_key_checking(/MY_PROJECT/ansible/ansible.cfg) = False
scp_if_ssh(/MY_PROJECT/ansible/ansible.cfg) = True
timeout(/MY_PROJECT/ansible/ansible.cfg) = 60
```
### OS / Environment
Ubuntu 22.04
### Steps to Reproduce
Playbook 1 (pause will be prompted once with flush_handlers, then 2 more times at the end of the play)
```yaml
- hosts: all
handlers:
- name: Ping again
pause:
prompt: Please ping
tasks:
- name: Update something
notify: Ping again
changed_when: yes
ping:
- name: Flush handlers
meta: flush_handlers
```
Playbook 2 (ping handler will be executed twice)
```yaml
- hosts: all
handlers:
- name: Ping again
run_once: yes
ping:
tasks:
- name: Update something
notify: Ping again
changed_when: yes
ping:
```
Playbook 3 (ping handler will be executed once with flush_handlers, then 2 more times at the end of the play)
```yaml
- hosts: all
handlers:
- name: Ping again
run_once: yes
ping:
tasks:
- name: Update something
notify: Ping again
changed_when: yes
ping:
- name: Flush handlers
meta: flush_handlers
```
### Expected Results
I expect the handler to be run once, no matter the number of hosts
```
PLAY [all] ***********************************************************************************************************************************************
TASK [Update something] **********************************************************************************************************************************
changed: [host4]
changed: [host2]
changed: [host1]
changed: [host3]
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
RUNNING HANDLER [Ping again] *****************************************************************************************************************************
[Ping again]
Please ping:
^Mok: [hostX]
PLAY RECAP ***********************************************************************************************************************************************
host1 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host2 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host3 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host4 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
```console
PLAY [all] ***********************************************************************************************************************************************
TASK [Update something] **********************************************************************************************************************************
changed: [host4]
changed: [host1]
changed: [host2]
changed: [host3]
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
RUNNING HANDLER [Ping again] *****************************************************************************************************************************
[Ping again]
Please ping:
^Mok: [host2]
RUNNING HANDLER [Ping again] *****************************************************************************************************************************
[Ping again]
Please ping:
^Mok: [host4]
RUNNING HANDLER [Ping again] *****************************************************************************************************************************
[Ping again]
Please ping:
^Mok: [host3]
PLAY RECAP ***********************************************************************************************************************************************
host1 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host2 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host3 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host4 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81666
|
https://github.com/ansible/ansible/pull/81667
|
000cf1dd468a1b8db2f7db723377bd8efa909b95
|
2d5861c185fb24441e3d3919749866a6fc5c12d7
| 2023-09-08T07:56:43Z |
python
| 2023-10-03T18:43:46Z |
lib/ansible/plugins/strategy/__init__.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import cmd
import functools
import os
import pprint
import queue
import sys
import threading
import time
import typing as t
from collections import deque
from multiprocessing import Lock
from jinja2.exceptions import UndefinedError
from ansible import constants as C
from ansible import context
from ansible.errors import AnsibleError, AnsibleFileNotFound, AnsibleUndefinedVariable, AnsibleParserError
from ansible.executor import action_write_locks
from ansible.executor.play_iterator import IteratingStates, PlayIterator
from ansible.executor.process.worker import WorkerProcess
from ansible.executor.task_result import TaskResult
from ansible.executor.task_queue_manager import CallbackSend, DisplaySend, PromptSend
from ansible.module_utils.six import string_types
from ansible.module_utils.common.text.converters import to_text
from ansible.module_utils.connection import Connection, ConnectionError
from ansible.playbook.conditional import Conditional
from ansible.playbook.handler import Handler
from ansible.playbook.helpers import load_list_of_blocks
from ansible.playbook.task import Task
from ansible.playbook.task_include import TaskInclude
from ansible.plugins import loader as plugin_loader
from ansible.template import Templar
from ansible.utils.display import Display
from ansible.utils.fqcn import add_internal_fqcns
from ansible.utils.unsafe_proxy import wrap_var
from ansible.utils.vars import combine_vars, isidentifier
from ansible.vars.clean import strip_internal_keys, module_response_deepcopy
display = Display()
__all__ = ['StrategyBase']
# Entries in this list are matched as exact fact names or as start-of-string
# prefixes; regular expressions are not accepted.
ALWAYS_DELEGATE_FACT_PREFIXES = frozenset((
'discovered_interpreter_',
))
class StrategySentinel:
pass
_sentinel = StrategySentinel()
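# post_process_whens (below) re-evaluates a task's changed_when/failed_when
# expressions against the final task vars and mutates the result dict in
# place; within this strategy base it is applied to add_host/add_group
# result items, which are assembled outside the TaskExecutor.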
def post_process_whens(result, task, templar, task_vars):
cond = None
if task.changed_when:
with templar.set_temporary_context(available_variables=task_vars):
cond = Conditional(loader=templar._loader)
cond.when = task.changed_when
result['changed'] = cond.evaluate_conditional(templar, templar.available_variables)
if task.failed_when:
with templar.set_temporary_context(available_variables=task_vars):
if cond is None:
cond = Conditional(loader=templar._loader)
cond.when = task.failed_when
failed_when_result = cond.evaluate_conditional(templar, templar.available_variables)
result['failed_when_result'] = result['failed'] = failed_when_result
def _get_item_vars(result, task):
item_vars = {}
if task.loop or task.loop_with:
loop_var = result.get('ansible_loop_var', 'item')
index_var = result.get('ansible_index_var')
if loop_var in result:
item_vars[loop_var] = result[loop_var]
if index_var and index_var in result:
item_vars[index_var] = result[index_var]
if '_ansible_item_label' in result:
item_vars['_ansible_item_label'] = result['_ansible_item_label']
if 'ansible_loop' in result:
item_vars['ansible_loop'] = result['ansible_loop']
return item_vars
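# results_thread_main runs in a dedicated background thread per strategy
# instance, draining the shared final queue: TaskResults are normalized and
# buffered for _process_pending_results, while DisplaySend/CallbackSend/
# PromptSend messages from workers are serviced on the controller side.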
def results_thread_main(strategy):
while True:
try:
result = strategy._final_q.get()
if isinstance(result, StrategySentinel):
break
elif isinstance(result, DisplaySend):
dmethod = getattr(display, result.method)
dmethod(*result.args, **result.kwargs)
elif isinstance(result, CallbackSend):
for arg in result.args:
if isinstance(arg, TaskResult):
strategy.normalize_task_result(arg)
break
strategy._tqm.send_callback(result.method_name, *result.args, **result.kwargs)
elif isinstance(result, TaskResult):
strategy.normalize_task_result(result)
with strategy._results_lock:
strategy._results.append(result)
elif isinstance(result, PromptSend):
try:
value = display.prompt_until(
result.prompt,
private=result.private,
seconds=result.seconds,
complete_input=result.complete_input,
interrupt_input=result.interrupt_input,
)
except AnsibleError as e:
value = e
except BaseException as e:
# relay unexpected errors so bugs in display are reported and don't cause workers to hang
try:
raise AnsibleError(f"{e}") from e
except AnsibleError as e:
value = e
strategy._workers[result.worker_id].worker_queue.put(value)
else:
display.warning('Received an invalid object (%s) in the result queue: %r' % (type(result), result))
except (IOError, EOFError):
break
except queue.Empty:
pass
def debug_closure(func):
"""Closure to wrap ``StrategyBase._process_pending_results`` and invoke the task debugger"""
@functools.wraps(func)
def inner(self, iterator, one_pass=False, max_passes=None):
status_to_stats_map = (
('is_failed', 'failures'),
('is_unreachable', 'dark'),
('is_changed', 'changed'),
('is_skipped', 'skipped'),
)
# We don't know the host yet, copy the previous states, for lookup after we process new results
prev_host_states = iterator.host_states.copy()
results = func(self, iterator, one_pass=one_pass, max_passes=max_passes)
_processed_results = []
for result in results:
task = result._task
host = result._host
_queued_task_args = self._queued_task_cache.pop((host.name, task._uuid), None)
task_vars = _queued_task_args['task_vars']
play_context = _queued_task_args['play_context']
# Try to grab the previous host state, if it doesn't exist use get_host_state to generate an empty state
try:
prev_host_state = prev_host_states[host.name]
except KeyError:
prev_host_state = iterator.get_host_state(host)
while result.needs_debugger(globally_enabled=self.debugger_active):
next_action = NextAction()
dbg = Debugger(task, host, task_vars, play_context, result, next_action)
dbg.cmdloop()
if next_action.result == NextAction.REDO:
# rollback host state
self._tqm.clear_failed_hosts()
if task.run_once and iterator._play.strategy in add_internal_fqcns(('linear',)) and result.is_failed():
for host_name, state in prev_host_states.items():
if host_name == host.name:
continue
iterator.set_state_for_host(host_name, state)
iterator._play._removed_hosts.remove(host_name)
iterator.set_state_for_host(host.name, prev_host_state)
for method, what in status_to_stats_map:
if getattr(result, method)():
self._tqm._stats.decrement(what, host.name)
self._tqm._stats.decrement('ok', host.name)
# redo
self._queue_task(host, task, task_vars, play_context)
_processed_results.extend(debug_closure(func)(self, iterator, one_pass))
break
elif next_action.result == NextAction.CONTINUE:
_processed_results.append(result)
break
elif next_action.result == NextAction.EXIT:
# Matches KeyboardInterrupt from bin/ansible
sys.exit(99)
else:
_processed_results.append(result)
return _processed_results
return inner
class StrategyBase:
'''
This is the base class for strategy plugins, which contains some common
code useful to all strategies like running handlers, cleanup actions, etc.
'''
# by default, strategies should support throttling but we allow individual
# strategies to disable this and either forego supporting it or managing
# the throttling internally (as `free` does)
ALLOW_BASE_THROTTLING = True
def __init__(self, tqm):
self._tqm = tqm
self._inventory = tqm.get_inventory()
self._workers = tqm._workers
self._variable_manager = tqm.get_variable_manager()
self._loader = tqm.get_loader()
self._final_q = tqm._final_q
self._step = context.CLIARGS.get('step', False)
self._diff = context.CLIARGS.get('diff', False)
# the task cache is a dictionary of tuples of (host.name, task._uuid)
# used to find the original task object of in-flight tasks and to store
# the task args/vars and play context info used to queue the task.
self._queued_task_cache = {}
# Backwards compat: self._display isn't really needed, just import the global display and use that.
self._display = display
# internal counters
self._pending_results = 0
self._cur_worker = 0
# this dictionary is used to keep track of hosts that have
# outstanding tasks still in queue
self._blocked_hosts = dict()
self._results = deque()
self._results_lock = threading.Condition(threading.Lock())
self._worker_queues = dict()
# create the result processing thread for reading results in the background
self._results_thread = threading.Thread(target=results_thread_main, args=(self,))
self._results_thread.daemon = True
self._results_thread.start()
# holds the list of active (persistent) connections to be shutdown at
# play completion
self._active_connections = dict()
# Caches for get_host calls, to avoid calling excessively
# These values should be set at the top of the ``run`` method of each
# strategy plugin. Use ``_set_hosts_cache`` to set these values
self._hosts_cache = []
self._hosts_cache_all = []
self.debugger_active = C.ENABLE_TASK_DEBUGGER
def _set_hosts_cache(self, play, refresh=True):
"""Responsible for setting _hosts_cache and _hosts_cache_all
See comment in ``__init__`` for the purpose of these caches
"""
if not refresh and all((self._hosts_cache, self._hosts_cache_all)):
return
if not play.finalized and Templar(None).is_template(play.hosts):
_pattern = 'all'
else:
_pattern = play.hosts or 'all'
self._hosts_cache_all = [h.name for h in self._inventory.get_hosts(pattern=_pattern, ignore_restrictions=True)]
self._hosts_cache = [h.name for h in self._inventory.get_hosts(play.hosts, order=play.order)]
def cleanup(self):
# close active persistent connections
for sock in self._active_connections.values():
try:
conn = Connection(sock)
conn.reset()
except ConnectionError as e:
# most likely socket is already closed
display.debug("got an error while closing persistent connection: %s" % e)
self._final_q.put(_sentinel)
self._results_thread.join()
def run(self, iterator, play_context, result=0):
# execute one more pass through the iterator without peeking, to
# make sure that all of the hosts are advanced to their final task.
# This should be safe, as everything should be IteratingStates.COMPLETE by
# this point, though the strategy may not advance the hosts itself.
for host in self._hosts_cache:
if host not in self._tqm._unreachable_hosts:
try:
iterator.get_next_task_for_host(self._inventory.hosts[host])
except KeyError:
iterator.get_next_task_for_host(self._inventory.get_host(host))
# return the appropriate code, depending on the status of the hosts after the run
if not isinstance(result, bool) and result != self._tqm.RUN_OK:
return result
elif len(self._tqm._unreachable_hosts.keys()) > 0:
return self._tqm.RUN_UNREACHABLE_HOSTS
elif len(iterator.get_failed_hosts()) > 0:
return self._tqm.RUN_FAILED_HOSTS
else:
return self._tqm.RUN_OK
def get_hosts_remaining(self, play):
self._set_hosts_cache(play, refresh=False)
ignore = set(self._tqm._failed_hosts).union(self._tqm._unreachable_hosts)
return [host for host in self._hosts_cache if host not in ignore]
def get_failed_hosts(self, play):
self._set_hosts_cache(play, refresh=False)
return [host for host in self._hosts_cache if host in self._tqm._failed_hosts]
def add_tqm_variables(self, vars, play):
'''
Base class method to add extra variables/information to the list of task
vars sent through the executor engine regarding the task queue manager state.
'''
vars['ansible_current_hosts'] = self.get_hosts_remaining(play)
vars['ansible_failed_hosts'] = self.get_failed_hosts(play)
def _queue_task(self, host, task, task_vars, play_context):
''' handles queueing the task up to be sent to a worker '''
display.debug("entering _queue_task() for %s/%s" % (host.name, task.action))
# Add a write lock for tasks.
# Maybe this should be added somewhere further up the call stack but
# this is the earliest in the code where we have task (1) extracted
# into its own variable and (2) there's only a single code path
# leading to the module being run. This is called by two
# functions: linear.py::run(), and
# free.py::run() so we'd have to add to both to do it there.
# The next common higher level is __init__.py::run() and that has
# tasks inside of play_iterator so we'd have to extract them to do it
# there.
if task.action not in action_write_locks.action_write_locks:
display.debug('Creating lock for %s' % task.action)
action_write_locks.action_write_locks[task.action] = Lock()
# create a templar and template things we need later for the queuing process
templar = Templar(loader=self._loader, variables=task_vars)
try:
throttle = int(templar.template(task.throttle))
except Exception as e:
raise AnsibleError("Failed to convert the throttle value to an integer.", obj=task._ds, orig_exc=e)
# and then queue the new task
try:
# Determine the "rewind point" of the worker list. This means we start
# iterating over the list of workers until the end of the list is found.
# Normally, that is simply the length of the workers list (as determined
# by the forks or serial setting), however a task/block/play may "throttle"
# that limit down.
rewind_point = len(self._workers)
if throttle > 0 and self.ALLOW_BASE_THROTTLING:
if task.run_once:
display.debug("Ignoring 'throttle' as 'run_once' is also set for '%s'" % task.get_name())
else:
if throttle <= rewind_point:
display.debug("task: %s, throttle: %d" % (task.get_name(), throttle))
rewind_point = throttle
queued = False
starting_worker = self._cur_worker
while True:
if self._cur_worker >= rewind_point:
self._cur_worker = 0
worker_prc = self._workers[self._cur_worker]
if worker_prc is None or not worker_prc.is_alive():
self._queued_task_cache[(host.name, task._uuid)] = {
'host': host,
'task': task,
'task_vars': task_vars,
'play_context': play_context
}
# Pass WorkerProcess its strategy worker number so it can send an identifier along with intra-task requests
worker_prc = WorkerProcess(
self._final_q, task_vars, host, task, play_context, self._loader, self._variable_manager, plugin_loader, self._cur_worker,
)
self._workers[self._cur_worker] = worker_prc
self._tqm.send_callback('v2_runner_on_start', host, task)
worker_prc.start()
display.debug("worker is %d (out of %d available)" % (self._cur_worker + 1, len(self._workers)))
queued = True
self._cur_worker += 1
if self._cur_worker >= rewind_point:
self._cur_worker = 0
if queued:
break
elif self._cur_worker == starting_worker:
time.sleep(0.0001)
self._pending_results += 1
except (EOFError, IOError, AssertionError) as e:
# most likely an abort
display.debug("got an error while queuing: %s" % e)
return
display.debug("exiting _queue_task() for %s/%s" % (host.name, task.action))
def get_task_hosts(self, iterator, task_host, task):
if task.run_once:
host_list = [host for host in self._hosts_cache if host not in self._tqm._unreachable_hosts]
else:
host_list = [task_host.name]
return host_list
def get_delegated_hosts(self, result, task):
host_name = result.get('_ansible_delegated_vars', {}).get('ansible_delegated_host', None)
return [host_name or task.delegate_to]
def _set_always_delegated_facts(self, result, task):
"""Sets host facts for ``delegate_to`` hosts for facts that should
always be delegated
This operation mutates ``result`` to remove the always delegated facts
See ``ALWAYS_DELEGATE_FACT_PREFIXES``
"""
if task.delegate_to is None:
return
facts = result['ansible_facts']
always_keys = set()
_add = always_keys.add
for fact_key in facts:
for always_key in ALWAYS_DELEGATE_FACT_PREFIXES:
if fact_key.startswith(always_key):
_add(fact_key)
if always_keys:
_pop = facts.pop
always_facts = {
'ansible_facts': dict((k, _pop(k)) for k in list(facts) if k in always_keys)
}
host_list = self.get_delegated_hosts(result, task)
_set_host_facts = self._variable_manager.set_host_facts
for target_host in host_list:
_set_host_facts(target_host, always_facts)
def normalize_task_result(self, task_result):
"""Normalize a TaskResult to reference actual Host and Task objects
when only given the ``Host.name``, or the ``Task._uuid``
Only the ``Host.name`` and ``Task._uuid`` are commonly sent back from
the ``TaskExecutor`` or ``WorkerProcess`` due to performance concerns
Mutates the original object
"""
if isinstance(task_result._host, string_types):
# If the value is a string, it is ``Host.name``
task_result._host = self._inventory.get_host(to_text(task_result._host))
if isinstance(task_result._task, string_types):
# If the value is a string, it is ``Task._uuid``
queue_cache_entry = (task_result._host.name, task_result._task)
try:
found_task = self._queued_task_cache[queue_cache_entry]['task']
except KeyError:
# This should only happen due to an implicit task created by the
# TaskExecutor, restrict this behavior to the explicit use case
# of an implicit async_status task
if task_result._task_fields.get('action') != 'async_status':
raise
original_task = Task()
else:
original_task = found_task.copy(exclude_parent=True, exclude_tasks=True)
original_task._parent = found_task._parent
original_task.from_attrs(task_result._task_fields)
task_result._task = original_task
return task_result
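# Handler lookup: names are checked first (including role-qualified forms),
# iterating in reverse declaration order so the last handler loaded with a
# given name wins; 'listen' topics are matched afterwards, deduplicated by
# handler name.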
def search_handlers_by_notification(self, notification: str, iterator: PlayIterator) -> t.Generator[Handler, None, None]:
templar = Templar(None)
handlers = [h for b in reversed(iterator._play.handlers) for h in b.block]
# iterate in reversed order since last handler loaded with the same name wins
for handler in handlers:
if not handler.name:
continue
if not handler.cached_name:
if templar.is_template(handler.name):
templar.available_variables = self._variable_manager.get_vars(
play=iterator._play,
task=handler,
_hosts=self._hosts_cache,
_hosts_all=self._hosts_cache_all
)
try:
handler.name = templar.template(handler.name)
except (UndefinedError, AnsibleUndefinedVariable) as e:
# We skip this handler due to the fact that it may be using
# a variable in the name that was conditionally included via
# set_fact or some other method, and we don't want to error
# out unnecessarily
if not handler.listen:
display.warning(
"Handler '%s' is unusable because it has no listen topics and "
"the name could not be templated (host-specific variables are "
"not supported in handler names). The error: %s" % (handler.name, to_text(e))
)
continue
handler.cached_name = True
# first we check with the full result of get_name(), which may
# include the role name (if the handler is from a role). If that
# is not found, we resort to the simple name field, which doesn't
# have anything extra added to it.
if notification in {
handler.name,
handler.get_name(include_role_fqcn=False),
handler.get_name(include_role_fqcn=True),
}:
yield handler
break
templar.available_variables = {}
seen = []
for handler in handlers:
if listeners := handler.listen:
if notification in handler.get_validated_value(
'listen',
handler.fattributes.get('listen'),
listeners,
templar,
):
if handler.name and handler.name in seen:
continue
seen.append(handler.name)
yield handler
@debug_closure
def _process_pending_results(self, iterator, one_pass=False, max_passes=None):
'''
Reads results off the final queue and takes appropriate action
based on the result (executing callbacks, updating state, etc.).
'''
ret_results = []
cur_pass = 0
while True:
try:
self._results_lock.acquire()
task_result = self._results.popleft()
except IndexError:
break
finally:
self._results_lock.release()
original_host = task_result._host
original_task = task_result._task
# all host status messages contain 2 entries: (msg, task_result)
role_ran = False
if task_result.is_failed():
role_ran = True
ignore_errors = original_task.ignore_errors
if not ignore_errors:
# save the current state before failing it for later inspection
state_when_failed = iterator.get_state_for_host(original_host.name)
display.debug("marking %s as failed" % original_host.name)
if original_task.run_once:
# if we're using run_once, we have to fail every host here
for h in self._inventory.get_hosts(iterator._play.hosts):
if h.name not in self._tqm._unreachable_hosts:
iterator.mark_host_failed(h)
else:
iterator.mark_host_failed(original_host)
state, dummy = iterator.get_next_task_for_host(original_host, peek=True)
if iterator.is_failed(original_host) and state and state.run_state == IteratingStates.COMPLETE:
self._tqm._failed_hosts[original_host.name] = True
# if we're iterating on the rescue portion of a block then
# we save the failed task in a special var for use
# within the rescue/always
if iterator.is_any_block_rescuing(state_when_failed):
self._tqm._stats.increment('rescued', original_host.name)
iterator._play._removed_hosts.remove(original_host.name)
self._variable_manager.set_nonpersistent_facts(
original_host.name,
dict(
ansible_failed_task=wrap_var(original_task.serialize()),
ansible_failed_result=task_result._result,
),
)
else:
self._tqm._stats.increment('failures', original_host.name)
else:
self._tqm._stats.increment('ok', original_host.name)
self._tqm._stats.increment('ignored', original_host.name)
if 'changed' in task_result._result and task_result._result['changed']:
self._tqm._stats.increment('changed', original_host.name)
self._tqm.send_callback('v2_runner_on_failed', task_result, ignore_errors=ignore_errors)
elif task_result.is_unreachable():
ignore_unreachable = original_task.ignore_unreachable
if not ignore_unreachable:
self._tqm._unreachable_hosts[original_host.name] = True
iterator._play._removed_hosts.append(original_host.name)
self._tqm._stats.increment('dark', original_host.name)
else:
self._tqm._stats.increment('ok', original_host.name)
self._tqm._stats.increment('ignored', original_host.name)
self._tqm.send_callback('v2_runner_on_unreachable', task_result)
elif task_result.is_skipped():
self._tqm._stats.increment('skipped', original_host.name)
self._tqm.send_callback('v2_runner_on_skipped', task_result)
else:
role_ran = True
if original_task.loop:
# this task had a loop, and has more than one result, so
# loop over all of them instead of a single result
result_items = task_result._result.get('results', [])
else:
result_items = [task_result._result]
for result_item in result_items:
if '_ansible_notify' in result_item and task_result.is_changed():
# only ensure that notified handlers exist; if so, save the notifications for
# when handlers are actually flushed so the last defined handlers are executed,
# otherwise either error or warn depending on the setting
host_state = iterator.get_state_for_host(original_host.name)
for notification in result_item['_ansible_notify']:
for handler in self.search_handlers_by_notification(notification, iterator):
if host_state.run_state == IteratingStates.HANDLERS:
# we're currently iterating handlers, so we need to expand this now
if handler.notify_host(original_host):
# NOTE even with notifications deduplicated this can still happen in case of handlers being
# notified multiple times using different names, like role name or fqcn
self._tqm.send_callback('v2_playbook_on_notify', handler, original_host)
else:
iterator.add_notification(original_host.name, notification)
display.vv(f"Notification for handler {notification} has been saved.")
break
else:
msg = (
f"The requested handler '{notification}' was not found in either the main handlers"
" list nor in the listening handlers list"
)
if C.ERROR_ON_MISSING_HANDLER:
raise AnsibleError(msg)
else:
display.warning(msg)
if 'add_host' in result_item:
# this task added a new host (add_host module)
new_host_info = result_item.get('add_host', dict())
self._inventory.add_dynamic_host(new_host_info, result_item)
# ensure host is available for subsequent plays
if result_item.get('changed') and new_host_info['host_name'] not in self._hosts_cache_all:
self._hosts_cache_all.append(new_host_info['host_name'])
elif 'add_group' in result_item:
# this task added a new group (group_by module)
self._inventory.add_dynamic_group(original_host, result_item)
if 'add_host' in result_item or 'add_group' in result_item:
item_vars = _get_item_vars(result_item, original_task)
found_task_vars = self._queued_task_cache.get((original_host.name, task_result._task._uuid))['task_vars']
if item_vars:
all_task_vars = combine_vars(found_task_vars, item_vars)
else:
all_task_vars = found_task_vars
all_task_vars[original_task.register] = wrap_var(result_item)
post_process_whens(result_item, original_task, Templar(self._loader), all_task_vars)
if original_task.loop or original_task.loop_with:
new_item_result = TaskResult(
task_result._host,
task_result._task,
result_item,
task_result._task_fields,
)
self._tqm.send_callback('v2_runner_item_on_ok', new_item_result)
if result_item.get('changed', False):
task_result._result['changed'] = True
if result_item.get('failed', False):
task_result._result['failed'] = True
if 'ansible_facts' in result_item and original_task.action not in C._ACTION_DEBUG:
# if delegated fact and we are delegating facts, we need to change target host for them
if original_task.delegate_to is not None and original_task.delegate_facts:
host_list = self.get_delegated_hosts(result_item, original_task)
else:
# Set facts that should always be on the delegated hosts
self._set_always_delegated_facts(result_item, original_task)
host_list = self.get_task_hosts(iterator, original_host, original_task)
if original_task.action in C._ACTION_INCLUDE_VARS:
for (var_name, var_value) in result_item['ansible_facts'].items():
# find the host we're actually referring to here, which may
# be a host that is not really in inventory at all
for target_host in host_list:
self._variable_manager.set_host_variable(target_host, var_name, var_value)
else:
cacheable = result_item.pop('_ansible_facts_cacheable', False)
for target_host in host_list:
# so set_fact is a misnomer but 'cacheable = true' was meant to create an 'actual fact'
# to avoid issues with precedence and confusion with set_fact normal operation,
# we set BOTH fact and nonpersistent_facts (aka hostvar)
# when fact is retrieved from cache in subsequent operations it will have the lower precedence,
# but for playbook setting it the 'higher' precedence is kept
is_set_fact = original_task.action in C._ACTION_SET_FACT
if not is_set_fact or cacheable:
self._variable_manager.set_host_facts(target_host, result_item['ansible_facts'].copy())
if is_set_fact:
self._variable_manager.set_nonpersistent_facts(target_host, result_item['ansible_facts'].copy())
if 'ansible_stats' in result_item and 'data' in result_item['ansible_stats'] and result_item['ansible_stats']['data']:
if 'per_host' not in result_item['ansible_stats'] or result_item['ansible_stats']['per_host']:
host_list = self.get_task_hosts(iterator, original_host, original_task)
else:
host_list = [None]
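# a None host signals below that the custom stats are global, not per-host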
data = result_item['ansible_stats']['data']
aggregate = 'aggregate' in result_item['ansible_stats'] and result_item['ansible_stats']['aggregate']
for myhost in host_list:
for k in data.keys():
if aggregate:
self._tqm._stats.update_custom_stats(k, data[k], myhost)
else:
self._tqm._stats.set_custom_stats(k, data[k], myhost)
if 'diff' in task_result._result:
if self._diff or getattr(original_task, 'diff', False):
self._tqm.send_callback('v2_on_file_diff', task_result)
if not isinstance(original_task, TaskInclude):
self._tqm._stats.increment('ok', original_host.name)
if 'changed' in task_result._result and task_result._result['changed']:
self._tqm._stats.increment('changed', original_host.name)
# finally, send the ok for this task
self._tqm.send_callback('v2_runner_on_ok', task_result)
# register final results
if original_task.register:
if not isidentifier(original_task.register):
raise AnsibleError("Invalid variable name in 'register' specified: '%s'" % original_task.register)
host_list = self.get_task_hosts(iterator, original_host, original_task)
clean_copy = strip_internal_keys(module_response_deepcopy(task_result._result))
if 'invocation' in clean_copy:
del clean_copy['invocation']
for target_host in host_list:
self._variable_manager.set_nonpersistent_facts(target_host, {original_task.register: clean_copy})
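# registered results are stored as non-persistent facts so they apply for
# the remainder of the run without being written to any fact cache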
self._pending_results -= 1
if original_host.name in self._blocked_hosts:
del self._blocked_hosts[original_host.name]
# If this is a role task, mark the parent role as being run (if
# the task was ok or failed, but not skipped or unreachable)
if original_task._role is not None and role_ran: # TODO: and original_task.action not in C._ACTION_INCLUDE_ROLE:?
# lookup the role in the role cache to make sure we're dealing
# with the correct object and mark it as executed
role_obj = self._get_cached_role(original_task, iterator._play)
role_obj._had_task_run[original_host.name] = True
ret_results.append(task_result)
if isinstance(original_task, Handler):
for handler in (h for b in iterator._play.handlers for h in b.block if h._uuid == original_task._uuid):
handler.remove_host(original_host)
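# a handler tracks the hosts it was notified for; dropping the host here
# prevents the same notification from firing the handler for it again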
if one_pass or max_passes is not None and (cur_pass + 1) >= max_passes:
break
cur_pass += 1
return ret_results
def _wait_on_pending_results(self, iterator):
'''
Wait for the shared counter to drop to zero, using a short sleep
between checks to ensure we don't spin lock
'''
ret_results = []
display.debug("waiting for pending results...")
while self._pending_results > 0 and not self._tqm._terminated:
if self._tqm.has_dead_workers():
raise AnsibleError("A worker was found in a dead state")
results = self._process_pending_results(iterator)
ret_results.extend(results)
if self._pending_results > 0:
time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL)
display.debug("no more pending results, returning what we have")
return ret_results
def _copy_included_file(self, included_file):
'''
A proven safe and performant way to create a copy of an included file
'''
ti_copy = included_file._task.copy(exclude_parent=True)
ti_copy._parent = included_file._task._parent
temp_vars = ti_copy.vars | included_file._vars
ti_copy.vars = temp_vars
return ti_copy
def _load_included_file(self, included_file, iterator, is_handler=False):
'''
Loads an included YAML file of tasks, applying the optional set of variables.
Raises AnsibleError exception in case of a failure during including a file,
in such case the caller is responsible for marking the host(s) as failed
using PlayIterator.mark_host_failed().
'''
display.debug("loading included file: %s" % included_file._filename)
try:
data = self._loader.load_from_file(included_file._filename)
if data is None:
return []
elif not isinstance(data, list):
raise AnsibleError("included task files must contain a list of tasks")
ti_copy = self._copy_included_file(included_file)
block_list = load_list_of_blocks(
data,
play=iterator._play,
parent_block=ti_copy.build_parent_block(),
role=included_file._task._role,
use_handlers=is_handler,
loader=self._loader,
variable_manager=self._variable_manager,
)
# since we skip incrementing the stats when the task result is
# first processed, we do so now for each host in the list
for host in included_file._hosts:
self._tqm._stats.increment('ok', host.name)
except AnsibleParserError:
raise
except AnsibleError as e:
if isinstance(e, AnsibleFileNotFound):
reason = "Could not find or access '%s' on the Ansible Controller." % to_text(e.file_name)
else:
reason = to_text(e)
for r in included_file._results:
r._result['failed'] = True
for host in included_file._hosts:
tr = TaskResult(host=host, task=included_file._task, return_data=dict(failed=True, reason=reason))
self._tqm._stats.increment('failures', host.name)
self._tqm.send_callback('v2_runner_on_failed', tr)
raise AnsibleError(reason) from e
# finally, send the callback and return the list of blocks loaded
self._tqm.send_callback('v2_playbook_on_include', included_file)
display.debug("done processing included file")
return block_list
def _take_step(self, task, host=None):
ret = False
msg = u'Perform task: %s ' % task
if host:
msg += u'on %s ' % host
msg += u'(N)o/(y)es/(c)ontinue: '
resp = display.prompt(msg)
if resp.lower() in ['y', 'yes']:
display.debug("User ran task")
ret = True
elif resp.lower() in ['c', 'continue']:
display.debug("User ran task and canceled step mode")
self._step = False
ret = True
else:
display.debug("User skipped task")
display.banner(msg)
return ret
def _cond_not_supported_warn(self, task_name):
display.warning("%s task does not support when conditional" % task_name)
def _execute_meta(self, task, play_context, iterator, target_host):
# meta tasks store their args in the _raw_params field of args,
# since they do not use k=v pairs, so get that
meta_action = task.args.get('_raw_params')
def _evaluate_conditional(h):
all_vars = self._variable_manager.get_vars(play=iterator._play, host=h, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
templar = Templar(loader=self._loader, variables=all_vars)
return task.evaluate_conditional(templar, all_vars)
skipped = False
msg = meta_action
skip_reason = '%s conditional evaluated to False' % meta_action
if isinstance(task, Handler):
self._tqm.send_callback('v2_playbook_on_handler_task_start', task)
else:
self._tqm.send_callback('v2_playbook_on_task_start', task, is_conditional=False)
# These don't support "when" conditionals
if meta_action in ('noop', 'refresh_inventory', 'reset_connection') and task.when:
self._cond_not_supported_warn(meta_action)
if meta_action == 'noop':
msg = "noop"
elif meta_action == 'flush_handlers':
if _evaluate_conditional(target_host):
host_state = iterator.get_state_for_host(target_host.name)
# actually notify proper handlers based on all notifications up to this point
for notification in list(host_state.handler_notifications):
for handler in self.search_handlers_by_notification(notification, iterator):
if handler.notify_host(target_host):
# NOTE even with notifications deduplicated this can still happen in case of handlers being
# notified multiple times using different names, like role name or fqcn
self._tqm.send_callback('v2_playbook_on_notify', handler, target_host)
iterator.clear_notification(target_host.name, notification)
if host_state.run_state == IteratingStates.HANDLERS:
raise AnsibleError('flush_handlers cannot be used as a handler')
if target_host.name not in self._tqm._unreachable_hosts:
host_state.pre_flushing_run_state = host_state.run_state
host_state.run_state = IteratingStates.HANDLERS
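# pre_flushing_run_state lets the iterator restore this host's previous
# position once all notified handlers have been processed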
msg = "triggered running handlers for %s" % target_host.name
else:
skipped = True
skip_reason += ', not running handlers for %s' % target_host.name
elif meta_action == 'refresh_inventory':
self._inventory.refresh_inventory()
self._set_hosts_cache(iterator._play)
msg = "inventory successfully refreshed"
elif meta_action == 'clear_facts':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
hostname = host.get_name()
self._variable_manager.clear_facts(hostname)
msg = "facts cleared"
else:
skipped = True
skip_reason += ', not clearing facts and fact cache for %s' % target_host.name
elif meta_action == 'clear_host_errors':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
self._tqm._failed_hosts.pop(host.name, False)
self._tqm._unreachable_hosts.pop(host.name, False)
iterator.clear_host_errors(host)
msg = "cleared host errors"
else:
skipped = True
skip_reason += ', not clearing host error state for %s' % target_host.name
elif meta_action == 'end_batch':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
if host.name not in self._tqm._unreachable_hosts:
iterator.set_run_state_for_host(host.name, IteratingStates.COMPLETE)
msg = "ending batch"
else:
skipped = True
skip_reason += ', continuing current batch'
elif meta_action == 'end_play':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
if host.name not in self._tqm._unreachable_hosts:
iterator.set_run_state_for_host(host.name, IteratingStates.COMPLETE)
# end_play is used in PlaybookExecutor/TQM to indicate that
# the whole play is supposed to be ended as opposed to just a batch
iterator.end_play = True
msg = "ending play"
else:
skipped = True
skip_reason += ', continuing play'
elif meta_action == 'end_host':
if _evaluate_conditional(target_host):
iterator.set_run_state_for_host(target_host.name, IteratingStates.COMPLETE)
iterator._play._removed_hosts.append(target_host.name)
msg = "ending play for %s" % target_host.name
else:
skipped = True
skip_reason += ", continuing execution for %s" % target_host.name
# TODO: Nix msg here? Left for historical reasons, but skip_reason exists now.
msg = "end_host conditional evaluated to false, continuing execution for %s" % target_host.name
elif meta_action == 'role_complete':
# Allow users to use this in a play as reported in https://github.com/ansible/ansible/issues/22286?
# How would this work with allow_duplicates??
if task.implicit:
role_obj = self._get_cached_role(task, iterator._play)
if target_host.name in role_obj._had_task_run:
role_obj._completed[target_host.name] = True
msg = 'role_complete for %s' % target_host.name
elif meta_action == 'reset_connection':
all_vars = self._variable_manager.get_vars(play=iterator._play, host=target_host, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
templar = Templar(loader=self._loader, variables=all_vars)
# apply the given task's information to the connection info,
# which may override some fields already set by the play or
# the options specified on the command line
play_context = play_context.set_task_and_variable_override(task=task, variables=all_vars, templar=templar)
# fields set from the play/task may be based on variables, so we have to
# do the same kind of post validation step on it here before we use it.
play_context.post_validate(templar=templar)
# now that the play context is finalized, if the remote_addr is not set
# default to using the host's address field as the remote address
if not play_context.remote_addr:
play_context.remote_addr = target_host.address
# We also add "magic" variables back into the variables dict to make sure
# a certain subset of variables exist. This 'mostly' works here because meta
# disregards the loop, but should not really use play_context at all
play_context.update_vars(all_vars)
if target_host in self._active_connections:
connection = Connection(self._active_connections[target_host])
del self._active_connections[target_host]
else:
connection = plugin_loader.connection_loader.get(play_context.connection, play_context, os.devnull)
connection.set_options(task_keys=task.dump_attrs(), var_options=all_vars)
play_context.set_attributes_from_plugin(connection)
if connection:
try:
connection.reset()
msg = 'reset connection'
except ConnectionError as e:
# most likely socket is already closed
display.debug("got an error while closing persistent connection: %s" % e)
else:
msg = 'no connection, nothing to reset'
else:
raise AnsibleError("invalid meta action requested: %s" % meta_action, obj=task._ds)
result = {'msg': msg}
if skipped:
result['skipped'] = True
result['skip_reason'] = skip_reason
else:
result['changed'] = False
if not task.implicit:
header = skip_reason if skipped else msg
display.vv(f"META: {header}")
if isinstance(task, Handler):
task.remove_host(target_host)
res = TaskResult(target_host, task, result)
if skipped:
self._tqm.send_callback('v2_runner_on_skipped', res)
return [res]
def _get_cached_role(self, task, play):
role_path = task._role.get_role_path()
role_cache = play.role_cache[role_path]
try:
idx = role_cache.index(task._role)
return role_cache[idx]
except ValueError:
raise AnsibleError(f'Cannot locate {task._role.get_name()} in role cache')
def get_hosts_left(self, iterator):
''' returns list of available hosts for this iterator by filtering out unreachables '''
hosts_left = []
for host in self._hosts_cache:
if host not in self._tqm._unreachable_hosts:
try:
hosts_left.append(self._inventory.hosts[host])
except KeyError:
hosts_left.append(self._inventory.get_host(host))
return hosts_left
def update_active_connections(self, results):
''' updates the current active persistent connections '''
for r in results:
if 'args' in r._task_fields:
socket_path = r._task_fields['args'].get('_ansible_socket')
if socket_path:
if r._host not in self._active_connections:
self._active_connections[r._host] = socket_path
class NextAction(object):
""" The next action after an interpreter's exit. """
REDO = 1
CONTINUE = 2
EXIT = 3
def __init__(self, result=EXIT):
self.result = result
class Debugger(cmd.Cmd):
prompt_continuous = '> ' # multiple lines
def __init__(self, task, host, task_vars, play_context, result, next_action):
# cmd.Cmd is old-style class
cmd.Cmd.__init__(self)
self.prompt = '[%s] %s (debug)> ' % (host, task)
self.intro = None
self.scope = {}
self.scope['task'] = task
self.scope['task_vars'] = task_vars
self.scope['host'] = host
self.scope['play_context'] = play_context
self.scope['result'] = result
self.next_action = next_action
def cmdloop(self):
try:
cmd.Cmd.cmdloop(self)
except KeyboardInterrupt:
pass
do_h = cmd.Cmd.do_help
def do_EOF(self, args):
"""Quit"""
return self.do_quit(args)
def do_quit(self, args):
"""Quit"""
display.display('User interrupted execution')
self.next_action.result = NextAction.EXIT
return True
do_q = do_quit
def do_continue(self, args):
"""Continue to next result"""
self.next_action.result = NextAction.CONTINUE
return True
do_c = do_continue
def do_redo(self, args):
"""Schedule task for re-execution. The re-execution may not be the next result"""
self.next_action.result = NextAction.REDO
return True
do_r = do_redo
def do_update_task(self, args):
"""Recreate the task from ``task._ds``, and template with updated ``task_vars``"""
templar = Templar(None, variables=self.scope['task_vars'])
task = self.scope['task']
task = task.load_data(task._ds)
task.post_validate(templar)
self.scope['task'] = task
do_u = do_update_task
def evaluate(self, args):
try:
return eval(args, globals(), self.scope)
except Exception:
t, v = sys.exc_info()[:2]
if isinstance(t, str):
exc_type_name = t
else:
exc_type_name = t.__name__
display.display('***%s:%s' % (exc_type_name, repr(v)))
raise
def do_pprint(self, args):
"""Pretty Print"""
try:
result = self.evaluate(args)
display.display(pprint.pformat(result))
except Exception:
pass
do_p = do_pprint
def execute(self, args):
try:
code = compile(args + '\n', '<stdin>', 'single')
exec(code, globals(), self.scope)
except Exception:
t, v = sys.exc_info()[:2]
if isinstance(t, str):
exc_type_name = t
else:
exc_type_name = t.__name__
display.display('***%s:%s' % (exc_type_name, repr(v)))
raise
def default(self, line):
try:
self.execute(line)
except Exception:
pass
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,666 |
Handler that should run once is played twice
|
### Summary
When calling a handler that is a task that should be run only once, the task is called twice.
### Issue Type
Bug Report
### Component Name
core
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.6]
config file = /MY_PROJECT/ansible/ansible.cfg
configured module search path = ['/MY_PROJECT/ansible/library']
ansible python module location = /home/naja/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/naja/.ansible/collections:/usr/share/ansible/collections
executable location = /home/naja/.local/bin/ansible
python version = 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /MY_PROJECT/ansible/ansible.cfg
DEFAULT_BECOME_METHOD(/MY_PROJECT/ansible/ansible.cfg) = sudo
DEFAULT_BECOME_USER(/MY_PROJECT/ansible/ansible.cfg) = root
DEFAULT_FILTER_PLUGIN_PATH(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ans>
DEFAULT_FORKS(/MY_PROJECT/ansible/ansible.cfg) = 10
DEFAULT_GATHERING(/MY_PROJECT/ansible/ansible.cfg) = explicit
DEFAULT_HASH_BEHAVIOUR(/MY_PROJECT/ansible/ansible.cfg) = merge
DEFAULT_HOST_LIST(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ansible/inve>
DEFAULT_JINJA2_NATIVE(/MY_PROJECT/ansible/ansible.cfg) = True
DEFAULT_LOAD_CALLBACK_PLUGINS(/MY_PROJECT/ansible/ansible.cfg) = True
DEFAULT_LOG_PATH(/MY_PROJECT/ansible/ansible.cfg) = /MY_PROJECT/ansible/logs/an>
DEFAULT_MODULE_PATH(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ansible/li>
DEFAULT_MODULE_UTILS_PATH(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ansi>
DEFAULT_ROLES_PATH(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ansible/rol>
DEFAULT_STDOUT_CALLBACK(/MY_PROJECT/ansible/ansible.cfg) = default.py
DEFAULT_VAULT_IDENTITY_LIST(/MY_PROJECT/ansible/ansible.cfg) = ['dts@vault/keyring-client.py', 'prod@vault/keyring-c>
HOST_KEY_CHECKING(/MY_PROJECT/ansible/ansible.cfg) = False
BECOME:
======
runas:
_____
become_user(/MY_PROJECT/ansible/ansible.cfg) = root
su:
__
become_user(/MY_PROJECT/ansible/ansible.cfg) = root
sudo:
____
become_user(/MY_PROJECT/ansible/ansible.cfg) = root
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/MY_PROJECT/ansible/ansible.cfg) = False
ssh:
___
host_key_checking(/MY_PROJECT/ansible/ansible.cfg) = False
scp_if_ssh(/MY_PROJECT/ansible/ansible.cfg) = True
timeout(/MY_PROJECT/ansible/ansible.cfg) = 60
```
### OS / Environment
Ubuntu 22.04
### Steps to Reproduce
Playbook 1 (pause will be prompted once with flush_handlers, then 2 more times at the end of the play)
```yaml
- hosts: all
handlers:
- name: Ping again
pause:
prompt: Please ping
tasks:
- name: Update something
notify: Ping again
changed_when: yes
ping:
- name: Flush handlers
meta: flush_handlers
```
Playbook 2 (ping handler will be executed twice)
```yaml
- hosts: all
handlers:
- name: Ping again
run_once: yes
ping:
tasks:
- name: Update something
notify: Ping again
changed_when: yes
ping:
```
Playbook 3 (ping handler will be executed once with flush_handlers, then 2 more times at the end of the play)
```yaml
- hosts: all
handlers:
- name: Ping again
run_once: yes
ping:
tasks:
- name: Update something
notify: Ping again
changed_when: yes
ping:
- name: Flush handlers
meta: flush_handlers
```
### Expected Results
I expect the handler to be run once, no matter the number of hosts
```
PLAY [all] ***********************************************************************************************************************************************
TASK [Update something] **********************************************************************************************************************************
changed: [host4]
changed: [host2]
changed: [host1]
changed: [host3]
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
RUNNING HANDLER [Ping again] *****************************************************************************************************************************
[Ping again]
Please ping:
^Mok: [hostX]
PLAY RECAP ***********************************************************************************************************************************************
host1 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host2 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host3 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host4 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
```console
PLAY [all] ***********************************************************************************************************************************************
TASK [Update something] **********************************************************************************************************************************
changed: [host4]
changed: [host1]
changed: [host2]
changed: [host3]
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
RUNNING HANDLER [Ping again] *****************************************************************************************************************************
[Ping again]
Please ping:
^Mok: [host2]
RUNNING HANDLER [Ping again] *****************************************************************************************************************************
[Ping again]
Please ping:
^Mok: [host4]
RUNNING HANDLER [Ping again] *****************************************************************************************************************************
[Ping again]
Please ping:
^Mok: [host3]
PLAY RECAP ***********************************************************************************************************************************************
host1 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host2 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host3 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host4 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81666
|
https://github.com/ansible/ansible/pull/81667
|
000cf1dd468a1b8db2f7db723377bd8efa909b95
|
2d5861c185fb24441e3d3919749866a6fc5c12d7
| 2023-09-08T07:56:43Z |
python
| 2023-10-03T18:43:46Z |
lib/ansible/plugins/strategy/free.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
name: free
short_description: Executes tasks without waiting for all hosts
description:
- Task execution is as fast as possible per batch as defined by C(serial) (default all).
Ansible will not wait for other hosts to finish the current task before queuing more tasks for other hosts.
All hosts are still attempted for the current task, but it prevents blocking new tasks for hosts that have already finished.
- With the free strategy, unlike the default linear strategy, a host that is slow or stuck on a specific task
won't hold up the rest of the hosts and tasks.
version_added: "2.0"
author: Ansible Core Team
'''
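# This strategy is opted into at the play level, e.g. ``strategy: free`` in
# the play definition, or globally via the ``ANSIBLE_STRATEGY`` environment
# variable.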
import time
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleParserError
from ansible.playbook.handler import Handler
from ansible.playbook.included_file import IncludedFile
from ansible.plugins.loader import action_loader
from ansible.plugins.strategy import StrategyBase
from ansible.template import Templar
from ansible.module_utils.common.text.converters import to_text
from ansible.utils.display import Display
display = Display()
class StrategyModule(StrategyBase):
# This strategy manages throttling on its own, so we don't want it done in queue_task
ALLOW_BASE_THROTTLING = False
def __init__(self, tqm):
super(StrategyModule, self).__init__(tqm)
self._host_pinned = False
def run(self, iterator, play_context):
'''
The "free" strategy is a bit more complex, in that it allows tasks to
be sent to hosts as quickly as they can be processed. This means that
some hosts may finish very quickly if the tasks they run result in little or
no work being done, versus other systems.
The algorithm used here also tries to be more "fair" when iterating
through hosts by remembering the last host in the list to be given a task
and starting the search from there as opposed to the top of the hosts
list again, which would end up favoring hosts near the beginning of the
list.
'''
# the last host to be given a task
last_host = 0
result = self._tqm.RUN_OK
# start with all workers being counted as being free
workers_free = len(self._workers)
self._set_hosts_cache(iterator._play)
if iterator._play.max_fail_percentage is not None:
display.warning("Using max_fail_percentage with the free strategy is not supported, as tasks are executed independently on each host")
work_to_do = True
while work_to_do and not self._tqm._terminated:
hosts_left = self.get_hosts_left(iterator)
if len(hosts_left) == 0:
self._tqm.send_callback('v2_playbook_on_no_hosts_remaining')
result = False
break
work_to_do = False # assume we have no more work to do
starting_host = last_host # save current position so we know when we've looped back around and need to break
# try and find an unblocked host with a task to run
host_results = []
while True:
host = hosts_left[last_host]
display.debug("next free host: %s" % host)
host_name = host.get_name()
# peek at the next task for the host, to see if there's
# anything to do for this host
(state, task) = iterator.get_next_task_for_host(host, peek=True)
display.debug("free host state: %s" % state, host=host_name)
display.debug("free host task: %s" % task, host=host_name)
                # check if there is work to do: either there is a task, or the host is still blocked, which could
                # mean that it is processing an include task and after its result is processed there might be
                # more tasks to run
if (task or self._blocked_hosts.get(host_name, False)) and not self._tqm._unreachable_hosts.get(host_name, False):
display.debug("this host has work to do", host=host_name)
# set the flag so the outer loop knows we've still found
# some work which needs to be done
work_to_do = True
if not self._tqm._unreachable_hosts.get(host_name, False) and task:
# check to see if this host is blocked (still executing a previous task)
if not self._blocked_hosts.get(host_name, False):
display.debug("getting variables", host=host_name)
task_vars = self._variable_manager.get_vars(play=iterator._play, host=host, task=task,
_hosts=self._hosts_cache,
_hosts_all=self._hosts_cache_all)
self.add_tqm_variables(task_vars, play=iterator._play)
templar = Templar(loader=self._loader, variables=task_vars)
display.debug("done getting variables", host=host_name)
try:
throttle = int(templar.template(task.throttle))
except Exception as e:
raise AnsibleError("Failed to convert the throttle value to an integer.", obj=task._ds, orig_exc=e)
if throttle > 0:
same_tasks = 0
for worker in self._workers:
if worker and worker.is_alive() and worker._task._uuid == task._uuid:
same_tasks += 1
display.debug("task: %s, same_tasks: %d" % (task.get_name(), same_tasks))
if same_tasks >= throttle:
break
# advance the host, mark the host blocked, and queue it
self._blocked_hosts[host_name] = True
iterator.set_state_for_host(host.name, state)
try:
action = action_loader.get(task.action, class_only=True, collection_list=task.collections)
except KeyError:
# we don't care here, because the action may simply not have a
# corresponding action plugin
action = None
try:
task.name = to_text(templar.template(task.name, fail_on_undefined=False), nonstring='empty')
display.debug("done templating", host=host_name)
except Exception:
# just ignore any errors during task name templating,
# we don't care if it just shows the raw name
display.debug("templating failed for some reason", host=host_name)
run_once = templar.template(task.run_once) or action and getattr(action, 'BYPASS_HOST_LOOP', False)
if run_once:
if action and getattr(action, 'BYPASS_HOST_LOOP', False):
raise AnsibleError("The '%s' module bypasses the host loop, which is currently not supported in the free strategy "
"and would instead execute for every host in the inventory list." % task.action, obj=task._ds)
else:
display.warning("Using run_once with the free strategy is not currently supported. This task will still be "
"executed for every host in the inventory list.")
# check to see if this task should be skipped, due to it being a member of a
# role which has already run (and whether that role allows duplicate execution)
if not isinstance(task, Handler) and task._role:
role_obj = self._get_cached_role(task, iterator._play)
if role_obj.has_run(host) and role_obj._metadata.allow_duplicates is False:
display.debug("'%s' skipped because role has already run" % task, host=host_name)
del self._blocked_hosts[host_name]
continue
if task.action in C._ACTION_META:
self._execute_meta(task, play_context, iterator, target_host=host)
self._blocked_hosts[host_name] = False
else:
# handle step if needed, skip meta actions as they are used internally
if not self._step or self._take_step(task, host_name):
if task.any_errors_fatal:
display.warning("Using any_errors_fatal with the free strategy is not supported, "
"as tasks are executed independently on each host")
if isinstance(task, Handler):
self._tqm.send_callback('v2_playbook_on_handler_task_start', task)
else:
self._tqm.send_callback('v2_playbook_on_task_start', task, is_conditional=False)
self._queue_task(host, task, task_vars, play_context)
# each task is counted as a worker being busy
workers_free -= 1
del task_vars
else:
display.debug("%s is blocked, skipping for now" % host_name)
# all workers have tasks to do (and the current host isn't done with the play).
# loop back to starting host and break out
if self._host_pinned and workers_free == 0 and work_to_do:
last_host = starting_host
break
# move on to the next host and make sure we
# haven't gone past the end of our hosts list
last_host += 1
if last_host > len(hosts_left) - 1:
last_host = 0
# if we've looped around back to the start, break out
if last_host == starting_host:
break
results = self._process_pending_results(iterator)
host_results.extend(results)
# each result is counted as a worker being free again
workers_free += len(results)
self.update_active_connections(results)
included_files = IncludedFile.process_include_results(
host_results,
iterator=iterator,
loader=self._loader,
variable_manager=self._variable_manager
)
if len(included_files) > 0:
all_blocks = dict((host, []) for host in hosts_left)
failed_includes_hosts = set()
for included_file in included_files:
display.debug("collecting new blocks for %s" % included_file)
is_handler = False
try:
if included_file._is_role:
new_ir = self._copy_included_file(included_file)
new_blocks, handler_blocks = new_ir.get_block_list(
play=iterator._play,
variable_manager=self._variable_manager,
loader=self._loader,
)
else:
is_handler = isinstance(included_file._task, Handler)
new_blocks = self._load_included_file(included_file, iterator=iterator, is_handler=is_handler)
                        # let PlayIterator know about any new handlers included via include_role or
                        # import_role within include_role/include_tasks
iterator.handlers = [h for b in iterator._play.handlers for h in b.block]
except AnsibleParserError:
raise
except AnsibleError as e:
if included_file._is_role:
# include_role does not have on_include callback so display the error
display.error(to_text(e), wrap_text=False)
for r in included_file._results:
r._result['failed'] = True
failed_includes_hosts.add(r._host)
continue
for new_block in new_blocks:
if is_handler:
for task in new_block.block:
task.notified_hosts = included_file._hosts[:]
final_block = new_block
else:
task_vars = self._variable_manager.get_vars(
play=iterator._play,
task=new_block.get_first_parent_include(),
_hosts=self._hosts_cache,
_hosts_all=self._hosts_cache_all,
)
final_block = new_block.filter_tagged_tasks(task_vars)
for host in hosts_left:
if host in included_file._hosts:
all_blocks[host].append(final_block)
display.debug("done collecting new blocks for %s" % included_file)
for host in failed_includes_hosts:
self._tqm._failed_hosts[host.name] = True
iterator.mark_host_failed(host)
display.debug("adding all collected blocks from %d included file(s) to iterator" % len(included_files))
for host in hosts_left:
iterator.add_tasks(host, all_blocks[host])
display.debug("done adding collected blocks to iterator")
# pause briefly so we don't spin lock
time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL)
# collect all the final results
results = self._wait_on_pending_results(iterator)
# run the base class run() method, which executes the cleanup function
# and runs any outstanding handlers which have been triggered
return super(StrategyModule, self).run(iterator, play_context, result)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,666 |
Handler that should run once is played twice
|
### Summary
When a handler that should be run only once is notified, it is executed more than once.
### Issue Type
Bug Report
### Component Name
core
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.6]
config file = /MY_PROJECT/ansible/ansible.cfg
configured module search path = ['/MY_PROJECT/ansible/library']
ansible python module location = /home/naja/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/naja/.ansible/collections:/usr/share/ansible/collections
executable location = /home/naja/.local/bin/ansible
python version = 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /MY_PROJECT/ansible/ansible.cfg
DEFAULT_BECOME_METHOD(/MY_PROJECT/ansible/ansible.cfg) = sudo
DEFAULT_BECOME_USER(/MY_PROJECT/ansible/ansible.cfg) = root
DEFAULT_FILTER_PLUGIN_PATH(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ans>
DEFAULT_FORKS(/MY_PROJECT/ansible/ansible.cfg) = 10
DEFAULT_GATHERING(/MY_PROJECT/ansible/ansible.cfg) = explicit
DEFAULT_HASH_BEHAVIOUR(/MY_PROJECT/ansible/ansible.cfg) = merge
DEFAULT_HOST_LIST(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ansible/inve>
DEFAULT_JINJA2_NATIVE(/MY_PROJECT/ansible/ansible.cfg) = True
DEFAULT_LOAD_CALLBACK_PLUGINS(/MY_PROJECT/ansible/ansible.cfg) = True
DEFAULT_LOG_PATH(/MY_PROJECT/ansible/ansible.cfg) = /MY_PROJECT/ansible/logs/an>
DEFAULT_MODULE_PATH(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ansible/li>
DEFAULT_MODULE_UTILS_PATH(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ansi>
DEFAULT_ROLES_PATH(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ansible/rol>
DEFAULT_STDOUT_CALLBACK(/MY_PROJECT/ansible/ansible.cfg) = default.py
DEFAULT_VAULT_IDENTITY_LIST(/MY_PROJECT/ansible/ansible.cfg) = ['dts@vault/keyring-client.py', 'prod@vault/keyring-c>
HOST_KEY_CHECKING(/MY_PROJECT/ansible/ansible.cfg) = False
BECOME:
======
runas:
_____
become_user(/MY_PROJECT/ansible/ansible.cfg) = root
su:
__
become_user(/MY_PROJECT/ansible/ansible.cfg) = root
sudo:
____
become_user(/MY_PROJECT/ansible/ansible.cfg) = root
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/MY_PROJECT/ansible/ansible.cfg) = False
ssh:
___
host_key_checking(/MY_PROJECT/ansible/ansible.cfg) = False
scp_if_ssh(/MY_PROJECT/ansible/ansible.cfg) = True
timeout(/MY_PROJECT/ansible/ansible.cfg) = 60
```
### OS / Environment
Ubuntu 22.04
### Steps to Reproduce
Playbook 1 (pause will be prompted once with flush_handlers, then 2 more times at the end of the play)
```yaml
- hosts: all
handlers:
- name: Ping again
pause:
prompt: Please ping
tasks:
- name: Update something
notify: Ping again
changed_when: yes
ping:
- name: Flush handlers
meta: flush_handlers
```
Playbook 2 (ping handler will be executed twice)
```yaml
- hosts: all
handlers:
- name: Ping again
run_once: yes
ping:
tasks:
- name: Update something
notify: Ping again
changed_when: yes
ping:
```
Playbook 3 (ping handler will be executed once with flush_handlers, then 2 more times at the end of the play)
```yaml
- hosts: all
handlers:
- name: Ping again
run_once: yes
ping:
tasks:
- name: Update something
notify: Ping again
changed_when: yes
ping:
- name: Flush handlers
meta: flush_handlers
```
### Expected Results
I expect the handler to be run once, no matter the number of hosts
```
PLAY [all] ***********************************************************************************************************************************************
TASK [Update something] **********************************************************************************************************************************
changed: [host4]
changed: [host2]
changed: [host1]
changed: [host3]
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
RUNNING HANDLER [Ping again] *****************************************************************************************************************************
[Ping again]
Please ping:
^Mok: [hostX]
PLAY RECAP ***********************************************************************************************************************************************
host1 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host2 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host3 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host4 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
```console
PLAY [all] ***********************************************************************************************************************************************
TASK [Update something] **********************************************************************************************************************************
changed: [host4]
changed: [host1]
changed: [host2]
changed: [host3]
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
RUNNING HANDLER [Ping again] *****************************************************************************************************************************
[Ping again]
Please ping:
^Mok: [host2]
RUNNING HANDLER [Ping again] *****************************************************************************************************************************
[Ping again]
Please ping:
^Mok: [host4]
RUNNING HANDLER [Ping again] *****************************************************************************************************************************
[Ping again]
Please ping:
^Mok: [host3]
PLAY RECAP ***********************************************************************************************************************************************
host1 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host2 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host3 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host4 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81666
|
https://github.com/ansible/ansible/pull/81667
|
000cf1dd468a1b8db2f7db723377bd8efa909b95
|
2d5861c185fb24441e3d3919749866a6fc5c12d7
| 2023-09-08T07:56:43Z |
python
| 2023-10-03T18:43:46Z |
lib/ansible/plugins/strategy/linear.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
name: linear
short_description: Executes tasks in a linear fashion
description:
- Task execution is in lockstep per host batch as defined by C(serial) (default all).
    Up to the fork limit of hosts will execute each task at the same time, then the
    next series of hosts, until the batch is done, before going on to the next task.
version_added: "2.0"
notes:
- This was the default Ansible behaviour before 'strategy plugins' were introduced in 2.0.
author: Ansible Core Team
'''
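# A minimal, hypothetical sketch of lockstep batching: with `serial: 2` below,
# hosts are processed two at a time and every host in a batch finishes a task
# before the batch moves on (host pattern and task are placeholders):
#
#   - hosts: all
#     strategy: linear   # the default; shown here for clarity
#     serial: 2
#     tasks:
#       - name: Runs on both hosts of a batch before the next task starts
#         ansible.builtin.ping: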
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleAssertionError, AnsibleParserError
from ansible.executor.play_iterator import IteratingStates, FailedStates
from ansible.module_utils.common.text.converters import to_text
from ansible.playbook.handler import Handler
from ansible.playbook.included_file import IncludedFile
from ansible.playbook.task import Task
from ansible.plugins.loader import action_loader
from ansible.plugins.strategy import StrategyBase
from ansible.template import Templar
from ansible.utils.display import Display
display = Display()
class StrategyModule(StrategyBase):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
        # used by the lockstep to indicate that handlers should be run
self._in_handlers = False
def _get_next_task_lockstep(self, hosts, iterator):
'''
Returns a list of (host, task) tuples, where the task may
be a noop task to keep the iterator in lock step across
all hosts.
'''
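        # Illustrative shape of the return value for three hosts where only
        # host1 is on the lowest common task; the others get the noop task
        # built below so every host stays on the same step:
        #   [(host1, real_task), (host2, noop_task), (host3, noop_task)]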
noop_task = Task()
noop_task.action = 'meta'
noop_task.args['_raw_params'] = 'noop'
noop_task.implicit = True
noop_task.set_loader(iterator._play._loader)
state_task_per_host = {}
for host in hosts:
state, task = iterator.get_next_task_for_host(host, peek=True)
if task is not None:
state_task_per_host[host] = state, task
if not state_task_per_host:
return [(h, None) for h in hosts]
if self._in_handlers and not any(filter(
lambda rs: rs == IteratingStates.HANDLERS,
(s.run_state for s, dummy in state_task_per_host.values()))
):
self._in_handlers = False
if self._in_handlers:
lowest_cur_handler = min(
s.cur_handlers_task for s, t in state_task_per_host.values()
if s.run_state == IteratingStates.HANDLERS
)
else:
task_uuids = [t._uuid for s, t in state_task_per_host.values()]
_loop_cnt = 0
while _loop_cnt <= 1:
try:
cur_task = iterator.all_tasks[iterator.cur_task]
except IndexError:
# pick up any tasks left after clear_host_errors
iterator.cur_task = 0
_loop_cnt += 1
else:
iterator.cur_task += 1
if cur_task._uuid in task_uuids:
break
else:
# prevent infinite loop
raise AnsibleAssertionError(
'BUG: There seems to be a mismatch between tasks in PlayIterator and HostStates.'
)
host_tasks = []
for host, (state, task) in state_task_per_host.items():
if ((self._in_handlers and lowest_cur_handler == state.cur_handlers_task) or
(not self._in_handlers and cur_task._uuid == task._uuid)):
iterator.set_state_for_host(host.name, state)
host_tasks.append((host, task))
else:
host_tasks.append((host, noop_task))
# once hosts synchronize on 'flush_handlers' lockstep enters
# '_in_handlers' phase where handlers are run instead of tasks
# until at least one host is in IteratingStates.HANDLERS
if (not self._in_handlers and cur_task.action in C._ACTION_META and
cur_task.args.get('_raw_params') == 'flush_handlers'):
self._in_handlers = True
return host_tasks
def run(self, iterator, play_context):
'''
The linear strategy is simple - get the next task and queue
it for all hosts, then wait for the queue to drain before
moving on to the next task
'''
# iterate over each task, while there is one left to run
result = self._tqm.RUN_OK
work_to_do = True
self._set_hosts_cache(iterator._play)
while work_to_do and not self._tqm._terminated:
try:
display.debug("getting the remaining hosts for this loop")
hosts_left = self.get_hosts_left(iterator)
display.debug("done getting the remaining hosts for this loop")
# queue up this task for each host in the inventory
callback_sent = False
work_to_do = False
host_tasks = self._get_next_task_lockstep(hosts_left, iterator)
# skip control
skip_rest = False
choose_step = True
# flag set if task is set to any_errors_fatal
any_errors_fatal = False
results = []
for (host, task) in host_tasks:
if not task:
continue
if self._tqm._terminated:
break
run_once = False
work_to_do = True
# check to see if this task should be skipped, due to it being a member of a
# role which has already run (and whether that role allows duplicate execution)
if not isinstance(task, Handler) and task._role:
role_obj = self._get_cached_role(task, iterator._play)
if role_obj.has_run(host) and role_obj._metadata.allow_duplicates is False:
display.debug("'%s' skipped because role has already run" % task)
continue
display.debug("getting variables")
task_vars = self._variable_manager.get_vars(play=iterator._play, host=host, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
self.add_tqm_variables(task_vars, play=iterator._play)
templar = Templar(loader=self._loader, variables=task_vars)
display.debug("done getting variables")
# test to see if the task across all hosts points to an action plugin which
# sets BYPASS_HOST_LOOP to true, or if it has run_once enabled. If so, we
# will only send this task to the first host in the list.
task_action = templar.template(task.action)
try:
action = action_loader.get(task_action, class_only=True, collection_list=task.collections)
except KeyError:
# we don't care here, because the action may simply not have a
# corresponding action plugin
action = None
if task_action in C._ACTION_META:
# for the linear strategy, we run meta tasks just once and for
# all hosts currently being iterated over rather than one host
results.extend(self._execute_meta(task, play_context, iterator, host))
if task.args.get('_raw_params', None) not in ('noop', 'reset_connection', 'end_host', 'role_complete', 'flush_handlers'):
run_once = True
if (task.any_errors_fatal or run_once) and not task.ignore_errors:
any_errors_fatal = True
else:
# handle step if needed, skip meta actions as they are used internally
if self._step and choose_step:
if self._take_step(task):
choose_step = False
else:
skip_rest = True
break
run_once = templar.template(task.run_once) or action and getattr(action, 'BYPASS_HOST_LOOP', False)
if (task.any_errors_fatal or run_once) and not task.ignore_errors:
any_errors_fatal = True
if not callback_sent:
display.debug("sending task start callback, copying the task so we can template it temporarily")
saved_name = task.name
display.debug("done copying, going to template now")
try:
task.name = to_text(templar.template(task.name, fail_on_undefined=False), nonstring='empty')
display.debug("done templating")
except Exception:
# just ignore any errors during task name templating,
# we don't care if it just shows the raw name
display.debug("templating failed for some reason")
display.debug("here goes the callback...")
if isinstance(task, Handler):
self._tqm.send_callback('v2_playbook_on_handler_task_start', task)
else:
self._tqm.send_callback('v2_playbook_on_task_start', task, is_conditional=False)
task.name = saved_name
callback_sent = True
display.debug("sending task start callback")
self._blocked_hosts[host.get_name()] = True
self._queue_task(host, task, task_vars, play_context)
del task_vars
# if we're bypassing the host loop, break out now
if run_once:
break
results.extend(self._process_pending_results(iterator, max_passes=max(1, int(len(self._tqm._workers) * 0.1))))
# go to next host/task group
if skip_rest:
continue
display.debug("done queuing things up, now waiting for results queue to drain")
if self._pending_results > 0:
results.extend(self._wait_on_pending_results(iterator))
self.update_active_connections(results)
included_files = IncludedFile.process_include_results(
results,
iterator=iterator,
loader=self._loader,
variable_manager=self._variable_manager
)
if len(included_files) > 0:
display.debug("we have included files to process")
display.debug("generating all_blocks data")
all_blocks = dict((host, []) for host in hosts_left)
display.debug("done generating all_blocks data")
included_tasks = []
failed_includes_hosts = set()
for included_file in included_files:
display.debug("processing included file: %s" % included_file._filename)
is_handler = False
try:
if included_file._is_role:
new_ir = self._copy_included_file(included_file)
new_blocks, handler_blocks = new_ir.get_block_list(
play=iterator._play,
variable_manager=self._variable_manager,
loader=self._loader,
)
else:
is_handler = isinstance(included_file._task, Handler)
new_blocks = self._load_included_file(included_file, iterator=iterator, is_handler=is_handler)
                            # let PlayIterator know about any new handlers included via include_role or
                            # import_role within include_role/include_tasks
iterator.handlers = [h for b in iterator._play.handlers for h in b.block]
display.debug("iterating over new_blocks loaded from include file")
for new_block in new_blocks:
if is_handler:
for task in new_block.block:
task.notified_hosts = included_file._hosts[:]
final_block = new_block
else:
task_vars = self._variable_manager.get_vars(
play=iterator._play,
task=new_block.get_first_parent_include(),
_hosts=self._hosts_cache,
_hosts_all=self._hosts_cache_all,
)
display.debug("filtering new block on tags")
final_block = new_block.filter_tagged_tasks(task_vars)
display.debug("done filtering new block on tags")
included_tasks.extend(final_block.get_tasks())
for host in hosts_left:
if host in included_file._hosts:
all_blocks[host].append(final_block)
display.debug("done iterating over new_blocks loaded from include file")
except AnsibleParserError:
raise
except AnsibleError as e:
if included_file._is_role:
# include_role does not have on_include callback so display the error
display.error(to_text(e), wrap_text=False)
for r in included_file._results:
r._result['failed'] = True
failed_includes_hosts.add(r._host)
continue
for host in failed_includes_hosts:
self._tqm._failed_hosts[host.name] = True
iterator.mark_host_failed(host)
# finally go through all of the hosts and append the
# accumulated blocks to their list of tasks
display.debug("extending task lists for all hosts with included blocks")
for host in hosts_left:
iterator.add_tasks(host, all_blocks[host])
iterator.all_tasks[iterator.cur_task:iterator.cur_task] = included_tasks
display.debug("done extending task lists")
display.debug("done processing included files")
display.debug("results queue empty")
display.debug("checking for any_errors_fatal")
failed_hosts = []
unreachable_hosts = []
for res in results:
# execute_meta() does not set 'failed' in the TaskResult
# so we skip checking it with the meta tasks and look just at the iterator
if (res.is_failed() or res._task.action in C._ACTION_META) and iterator.is_failed(res._host):
failed_hosts.append(res._host.name)
elif res.is_unreachable():
unreachable_hosts.append(res._host.name)
# if any_errors_fatal and we had an error, mark all hosts as failed
if any_errors_fatal and (len(failed_hosts) > 0 or len(unreachable_hosts) > 0):
dont_fail_states = frozenset([IteratingStates.RESCUE, IteratingStates.ALWAYS])
for host in hosts_left:
(s, dummy) = iterator.get_next_task_for_host(host, peek=True)
# the state may actually be in a child state, use the get_active_state()
# method in the iterator to figure out the true active state
s = iterator.get_active_state(s)
if s.run_state not in dont_fail_states or \
s.run_state == IteratingStates.RESCUE and s.fail_state & FailedStates.RESCUE != 0:
self._tqm._failed_hosts[host.name] = True
result |= self._tqm.RUN_FAILED_BREAK_PLAY
display.debug("done checking for any_errors_fatal")
display.debug("checking for max_fail_percentage")
if iterator._play.max_fail_percentage is not None and len(results) > 0:
percentage = iterator._play.max_fail_percentage / 100.0
if (len(self._tqm._failed_hosts) / iterator.batch_size) > percentage:
for host in hosts_left:
# don't double-mark hosts, or the iterator will potentially
# fail them out of the rescue/always states
if host.name not in failed_hosts:
self._tqm._failed_hosts[host.name] = True
iterator.mark_host_failed(host)
self._tqm.send_callback('v2_playbook_on_no_hosts_remaining')
result |= self._tqm.RUN_FAILED_BREAK_PLAY
display.debug('(%s failed / %s total )> %s max fail' % (len(self._tqm._failed_hosts), iterator.batch_size, percentage))
display.debug("done checking for max_fail_percentage")
display.debug("checking to see if all hosts have failed and the running result is not ok")
if result != self._tqm.RUN_OK and len(self._tqm._failed_hosts) >= len(hosts_left):
display.debug("^ not ok, so returning result now")
self._tqm.send_callback('v2_playbook_on_no_hosts_remaining')
return result
display.debug("done checking to see if all hosts have failed")
except (IOError, EOFError) as e:
display.debug("got IOError/EOFError in task loop: %s" % e)
# most likely an abort, return failed
return self._tqm.RUN_UNKNOWN_ERROR
# run the base class run() method, which executes the cleanup function
# and runs any outstanding handlers which have been triggered
return super(StrategyModule, self).run(iterator, play_context, result)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,666 |
Handler that should run once is played twice
|
### Summary
When a handler that should be run only once is notified, it is executed more than once.
### Issue Type
Bug Report
### Component Name
core
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.6]
config file = /MY_PROJECT/ansible/ansible.cfg
configured module search path = ['/MY_PROJECT/ansible/library']
ansible python module location = /home/naja/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/naja/.ansible/collections:/usr/share/ansible/collections
executable location = /home/naja/.local/bin/ansible
python version = 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /MY_PROJECT/ansible/ansible.cfg
DEFAULT_BECOME_METHOD(/MY_PROJECT/ansible/ansible.cfg) = sudo
DEFAULT_BECOME_USER(/MY_PROJECT/ansible/ansible.cfg) = root
DEFAULT_FILTER_PLUGIN_PATH(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ans>
DEFAULT_FORKS(/MY_PROJECT/ansible/ansible.cfg) = 10
DEFAULT_GATHERING(/MY_PROJECT/ansible/ansible.cfg) = explicit
DEFAULT_HASH_BEHAVIOUR(/MY_PROJECT/ansible/ansible.cfg) = merge
DEFAULT_HOST_LIST(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ansible/inve>
DEFAULT_JINJA2_NATIVE(/MY_PROJECT/ansible/ansible.cfg) = True
DEFAULT_LOAD_CALLBACK_PLUGINS(/MY_PROJECT/ansible/ansible.cfg) = True
DEFAULT_LOG_PATH(/MY_PROJECT/ansible/ansible.cfg) = /MY_PROJECT/ansible/logs/an>
DEFAULT_MODULE_PATH(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ansible/li>
DEFAULT_MODULE_UTILS_PATH(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ansi>
DEFAULT_ROLES_PATH(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ansible/rol>
DEFAULT_STDOUT_CALLBACK(/MY_PROJECT/ansible/ansible.cfg) = default.py
DEFAULT_VAULT_IDENTITY_LIST(/MY_PROJECT/ansible/ansible.cfg) = ['dts@vault/keyring-client.py', 'prod@vault/keyring-c>
HOST_KEY_CHECKING(/MY_PROJECT/ansible/ansible.cfg) = False
BECOME:
======
runas:
_____
become_user(/MY_PROJECT/ansible/ansible.cfg) = root
su:
__
become_user(/MY_PROJECT/ansible/ansible.cfg) = root
sudo:
____
become_user(/MY_PROJECT/ansible/ansible.cfg) = root
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/MY_PROJECT/ansible/ansible.cfg) = False
ssh:
___
host_key_checking(/MY_PROJECT/ansible/ansible.cfg) = False
scp_if_ssh(/MY_PROJECT/ansible/ansible.cfg) = True
timeout(/MY_PROJECT/ansible/ansible.cfg) = 60
```
### OS / Environment
Ubuntu 22.04
### Steps to Reproduce
Playbook 1 (pause will be prompted once with flush_handlers, then 2 more times at the end of the play)
```yaml
- hosts: all
handlers:
- name: Ping again
pause:
prompt: Please ping
tasks:
- name: Update something
notify: Ping again
changed_when: yes
ping:
- name: Flush handlers
meta: flush_handlers
```
Playbook 2 (ping handler will be executed twice)
```yaml
- hosts: all
handlers:
- name: Ping again
run_once: yes
ping:
tasks:
- name: Update something
notify: Ping again
changed_when: yes
ping:
```
Playbook 3 (ping handler will be executed once with flush_handlers, then 2 more times at the end of the play)
```yaml
- hosts: all
handlers:
- name: Ping again
run_once: yes
ping:
tasks:
- name: Update something
notify: Ping again
changed_when: yes
ping:
- name: Flush handlers
meta: flush_handlers
```
### Expected Results
I expect the handler to be run once, no matter the number of hosts
```
PLAY [all] ***********************************************************************************************************************************************
TASK [Update something] **********************************************************************************************************************************
changed: [host4]
changed: [host2]
changed: [host1]
changed: [host3]
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
RUNNING HANDLER [Ping again] *****************************************************************************************************************************
[Ping again]
Please ping:
^Mok: [hostX]
PLAY RECAP ***********************************************************************************************************************************************
host1 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host2 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host3 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host4 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
```console
PLAY [all] ***********************************************************************************************************************************************
TASK [Update something] **********************************************************************************************************************************
changed: [host4]
changed: [host1]
changed: [host2]
changed: [host3]
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
RUNNING HANDLER [Ping again] *****************************************************************************************************************************
[Ping again]
Please ping:
^Mok: [host2]
RUNNING HANDLER [Ping again] *****************************************************************************************************************************
[Ping again]
Please ping:
^Mok: [host4]
RUNNING HANDLER [Ping again] *****************************************************************************************************************************
[Ping again]
Please ping:
^Mok: [host3]
PLAY RECAP ***********************************************************************************************************************************************
host1 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host2 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host3 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host4 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81666
|
https://github.com/ansible/ansible/pull/81667
|
000cf1dd468a1b8db2f7db723377bd8efa909b95
|
2d5861c185fb24441e3d3919749866a6fc5c12d7
| 2023-09-08T07:56:43Z |
python
| 2023-10-03T18:43:46Z |
test/integration/targets/handlers/runme.sh
|
#!/usr/bin/env bash
set -eux
export ANSIBLE_FORCE_HANDLERS
ANSIBLE_FORCE_HANDLERS=false
# simple handler test
ansible-playbook test_handlers.yml -i inventory.handlers -v "$@" --tags scenario1
# simple from_handlers test
ansible-playbook from_handlers.yml -i inventory.handlers -v "$@" --tags scenario1
ansible-playbook test_listening_handlers.yml -i inventory.handlers -v "$@"
[ "$(ansible-playbook test_handlers.yml -i inventory.handlers -v "$@" --tags scenario2 -l A \
| grep -E -o 'RUNNING HANDLER \[test_handlers : .*]')" = "RUNNING HANDLER [test_handlers : test handler]" ]
# Test forcing handlers using the linear and free strategy
for strategy in linear free; do
export ANSIBLE_STRATEGY=$strategy
# Not forcing, should only run on successful host
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_B" ]
# Forcing from command line
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing from command line, should only run later tasks on unfailed hosts
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers \
| grep -E -o CALLED_TASK_. | sort | uniq | xargs)" = "CALLED_TASK_B CALLED_TASK_D CALLED_TASK_E" ]
# Forcing from command line, should call handlers even if all hosts fail
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers -e fail_all=yes \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing from ansible.cfg
[ "$(ANSIBLE_FORCE_HANDLERS=true ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing true in play
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags force_true_in_play \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing false in play, which overrides command line
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags force_false_in_play --force-handlers \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_B" ]
# https://github.com/ansible/ansible/pull/80898
[ "$(ansible-playbook 80880.yml -i inventory.handlers -vv "$@" 2>&1)" ]
unset ANSIBLE_STRATEGY
done
[ "$(ansible-playbook test_handlers_include.yml -i ../../inventory -v "$@" --tags playbook_include_handlers \
| grep -E -o 'RUNNING HANDLER \[.*]')" = "RUNNING HANDLER [test handler]" ]
[ "$(ansible-playbook test_handlers_include.yml -i ../../inventory -v "$@" --tags role_include_handlers \
| grep -E -o 'RUNNING HANDLER \[test_handlers_include : .*]')" = "RUNNING HANDLER [test_handlers_include : test handler]" ]
[ "$(ansible-playbook test_handlers_include_role.yml -i ../../inventory -v "$@" \
| grep -E -o 'RUNNING HANDLER \[test_handlers_include_role : .*]')" = "RUNNING HANDLER [test_handlers_include_role : test handler]" ]
# Notify handler listen
ansible-playbook test_handlers_listen.yml -i inventory.handlers -v "$@"
# Notifying nonexistent handlers results in an error
set +e
result="$(ansible-playbook test_handlers_inexistent_notify.yml -i inventory.handlers "$@" 2>&1)"
set -e
grep -q "ERROR! The requested handler 'notify_inexistent_handler' was not found in either the main handlers list nor in the listening handlers list" <<< "$result"
# Notifying nonexistent handlers produces no error when ANSIBLE_ERROR_ON_MISSING_HANDLER=false
ANSIBLE_ERROR_ON_MISSING_HANDLER=false ansible-playbook test_handlers_inexistent_notify.yml -i inventory.handlers -v "$@"
ANSIBLE_ERROR_ON_MISSING_HANDLER=false ansible-playbook test_templating_in_handlers.yml -v "$@"
# https://github.com/ansible/ansible/issues/36649
output_dir=/tmp
set +e
result="$(ansible-playbook test_handlers_any_errors_fatal.yml -e output_dir=$output_dir -i inventory.handlers -v "$@" 2>&1)"
set -e
[ ! -f $output_dir/should_not_exist_B ] || (rm -f $output_dir/should_not_exist_B && exit 1)
# https://github.com/ansible/ansible/issues/47287
[ "$(ansible-playbook test_handlers_including_task.yml -i ../../inventory -v "$@" | grep -E -o 'failed=[0-9]+')" = "failed=0" ]
# https://github.com/ansible/ansible/issues/71222
ansible-playbook test_role_handlers_including_tasks.yml -i ../../inventory -v "$@"
# https://github.com/ansible/ansible/issues/27237
set +e
result="$(ansible-playbook test_handlers_template_run_once.yml -i inventory.handlers "$@" 2>&1)"
set -e
grep -q "handler A" <<< "$result"
grep -q "handler B" <<< "$result"
# Test that an undefined variable in another handler name isn't a failure
ansible-playbook 58841.yml "$@" --tags lazy_evaluation 2>&1 | tee out.txt ; cat out.txt
grep out.txt -e "\[WARNING\]: Handler 'handler name with {{ test_var }}' is unusable"
[ "$(grep out.txt -ce 'handler ran')" = "1" ]
[ "$(grep out.txt -ce 'handler with var ran')" = "0" ]
# Test templating a handler name with a defined variable
ansible-playbook 58841.yml "$@" --tags evaluation_time -e test_var=myvar | tee out.txt ; cat out.txt
[ "$(grep out.txt -ce 'handler ran')" = "0" ]
[ "$(grep out.txt -ce 'handler with var ran')" = "1" ]
# Test the handler is not found when the variable is undefined
ansible-playbook 58841.yml "$@" --tags evaluation_time 2>&1 | tee out.txt ; cat out.txt
grep out.txt -e "ERROR! The requested handler 'handler name with myvar' was not found"
grep out.txt -e "\[WARNING\]: Handler 'handler name with {{ test_var }}' is unusable"
[ "$(grep out.txt -ce 'handler ran')" = "0" ]
[ "$(grep out.txt -ce 'handler with var ran')" = "0" ]
# Test include_role and import_role cannot be used as handlers
ansible-playbook test_role_as_handler.yml "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! Using 'include_role' as a handler is not supported."
# Test that notifying a handler from within include_tasks does not work anymore
ansible-playbook test_notify_included.yml "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'I was included')" = "1" ]
grep out.txt -e "ERROR! The requested handler 'handler_from_include' was not found in either the main handlers list nor in the listening handlers list"
ansible-playbook test_handlers_meta.yml -i inventory.handlers -vv "$@" | tee out.txt
[ "$(grep out.txt -ce 'RUNNING HANDLER \[noop_handler\]')" = "1" ]
[ "$(grep out.txt -ce 'META: noop')" = "1" ]
# https://github.com/ansible/ansible/issues/46447
set +e
test "$(ansible-playbook 46447.yml -i inventory.handlers -vv "$@" 2>&1 | grep -c 'SHOULD NOT GET HERE')"
set -e
# https://github.com/ansible/ansible/issues/52561
ansible-playbook 52561.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler1 ran')" = "1" ]
# Test flush_handlers meta task does not imply any_errors_fatal
ansible-playbook 54991.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran')" = "4" ]
ansible-playbook order.yml -i inventory.handlers "$@" 2>&1
set +e
ansible-playbook order.yml --force-handlers -e test_force_handlers=true -i inventory.handlers "$@" 2>&1
set -e
ansible-playbook include_handlers_fail_force.yml --force-handlers -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'included handler ran')" = "1" ]
ansible-playbook test_flush_handlers_as_handler.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! flush_handlers cannot be used as a handler"
ansible-playbook test_skip_flush.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran')" = "0" ]
ansible-playbook test_flush_in_rescue_always.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran in rescue')" = "1" ]
[ "$(grep out.txt -ce 'handler ran in always')" = "2" ]
[ "$(grep out.txt -ce 'lockstep works')" = "2" ]
ansible-playbook test_handlers_infinite_loop.yml -i inventory.handlers "$@" 2>&1
ansible-playbook test_flush_handlers_rescue_always.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'rescue ran')" = "1" ]
[ "$(grep out.txt -ce 'always ran')" = "2" ]
[ "$(grep out.txt -ce 'should run for both hosts')" = "2" ]
ansible-playbook test_fqcn_meta_flush_handlers.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
grep out.txt -e "handler ran"
grep out.txt -e "after flush"
ansible-playbook 79776.yml -i inventory.handlers "$@"
ansible-playbook test_block_as_handler.yml "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! Using a block as a handler is not supported."
ansible-playbook test_block_as_handler-include.yml "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! Using a block as a handler is not supported."
ansible-playbook test_block_as_handler-import.yml "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! Using a block as a handler is not supported."
ansible-playbook test_include_role_handler_once.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran')" = "1" ]
ansible-playbook test_listen_role_dedup.yml "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'a handler from a role')" = "1" ]
ansible localhost -m include_role -a "name=r1-dep_chain-vars" "$@"
ansible-playbook test_include_tasks_in_include_role.yml "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran')" = "1" ]
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,666 |
Handler that should run once is played twice
|
### Summary
When a handler that should be run only once is notified, it is executed more than once.
### Issue Type
Bug Report
### Component Name
core
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.6]
config file = /MY_PROJECT/ansible/ansible.cfg
configured module search path = ['/MY_PROJECT/ansible/library']
ansible python module location = /home/naja/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/naja/.ansible/collections:/usr/share/ansible/collections
executable location = /home/naja/.local/bin/ansible
python version = 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /MY_PROJECT/ansible/ansible.cfg
DEFAULT_BECOME_METHOD(/MY_PROJECT/ansible/ansible.cfg) = sudo
DEFAULT_BECOME_USER(/MY_PROJECT/ansible/ansible.cfg) = root
DEFAULT_FILTER_PLUGIN_PATH(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ans>
DEFAULT_FORKS(/MY_PROJECT/ansible/ansible.cfg) = 10
DEFAULT_GATHERING(/MY_PROJECT/ansible/ansible.cfg) = explicit
DEFAULT_HASH_BEHAVIOUR(/MY_PROJECT/ansible/ansible.cfg) = merge
DEFAULT_HOST_LIST(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ansible/inve>
DEFAULT_JINJA2_NATIVE(/MY_PROJECT/ansible/ansible.cfg) = True
DEFAULT_LOAD_CALLBACK_PLUGINS(/MY_PROJECT/ansible/ansible.cfg) = True
DEFAULT_LOG_PATH(/MY_PROJECT/ansible/ansible.cfg) = /MY_PROJECT/ansible/logs/an>
DEFAULT_MODULE_PATH(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ansible/li>
DEFAULT_MODULE_UTILS_PATH(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ansi>
DEFAULT_ROLES_PATH(/MY_PROJECT/ansible/ansible.cfg) = ['/MY_PROJECT/ansible/rol>
DEFAULT_STDOUT_CALLBACK(/MY_PROJECT/ansible/ansible.cfg) = default.py
DEFAULT_VAULT_IDENTITY_LIST(/MY_PROJECT/ansible/ansible.cfg) = ['dts@vault/keyring-client.py', 'prod@vault/keyring-c>
HOST_KEY_CHECKING(/MY_PROJECT/ansible/ansible.cfg) = False
BECOME:
======
runas:
_____
become_user(/MY_PROJECT/ansible/ansible.cfg) = root
su:
__
become_user(/MY_PROJECT/ansible/ansible.cfg) = root
sudo:
____
become_user(/MY_PROJECT/ansible/ansible.cfg) = root
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/MY_PROJECT/ansible/ansible.cfg) = False
ssh:
___
host_key_checking(/MY_PROJECT/ansible/ansible.cfg) = False
scp_if_ssh(/MY_PROJECT/ansible/ansible.cfg) = True
timeout(/MY_PROJECT/ansible/ansible.cfg) = 60
```
### OS / Environment
Ubuntu 22.04
### Steps to Reproduce
Playbook 1 (pause will be prompted once with flush_handlers, then 2 more times at the end of the play)
```yaml
- hosts: all
handlers:
- name: Ping again
pause:
prompt: Please ping
tasks:
- name: Update something
notify: Ping again
changed_when: yes
ping:
- name: Flush handlers
meta: flush_handlers
```
Playbook 2 (ping handler will be executed twice)
```yaml
- hosts: all
handlers:
- name: Ping again
run_once: yes
ping:
tasks:
- name: Update something
notify: Ping again
changed_when: yes
ping:
```
Playbook 3 (ping handler will be executed once with flush_handlers, then 2 more times at the end of the play)
```yaml
- hosts: all
handlers:
- name: Ping again
run_once: yes
ping:
tasks:
- name: Update something
notify: Ping again
changed_when: yes
ping:
- name: Flush handlers
meta: flush_handlers
```
### Expected Results
I expect the handler to be run once, no matter the number of hosts
```
PLAY [all] ***********************************************************************************************************************************************
TASK [Update something] **********************************************************************************************************************************
changed: [host4]
changed: [host2]
changed: [host1]
changed: [host3]
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
RUNNING HANDLER [Ping again] *****************************************************************************************************************************
[Ping again]
Please ping:
^Mok: [hostX]
PLAY RECAP ***********************************************************************************************************************************************
host1 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host2 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host3 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host4 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
```console
PLAY [all] ***********************************************************************************************************************************************
TASK [Update something] **********************************************************************************************************************************
changed: [host4]
changed: [host1]
changed: [host2]
changed: [host3]
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
TASK [Flush handlers] ************************************************************************************************************************************
RUNNING HANDLER [Ping again] *****************************************************************************************************************************
[Ping again]
Please ping:
^Mok: [host2]
RUNNING HANDLER [Ping again] *****************************************************************************************************************************
[Ping again]
Please ping:
^Mok: [host4]
RUNNING HANDLER [Ping again] *****************************************************************************************************************************
[Ping again]
Please ping:
^Mok: [host3]
PLAY RECAP ***********************************************************************************************************************************************
host1 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host2 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host3 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host4 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81666
|
https://github.com/ansible/ansible/pull/81667
|
000cf1dd468a1b8db2f7db723377bd8efa909b95
|
2d5861c185fb24441e3d3919749866a6fc5c12d7
| 2023-09-08T07:56:43Z |
python
| 2023-10-03T18:43:46Z |
test/integration/targets/handlers/test_run_once.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,882 |
Amazon Linux 2 results in Python interpreter discovery warning
|
### Summary
Amazon Linux 2 produces a Python interpreter discovery warning because the RedHat entry in [OS_FAMILY_MAP](https://github.com/ansible/ansible/blob/devel/lib/ansible/module_utils/facts/system/distribution.py) lists 'Amazon' rather than 'amzn', the ID actually reported by Amazon Linux 2, which then trips up on:
https://github.com/ansible/ansible/blob/9f4dfff69bfc9f33a487e1c7fee2fbce64c62c9c/lib/ansible/executor/interpreter_discovery.py#LL115C1-L117C92
```
version_map = platform_python_map.get(distro.lower().strip()) or platform_python_map.get(family)
if not version_map:
raise NotImplementedError('unsupported Linux distribution: {0}'.format(distro))
```
If I add 'amzn' to the RedHat list in OS_FAMILY_MAP, /usr/bin/python is used as the interpreter and no warning is issued. With distribution.py as is, a warning is issued and /usr/bin/python3.7, which was additionally installed on the test system, is used instead.
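A sketch of the proposed change (hypothetical excerpt; the real map contains many more entries, elided here, and only the added 'amzn' value is the point):
```python
# lib/ansible/module_utils/facts/system/distribution.py (abbreviated sketch)
OS_FAMILY_MAP = {
    'RedHat': ['RedHat', 'RHEL', 'Fedora', 'CentOS',
               'Amazon', 'amzn',  # 'amzn' is the ID reported by Amazon Linux 2
               # ... remaining RedHat-family names elided ...
               ],
    # ... other families elided ...
}
```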
### Issue Type
Bug Report
### Component Name
Python interpreter discovery
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.0]
config file = <redacted>/ansible.cfg
configured module search path = ['<redacted>/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.11/site-packages/ansible
ansible collection location = <redacted>/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.3 (main, Apr 7 2023, 00:46:44) [GCC 12.2.1 20230201] (/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = <redacted>/ansible.cfg
DEFAULT_FORKS(<redacted>/ansible.cfg) = 30
DEFAULT_HOST_LIST(<redacted>/ansible.cfg) = ['<redacted>/inventory']
DEFAULT_ROLES_PATH(env: ANSIBLE_ROLES_PATH) = ['<redacted>']
DEFAULT_TIMEOUT(<redacted>/ansible.cfg) = 60
EDITOR(env: EDITOR) = /usr/bin/vi
HOST_KEY_CHECKING(<redacted>/ansible.cfg) = False
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(<redacted>/ansible.cfg) = False
timeout(<redacted>/ansible.cfg) = 60
ssh:
___
host_key_checking(<redacted>/ansible.cfg) = False
timeout(<redacted>/ansible.cfg) = 60
```
### OS / Environment
Amazon Linux 2
### Steps to Reproduce
Gather facts from Amazon Linux 2:
```
<redacted> (0, b'{"platform_dist_result": ["", "", ""], "osrelease_content": "NAME=\\"Amazon Linux\\"\\nVERSION=\\"2\\"\\nID=\\"amzn\\"\\nID_LIKE=\\"centos rhel fedora\\"\\nVERSION_ID=\\"2\\"\\nPRETTY_NAME=\\"Amazon Linux 2\\"\\nANSI_COLOR=\\"0;33\\"\\nCPE_NAME=\\"cpe:2.3:o:amazon:amazon_linux:2\\"\\nHOME_URL=\\"https://amazonlinux.com/\\"\\n"}\n', b'<stdin>:29: DeprecationWarning: dist() and linux_distribution() functions are deprecated in Python 3.5\n')
```
### Expected Results
No python interpreter warning.
### Actual Results
```console
[WARNING]: Platform linux on host <redacted> is using the discovered Python interpreter at /usr/bin/python3.7, but future installation of
another Python interpreter could change the meaning of that path. See https://docs.ansible.com/ansible-
core/2.13/reference_appendices/interpreter_discovery.html for more information.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80882
|
https://github.com/ansible/ansible/pull/81755
|
2d5861c185fb24441e3d3919749866a6fc5c12d7
|
304e63d76e725e8e277fe208d26fb45ca2ff903d
| 2023-05-24T21:29:59Z |
python
| 2023-10-03T18:54:14Z |
changelogs/fragments/80882-Amazon-os-family-compat.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,882 |
Amazon Linux 2 results in Python interpreter discovery warning
|
### Summary
Amazon Linux 2 triggers a Python interpreter discovery warning because the RedHat entry in [OS_FAMILY_MAP](https://github.com/ansible/ansible/blob/devel/lib/ansible/module_utils/facts/system/distribution.py) lists 'Amazon' as the value, rather than 'amzn', which is what Amazon Linux 2 actually reports; interpreter discovery then trips up on:
https://github.com/ansible/ansible/blob/9f4dfff69bfc9f33a487e1c7fee2fbce64c62c9c/lib/ansible/executor/interpreter_discovery.py#LL115C1-L117C92
```
version_map = platform_python_map.get(distro.lower().strip()) or platform_python_map.get(family)
if not version_map:
raise NotImplementedError('unsupported Linux distribution: {0}'.format(distro))
```
If I add 'amzn' to the RedHat list in OS_FAMILY_MAP, /usr/bin/python is used as the interpreter and no warning is issued. With distribution.py as is, a warning is issued and python3.7, which was additionally installed on the test system, is used instead.
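A sketch of the workaround described above, adding 'amzn' as a RedHat-family alias (list abbreviated; this illustrates the reported change, not necessarily the merged fix):
```python
# Abbreviated OS_FAMILY_MAP with the reporter's 'amzn' alias added (hypothetical sketch).
OS_FAMILY_MAP = {
    'RedHat': ['RedHat', 'RHEL', 'Fedora', 'CentOS', 'Amazon', 'amzn'],
}

# The same inversion distribution.py uses to build the name -> family lookup.
OS_FAMILY = {name: family for family, names in OS_FAMILY_MAP.items() for name in names}
print(OS_FAMILY.get('amzn'))  # -> RedHat, so the family fallback in the lookup now succeeds
```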
### Issue Type
Bug Report
### Component Name
Python interpreter discovery
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.0]
config file = <redacted>/ansible.cfg
configured module search path = ['<redacted>/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.11/site-packages/ansible
ansible collection location = <redacted>/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.3 (main, Apr 7 2023, 00:46:44) [GCC 12.2.1 20230201] (/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = <redacted>/ansible.cfg
DEFAULT_FORKS(<redacted>/ansible.cfg) = 30
DEFAULT_HOST_LIST(<redacted>/ansible.cfg) = ['<redacted>/inventory']
DEFAULT_ROLES_PATH(env: ANSIBLE_ROLES_PATH) = ['<redacted>']
DEFAULT_TIMEOUT(<redacted>/ansible.cfg) = 60
EDITOR(env: EDITOR) = /usr/bin/vi
HOST_KEY_CHECKING(<redacted>/ansible.cfg) = False
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(<redacted>/ansible.cfg) = False
timeout(<redacted>/ansible.cfg) = 60
ssh:
___
host_key_checking(<redacted>/ansible.cfg) = False
timeout(<redacted>/ansible.cfg) = 60
```
### OS / Environment
Amazon Linux 2
### Steps to Reproduce
Gather facts from Amazon Linux 2:
```
<redacted> (0, b'{"platform_dist_result": ["", "", ""], "osrelease_content": "NAME=\\"Amazon Linux\\"\\nVERSION=\\"2\\"\\nID=\\"amzn\\"\\nID_LIKE=\\"centos rhel fedora\\"\\nVERSION_ID=\\"2\\"\\nPRETTY_NAME=\\"Amazon Linux 2\\"\\nANSI_COLOR=\\"0;33\\"\\nCPE_NAME=\\"cpe:2.3:o:amazon:amazon_linux:2\\"\\nHOME_URL=\\"https://amazonlinux.com/\\"\\n"}\n', b'<stdin>:29: DeprecationWarning: dist() and linux_distribution() functions are deprecated in Python 3.5\n')
```
### Expected Results
No python interpreter warning.
### Actual Results
```console
[WARNING]: Platform linux on host <redacted> is using the discovered Python interpreter at /usr/bin/python3.7, but future installation of
another Python interpreter could change the meaning of that path. See https://docs.ansible.com/ansible-
core/2.13/reference_appendices/interpreter_discovery.html for more information.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80882
|
https://github.com/ansible/ansible/pull/81755
|
2d5861c185fb24441e3d3919749866a6fc5c12d7
|
304e63d76e725e8e277fe208d26fb45ca2ff903d
| 2023-05-24T21:29:59Z |
python
| 2023-10-03T18:54:14Z |
lib/ansible/module_utils/facts/system/distribution.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import platform
import re
import ansible.module_utils.compat.typing as t
from ansible.module_utils.common.sys_info import get_distribution, get_distribution_version, \
get_distribution_codename
from ansible.module_utils.facts.utils import get_file_content, get_file_lines
from ansible.module_utils.facts.collector import BaseFactCollector
def get_uname(module, flags=('-v')):
if isinstance(flags, str):
flags = flags.split()
command = ['uname']
command.extend(flags)
rc, out, err = module.run_command(command)
if rc == 0:
return out
return None
def _file_exists(path, allow_empty=False):
# not finding the file, exit early
if not os.path.exists(path):
return False
# if just the path needs to exists (ie, it can be empty) we are done
if allow_empty:
return True
# file exists but is empty and we don't allow_empty
if os.path.getsize(path) == 0:
return False
# file exists with some content
return True
class DistributionFiles:
'''has-a various distro file parsers (os-release, etc) and logic for finding the right one.'''
# every distribution name mentioned here must have one of
# - allowempty == True
# - be listed in SEARCH_STRING
# - have a function get_distribution_DISTNAME implemented
# keep names in sync with Conditionals page of docs
OSDIST_LIST = (
{'path': '/etc/altlinux-release', 'name': 'Altlinux'},
{'path': '/etc/oracle-release', 'name': 'OracleLinux'},
{'path': '/etc/slackware-version', 'name': 'Slackware'},
{'path': '/etc/centos-release', 'name': 'CentOS'},
{'path': '/etc/redhat-release', 'name': 'RedHat'},
{'path': '/etc/vmware-release', 'name': 'VMwareESX', 'allowempty': True},
{'path': '/etc/openwrt_release', 'name': 'OpenWrt'},
{'path': '/etc/os-release', 'name': 'Amazon'},
{'path': '/etc/system-release', 'name': 'Amazon'},
{'path': '/etc/alpine-release', 'name': 'Alpine'},
{'path': '/etc/arch-release', 'name': 'Archlinux', 'allowempty': True},
{'path': '/etc/os-release', 'name': 'Archlinux'},
{'path': '/etc/os-release', 'name': 'SUSE'},
{'path': '/etc/SuSE-release', 'name': 'SUSE'},
{'path': '/etc/gentoo-release', 'name': 'Gentoo'},
{'path': '/etc/os-release', 'name': 'Debian'},
{'path': '/etc/lsb-release', 'name': 'Debian'},
{'path': '/etc/lsb-release', 'name': 'Mandriva'},
{'path': '/etc/sourcemage-release', 'name': 'SMGL'},
{'path': '/usr/lib/os-release', 'name': 'ClearLinux'},
{'path': '/etc/coreos/update.conf', 'name': 'Coreos'},
{'path': '/etc/os-release', 'name': 'Flatcar'},
{'path': '/etc/os-release', 'name': 'NA'},
)
SEARCH_STRING = {
'OracleLinux': 'Oracle Linux',
'RedHat': 'Red Hat',
'Altlinux': 'ALT',
'SMGL': 'Source Mage GNU/Linux',
}
# We can't include this in SEARCH_STRING because a name match on its keys
# causes a fallback to using the first whitespace separated item from the file content
# as the name. For os-release, that is in form 'NAME=Arch'
OS_RELEASE_ALIAS = {
'Archlinux': 'Arch Linux'
}
STRIP_QUOTES = r'\'\"\\'
def __init__(self, module):
self.module = module
def _get_file_content(self, path):
return get_file_content(path)
def _get_dist_file_content(self, path, allow_empty=False):
# can't find that dist file, or it is incorrectly empty
if not _file_exists(path, allow_empty=allow_empty):
return False, None
data = self._get_file_content(path)
return True, data
def _parse_dist_file(self, name, dist_file_content, path, collected_facts):
dist_file_dict = {}
dist_file_content = dist_file_content.strip(DistributionFiles.STRIP_QUOTES)
if name in self.SEARCH_STRING:
# look for the distribution string in the data and replace according to RELEASE_NAME_MAP
# only the distribution name is set, the version is assumed to be correct from distro.linux_distribution()
if self.SEARCH_STRING[name] in dist_file_content:
# this sets distribution=RedHat if 'Red Hat' shows up in data
dist_file_dict['distribution'] = name
dist_file_dict['distribution_file_search_string'] = self.SEARCH_STRING[name]
else:
# this sets distribution to what's in the data, e.g. CentOS, Scientific, ...
dist_file_dict['distribution'] = dist_file_content.split()[0]
return True, dist_file_dict
if name in self.OS_RELEASE_ALIAS:
if self.OS_RELEASE_ALIAS[name] in dist_file_content:
dist_file_dict['distribution'] = name
return True, dist_file_dict
return False, dist_file_dict
# call a dedicated function for parsing the file content
# TODO: replace with a map or a class
try:
# FIXME: most of these don't actually look at the dist file contents, but at random other stuff
distfunc_name = 'parse_distribution_file_' + name
distfunc = getattr(self, distfunc_name)
parsed, dist_file_dict = distfunc(name, dist_file_content, path, collected_facts)
return parsed, dist_file_dict
except AttributeError as exc:
self.module.debug('exc: %s' % exc)
# this should never happen, but if it does fail quietly and not with a traceback
return False, dist_file_dict
return True, dist_file_dict
# to debug multiple matching release files, one can use:
# self.facts['distribution_debug'].append({path + ' ' + name:
# (parsed,
# self.facts['distribution'],
# self.facts['distribution_version'],
# self.facts['distribution_release'],
# )})
def _guess_distribution(self):
# try to find out which linux distribution this is
dist = (get_distribution(), get_distribution_version(), get_distribution_codename())
distribution_guess = {
'distribution': dist[0] or 'NA',
'distribution_version': dist[1] or 'NA',
# distribution_release can be the empty string
'distribution_release': 'NA' if dist[2] is None else dist[2]
}
distribution_guess['distribution_major_version'] = distribution_guess['distribution_version'].split('.')[0] or 'NA'
return distribution_guess
def process_dist_files(self):
# Try to handle the exceptions now ...
# self.facts['distribution_debug'] = []
dist_file_facts = {}
dist_guess = self._guess_distribution()
dist_file_facts.update(dist_guess)
for ddict in self.OSDIST_LIST:
name = ddict['name']
path = ddict['path']
allow_empty = ddict.get('allowempty', False)
has_dist_file, dist_file_content = self._get_dist_file_content(path, allow_empty=allow_empty)
# if empty files are allowed, the file's presence alone decides the match. For example,
# ArchLinux has an empty /etc/arch-release and a /etc/os-release with a different name
if has_dist_file and allow_empty:
dist_file_facts['distribution'] = name
dist_file_facts['distribution_file_path'] = path
dist_file_facts['distribution_file_variety'] = name
break
if not has_dist_file:
# keep looking
continue
parsed_dist_file, parsed_dist_file_facts = self._parse_dist_file(name, dist_file_content, path, dist_file_facts)
# finally found the right os dist file and were able to parse it
if parsed_dist_file:
dist_file_facts['distribution'] = name
dist_file_facts['distribution_file_path'] = path
# distribution and file_variety are the same here, but distribution
# will be changed/mapped to a more specific name.
# ie, dist=Fedora, file_variety=RedHat
dist_file_facts['distribution_file_variety'] = name
dist_file_facts['distribution_file_parsed'] = parsed_dist_file
dist_file_facts.update(parsed_dist_file_facts)
break
return dist_file_facts
# TODO: FIXME: split distro file parsing into its own module or class
def parse_distribution_file_Slackware(self, name, data, path, collected_facts):
slackware_facts = {}
if 'Slackware' not in data:
return False, slackware_facts # TODO: remove
slackware_facts['distribution'] = name
version = re.findall(r'\w+[.]\w+\+?', data)
if version:
slackware_facts['distribution_version'] = version[0]
return True, slackware_facts
def parse_distribution_file_Amazon(self, name, data, path, collected_facts):
amazon_facts = {}
if 'Amazon' not in data:
return False, amazon_facts
amazon_facts['distribution'] = 'Amazon'
if path == '/etc/os-release':
version = re.search(r"VERSION_ID=\"(.*)\"", data)
if version:
distribution_version = version.group(1)
amazon_facts['distribution_version'] = distribution_version
version_data = distribution_version.split(".")
if len(version_data) > 1:
major, minor = version_data
else:
major, minor = version_data[0], 'NA'
amazon_facts['distribution_major_version'] = major
amazon_facts['distribution_minor_version'] = minor
else:
version = [n for n in data.split() if n.isdigit()]
version = version[0] if version else 'NA'
amazon_facts['distribution_version'] = version
return True, amazon_facts
def parse_distribution_file_OpenWrt(self, name, data, path, collected_facts):
openwrt_facts = {}
if 'OpenWrt' not in data:
return False, openwrt_facts # TODO: remove
openwrt_facts['distribution'] = name
version = re.search('DISTRIB_RELEASE="(.*)"', data)
if version:
openwrt_facts['distribution_version'] = version.groups()[0]
release = re.search('DISTRIB_CODENAME="(.*)"', data)
if release:
openwrt_facts['distribution_release'] = release.groups()[0]
return True, openwrt_facts
def parse_distribution_file_Alpine(self, name, data, path, collected_facts):
alpine_facts = {}
alpine_facts['distribution'] = 'Alpine'
alpine_facts['distribution_version'] = data
return True, alpine_facts
def parse_distribution_file_SUSE(self, name, data, path, collected_facts):
suse_facts = {}
if 'suse' not in data.lower():
return False, suse_facts # TODO: remove if tested without this
if path == '/etc/os-release':
for line in data.splitlines():
distribution = re.search("^NAME=(.*)", line)
if distribution:
suse_facts['distribution'] = distribution.group(1).strip('"')
# example version patterns: 13.04, 13.0, 13
distribution_version = re.search(r'^VERSION_ID="?([0-9]+\.?[0-9]*)"?', line)
if distribution_version:
suse_facts['distribution_version'] = distribution_version.group(1)
suse_facts['distribution_major_version'] = distribution_version.group(1).split('.')[0]
if 'open' in data.lower():
release = re.search(r'^VERSION_ID="?[0-9]+\.?([0-9]*)"?', line)
if release:
suse_facts['distribution_release'] = release.groups()[0]
elif 'enterprise' in data.lower() and 'VERSION_ID' in line:
# SLES doesn't have funny release names
release = re.search(r'^VERSION_ID="?[0-9]+\.?([0-9]*)"?', line)
if release.group(1):
release = release.group(1)
else:
release = "0" # no minor number, so it is the first release
suse_facts['distribution_release'] = release
elif path == '/etc/SuSE-release':
if 'open' in data.lower():
data = data.splitlines()
distdata = get_file_content(path).splitlines()[0]
suse_facts['distribution'] = distdata.split()[0]
for line in data:
release = re.search('CODENAME *= *([^\n]+)', line)
if release:
suse_facts['distribution_release'] = release.groups()[0].strip()
elif 'enterprise' in data.lower():
lines = data.splitlines()
distribution = lines[0].split()[0]
if "Server" in data:
suse_facts['distribution'] = "SLES"
elif "Desktop" in data:
suse_facts['distribution'] = "SLED"
for line in lines:
release = re.search('PATCHLEVEL = ([0-9]+)', line) # SLES doesn't have funny release names
if release:
suse_facts['distribution_release'] = release.group(1)
suse_facts['distribution_version'] = collected_facts['distribution_version'] + '.' + release.group(1)
# See https://www.suse.com/support/kb/doc/?id=000019341 for SLES for SAP
if os.path.islink('/etc/products.d/baseproduct') and os.path.realpath('/etc/products.d/baseproduct').endswith('SLES_SAP.prod'):
suse_facts['distribution'] = 'SLES_SAP'
return True, suse_facts
def parse_distribution_file_Debian(self, name, data, path, collected_facts):
debian_facts = {}
if 'Debian' in data or 'Raspbian' in data:
debian_facts['distribution'] = 'Debian'
release = re.search(r"PRETTY_NAME=[^(]+ \(?([^)]+?)\)", data)
if release:
debian_facts['distribution_release'] = release.groups()[0]
# Last resort: try to find release from tzdata as either lsb is missing or this is very old debian
if collected_facts['distribution_release'] == 'NA' and 'Debian' in data:
dpkg_cmd = self.module.get_bin_path('dpkg')
if dpkg_cmd:
cmd = "%s --status tzdata|grep Provides|cut -f2 -d'-'" % dpkg_cmd
rc, out, err = self.module.run_command(cmd)
if rc == 0:
debian_facts['distribution_release'] = out.strip()
debian_version_path = '/etc/debian_version'
distdata = get_file_lines(debian_version_path)
for line in distdata:
m = re.search(r'(\d+)\.(\d+)', line.strip())
if m:
debian_facts['distribution_minor_version'] = m.groups()[1]
elif 'Ubuntu' in data:
debian_facts['distribution'] = 'Ubuntu'
# nothing else to do, Ubuntu gets correct info from python functions
elif 'SteamOS' in data:
debian_facts['distribution'] = 'SteamOS'
# nothing else to do, SteamOS gets correct info from python functions
elif path in ('/etc/lsb-release', '/etc/os-release') and ('Kali' in data or 'Parrot' in data):
if 'Kali' in data:
# Kali does not provide /etc/lsb-release anymore
debian_facts['distribution'] = 'Kali'
elif 'Parrot' in data:
debian_facts['distribution'] = 'Parrot'
release = re.search('DISTRIB_RELEASE=(.*)', data)
if release:
debian_facts['distribution_release'] = release.groups()[0]
elif 'Devuan' in data:
debian_facts['distribution'] = 'Devuan'
release = re.search(r"PRETTY_NAME=\"?[^(\"]+ \(?([^) \"]+)\)?", data)
if release:
debian_facts['distribution_release'] = release.groups()[0]
version = re.search(r"VERSION_ID=\"(.*)\"", data)
if version:
debian_facts['distribution_version'] = version.group(1)
debian_facts['distribution_major_version'] = version.group(1)
elif 'Cumulus' in data:
debian_facts['distribution'] = 'Cumulus Linux'
version = re.search(r"VERSION_ID=(.*)", data)
if version:
major, _minor, _dummy_ver = version.group(1).split(".")
debian_facts['distribution_version'] = version.group(1)
debian_facts['distribution_major_version'] = major
release = re.search(r'VERSION="(.*)"', data)
if release:
debian_facts['distribution_release'] = release.groups()[0]
elif "Mint" in data:
debian_facts['distribution'] = 'Linux Mint'
version = re.search(r"VERSION_ID=\"(.*)\"", data)
if version:
debian_facts['distribution_version'] = version.group(1)
debian_facts['distribution_major_version'] = version.group(1).split('.')[0]
elif 'UOS' in data or 'Uos' in data or 'uos' in data:
debian_facts['distribution'] = 'Uos'
release = re.search(r"VERSION_CODENAME=\"?([^\"]+)\"?", data)
if release:
debian_facts['distribution_release'] = release.groups()[0]
version = re.search(r"VERSION_ID=\"(.*)\"", data)
if version:
debian_facts['distribution_version'] = version.group(1)
debian_facts['distribution_major_version'] = version.group(1).split('.')[0]
elif 'Deepin' in data or 'deepin' in data:
debian_facts['distribution'] = 'Deepin'
release = re.search(r"VERSION_CODENAME=\"?([^\"]+)\"?", data)
if release:
debian_facts['distribution_release'] = release.groups()[0]
version = re.search(r"VERSION_ID=\"(.*)\"", data)
if version:
debian_facts['distribution_version'] = version.group(1)
debian_facts['distribution_major_version'] = version.group(1).split('.')[0]
else:
return False, debian_facts
return True, debian_facts
def parse_distribution_file_Mandriva(self, name, data, path, collected_facts):
mandriva_facts = {}
if 'Mandriva' in data:
mandriva_facts['distribution'] = 'Mandriva'
version = re.search('DISTRIB_RELEASE="(.*)"', data)
if version:
mandriva_facts['distribution_version'] = version.groups()[0]
release = re.search('DISTRIB_CODENAME="(.*)"', data)
if release:
mandriva_facts['distribution_release'] = release.groups()[0]
mandriva_facts['distribution'] = name
else:
return False, mandriva_facts
return True, mandriva_facts
def parse_distribution_file_NA(self, name, data, path, collected_facts):
na_facts = {}
for line in data.splitlines():
distribution = re.search("^NAME=(.*)", line)
if distribution and name == 'NA':
na_facts['distribution'] = distribution.group(1).strip('"')
version = re.search("^VERSION=(.*)", line)
if version and collected_facts['distribution_version'] == 'NA':
na_facts['distribution_version'] = version.group(1).strip('"')
return True, na_facts
def parse_distribution_file_Coreos(self, name, data, path, collected_facts):
coreos_facts = {}
# FIXME: pass in ro copy of facts for this kind of thing
distro = get_distribution()
if distro.lower() == 'coreos':
if not data:
# include fix from #15230, #15228
# TODO: verify this is ok for above bugs
return False, coreos_facts
release = re.search("^GROUP=(.*)", data)
if release:
coreos_facts['distribution_release'] = release.group(1).strip('"')
else:
return False, coreos_facts # TODO: remove if tested without this
return True, coreos_facts
def parse_distribution_file_Flatcar(self, name, data, path, collected_facts):
flatcar_facts = {}
distro = get_distribution()
if distro.lower() != 'flatcar':
return False, flatcar_facts
if not data:
return False, flatcar_facts
version = re.search("VERSION=(.*)", data)
if version:
flatcar_facts['distribution_major_version'] = version.group(1).strip('"').split('.')[0]
flatcar_facts['distribution_version'] = version.group(1).strip('"')
return True, flatcar_facts
def parse_distribution_file_ClearLinux(self, name, data, path, collected_facts):
clear_facts = {}
if "clearlinux" not in name.lower():
return False, clear_facts
pname = re.search('NAME="(.*)"', data)
if pname:
if 'Clear Linux' not in pname.groups()[0]:
return False, clear_facts
clear_facts['distribution'] = pname.groups()[0]
version = re.search('VERSION_ID=(.*)', data)
if version:
clear_facts['distribution_major_version'] = version.groups()[0]
clear_facts['distribution_version'] = version.groups()[0]
release = re.search('ID=(.*)', data)
if release:
clear_facts['distribution_release'] = release.groups()[0]
return True, clear_facts
def parse_distribution_file_CentOS(self, name, data, path, collected_facts):
centos_facts = {}
if 'CentOS Stream' in data:
centos_facts['distribution_release'] = 'Stream'
return True, centos_facts
if "TencentOS Server" in data:
centos_facts['distribution'] = 'TencentOS'
return True, centos_facts
return False, centos_facts
class Distribution(object):
"""
This subclass of Facts fills the distribution, distribution_version and distribution_release variables
To do so it checks the existence and content of typical files in /etc containing distribution information
This is unit tested. Please extend the tests to cover all distributions if you have them available.
"""
# keep keys in sync with Conditionals page of docs
OS_FAMILY_MAP = {'RedHat': ['RedHat', 'RHEL', 'Fedora', 'CentOS', 'Scientific', 'SLC',
'Ascendos', 'CloudLinux', 'PSBM', 'OracleLinux', 'OVS',
'OEL', 'Amazon', 'Virtuozzo', 'XenServer', 'Alibaba',
'EulerOS', 'openEuler', 'AlmaLinux', 'Rocky', 'TencentOS',
'EuroLinux', 'Kylin Linux Advanced Server'],
'Debian': ['Debian', 'Ubuntu', 'Raspbian', 'Neon', 'KDE neon',
'Linux Mint', 'SteamOS', 'Devuan', 'Kali', 'Cumulus Linux',
'Pop!_OS', 'Parrot', 'Pardus GNU/Linux', 'Uos', 'Deepin', 'OSMC'],
'Suse': ['SuSE', 'SLES', 'SLED', 'openSUSE', 'openSUSE Tumbleweed',
'SLES_SAP', 'SUSE_LINUX', 'openSUSE Leap'],
'Archlinux': ['Archlinux', 'Antergos', 'Manjaro'],
'Mandrake': ['Mandrake', 'Mandriva'],
'Solaris': ['Solaris', 'Nexenta', 'OmniOS', 'OpenIndiana', 'SmartOS'],
'Slackware': ['Slackware'],
'Altlinux': ['Altlinux'],
'SMGL': ['SMGL'],
'Gentoo': ['Gentoo', 'Funtoo'],
'Alpine': ['Alpine'],
'AIX': ['AIX'],
'HP-UX': ['HPUX'],
'Darwin': ['MacOSX'],
'FreeBSD': ['FreeBSD', 'TrueOS'],
'ClearLinux': ['Clear Linux OS', 'Clear Linux Mix'],
'DragonFly': ['DragonflyBSD', 'DragonFlyBSD', 'Gentoo/DragonflyBSD', 'Gentoo/DragonFlyBSD'],
'NetBSD': ['NetBSD'], }
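# NOTE (issue 80882): Amazon Linux 2 reports ID 'amzn' in /etc/os-release; when the
# detected distribution string is 'amzn' rather than 'Amazon', the name -> family
# lookup built below misses and os_family falls back to the raw name.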
OS_FAMILY = {}
for family, names in OS_FAMILY_MAP.items():
for name in names:
OS_FAMILY[name] = family
def __init__(self, module):
self.module = module
def get_distribution_facts(self):
distribution_facts = {}
# The platform module provides information about the running
# system/distribution. Use this as a baseline and fix buggy systems
# afterwards
system = platform.system()
distribution_facts['distribution'] = system
distribution_facts['distribution_release'] = platform.release()
distribution_facts['distribution_version'] = platform.version()
systems_implemented = ('AIX', 'HP-UX', 'Darwin', 'FreeBSD', 'OpenBSD', 'SunOS', 'DragonFly', 'NetBSD')
if system in systems_implemented:
cleanedname = system.replace('-', '')
distfunc = getattr(self, 'get_distribution_' + cleanedname)
dist_func_facts = distfunc()
distribution_facts.update(dist_func_facts)
elif system == 'Linux':
distribution_files = DistributionFiles(module=self.module)
# linux_distribution_facts = LinuxDistribution(module).get_distribution_facts()
dist_file_facts = distribution_files.process_dist_files()
distribution_facts.update(dist_file_facts)
distro = distribution_facts['distribution']
# look for an os family alias for the 'distribution'; if there isn't one, use 'distribution'
distribution_facts['os_family'] = self.OS_FAMILY.get(distro, None) or distro
return distribution_facts
def get_distribution_AIX(self):
aix_facts = {}
rc, out, err = self.module.run_command("/usr/bin/oslevel")
data = out.split('.')
aix_facts['distribution_major_version'] = data[0]
if len(data) > 1:
aix_facts['distribution_version'] = '%s.%s' % (data[0], data[1])
aix_facts['distribution_release'] = data[1]
else:
aix_facts['distribution_version'] = data[0]
return aix_facts
def get_distribution_HPUX(self):
hpux_facts = {}
rc, out, err = self.module.run_command(r"/usr/sbin/swlist |egrep 'HPUX.*OE.*[AB].[0-9]+\.[0-9]+'", use_unsafe_shell=True)
data = re.search(r'HPUX.*OE.*([AB].[0-9]+\.[0-9]+)\.([0-9]+).*', out)
if data:
hpux_facts['distribution_version'] = data.groups()[0]
hpux_facts['distribution_release'] = data.groups()[1]
return hpux_facts
def get_distribution_Darwin(self):
darwin_facts = {}
darwin_facts['distribution'] = 'MacOSX'
rc, out, err = self.module.run_command("/usr/bin/sw_vers -productVersion")
data = out.split()[-1]
if data:
darwin_facts['distribution_major_version'] = data.split('.')[0]
darwin_facts['distribution_version'] = data
return darwin_facts
def get_distribution_FreeBSD(self):
freebsd_facts = {}
freebsd_facts['distribution_release'] = platform.release()
data = re.search(r'(\d+)\.(\d+)-(RELEASE|STABLE|CURRENT|RC|PRERELEASE).*', freebsd_facts['distribution_release'])
if 'trueos' in platform.version():
freebsd_facts['distribution'] = 'TrueOS'
if data:
freebsd_facts['distribution_major_version'] = data.group(1)
freebsd_facts['distribution_version'] = '%s.%s' % (data.group(1), data.group(2))
return freebsd_facts
def get_distribution_OpenBSD(self):
openbsd_facts = {}
openbsd_facts['distribution_version'] = platform.release()
rc, out, err = self.module.run_command("/sbin/sysctl -n kern.version")
match = re.match(r'OpenBSD\s[0-9]+.[0-9]+-(\S+)\s.*', out)
if match:
openbsd_facts['distribution_release'] = match.groups()[0]
else:
openbsd_facts['distribution_release'] = 'release'
return openbsd_facts
def get_distribution_DragonFly(self):
dragonfly_facts = {
'distribution_release': platform.release()
}
rc, out, dummy = self.module.run_command("/sbin/sysctl -n kern.version")
match = re.search(r'v(\d+)\.(\d+)\.(\d+)-(RELEASE|STABLE|CURRENT).*', out)
if match:
dragonfly_facts['distribution_major_version'] = match.group(1)
dragonfly_facts['distribution_version'] = '%s.%s.%s' % match.groups()[:3]
return dragonfly_facts
def get_distribution_NetBSD(self):
netbsd_facts = {}
platform_release = platform.release()
netbsd_facts['distribution_release'] = platform_release
rc, out, dummy = self.module.run_command("/sbin/sysctl -n kern.version")
match = re.match(r'NetBSD\s(\d+)\.(\d+)\s\((GENERIC)\).*', out)
if match:
netbsd_facts['distribution_major_version'] = match.group(1)
netbsd_facts['distribution_version'] = '%s.%s' % match.groups()[:2]
else:
netbsd_facts['distribution_major_version'] = platform_release.split('.')[0]
netbsd_facts['distribution_version'] = platform_release
return netbsd_facts
def get_distribution_SMGL(self):
smgl_facts = {}
smgl_facts['distribution'] = 'Source Mage GNU/Linux'
return smgl_facts
def get_distribution_SunOS(self):
sunos_facts = {}
data = get_file_content('/etc/release').splitlines()[0]
if 'Solaris' in data:
# for solaris 10 uname_r will contain 5.10, for solaris 11 it will have 5.11
uname_r = get_uname(self.module, flags=['-r'])
ora_prefix = ''
if 'Oracle Solaris' in data:
data = data.replace('Oracle ', '')
ora_prefix = 'Oracle '
sunos_facts['distribution'] = data.split()[0]
sunos_facts['distribution_version'] = data.split()[1]
sunos_facts['distribution_release'] = ora_prefix + data
sunos_facts['distribution_major_version'] = uname_r.split('.')[1].rstrip()
return sunos_facts
uname_v = get_uname(self.module, flags=['-v'])
distribution_version = None
if 'SmartOS' in data:
sunos_facts['distribution'] = 'SmartOS'
if _file_exists('/etc/product'):
product_data = dict([l.split(': ', 1) for l in get_file_content('/etc/product').splitlines() if ': ' in l])
if 'Image' in product_data:
distribution_version = product_data.get('Image').split()[-1]
elif 'OpenIndiana' in data:
sunos_facts['distribution'] = 'OpenIndiana'
elif 'OmniOS' in data:
sunos_facts['distribution'] = 'OmniOS'
distribution_version = data.split()[-1]
elif uname_v is not None and 'NexentaOS_' in uname_v:
sunos_facts['distribution'] = 'Nexenta'
distribution_version = data.split()[-1].lstrip('v')
if sunos_facts.get('distribution', '') in ('SmartOS', 'OpenIndiana', 'OmniOS', 'Nexenta'):
sunos_facts['distribution_release'] = data.strip()
if distribution_version is not None:
sunos_facts['distribution_version'] = distribution_version
elif uname_v is not None:
sunos_facts['distribution_version'] = uname_v.splitlines()[0].strip()
return sunos_facts
return sunos_facts
class DistributionFactCollector(BaseFactCollector):
name = 'distribution'
_fact_ids = set(['distribution_version',
'distribution_release',
'distribution_major_version',
'os_family']) # type: t.Set[str]
def collect(self, module=None, collected_facts=None):
collected_facts = collected_facts or {}
facts_dict = {}
if not module:
return facts_dict
distribution = Distribution(module=module)
distro_facts = distribution.get_distribution_facts()
return distro_facts
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,053 |
Jinja in tags resolve issue
|
### Summary
For example, take this task:
```yaml
- name: "include ..."
include_tasks:
file: something.yml
apply:
#tags: "{{ item.tags }}"
when: "item.name in ansible_run_tags or item.tags | intersect(ansible_run_tags) | count > 0 or 'all' in ansible_run_tags"
loop: "{{ cfg.value.data | flatten(levels=1) }}"
tags:
- always
```
Assume the following types for the Jinja templates:
item.tags: list
item.name: string
cfg.value.data: a list of objects, each with name and tags attributes
When run like this (with the tags line commented out), it works as intended, with slightly reduced functionality.
When run as follows:
```yaml
- name: "include ..."
include_tasks:
file: something.yml
apply:
tags: "{{ item.tags }}"
#when: "item.name in ansible_run_tags or item.tags | intersect(ansible_run_tags) | count > 0 or 'all' in ansible_run_tags"
loop: "{{ cfg.value.data | flatten(levels=1) }}"
tags:
- always
```
It errors out, saying that tags expects a list of class string or class int but got class list.
This issue does not appear in normal tasks, only in include_tasks and task blocks.
The scenario where this is a drawback:
Assume we want to include generic tasks from a directory. Those tasks in turn include other tasks tagged with something. If we use the "when" directive to check tags, we have to add the include task's tag to the attribute list, because otherwise the include is never evaluated: the tasks are not included, the subtags never become part of the playbook, and it does not run as expected.
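To make the when-based check concrete, here is a trace of the reporter's condition with hypothetical values (plain Python standing in for the Jinja filters):
```python
# item and ansible_run_tags values are made up for illustration.
item = {'name': 'something', 'tags': ['TagA', 'TagB']}
ansible_run_tags = ['TagA']

cond = (
    item['name'] in ansible_run_tags                        # False
    or len(set(item['tags']) & set(ansible_run_tags)) > 0  # intersect | count > 0 -> True
    or 'all' in ansible_run_tags                            # not reached
)
print(cond)  # True, so the include is evaluated for this item
```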
### Issue Type
Bug Report
### Component Name
task.py, block.py
### Ansible Version
```console
$ ansible --version
ansible 2.10.8
config file = /home/levi/ansible/ansible.cfg
configured module search path = ['/home/levi/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0]
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ansible-config dump --only-changed
[DEPRECATION WARNING]: ALLOW_WORLD_READABLE_TMPFILES option, moved to a per plugin approach that is more flexible, use mostly the same config will work, but now controlled
from the plugin itself and not using the general constant. instead. This feature will be removed from ansible-base in version 2.14. Deprecation warnings can be disabled by
setting deprecation_warnings=False in ansible.cfg.
ALLOW_WORLD_READABLE_TMPFILES(/home/levi/ansible/ansible.cfg) = True
DEFAULT_LOAD_CALLBACK_PLUGINS(/home/levi/ansible/ansible.cfg) = True
DEFAULT_ROLES_PATH(/home/levi/ansible/ansible.cfg) = ['/home/levi/ansible/roles']
DEFAULT_SCP_IF_SSH(/home/levi/ansible/ansible.cfg) = True
DEFAULT_STDOUT_CALLBACK(/home/levi/ansible/ansible.cfg) = yaml
DEFAULT_TIMEOUT(/home/levi/ansible/ansible.cfg) = 40
DISPLAY_SKIPPED_HOSTS(/home/levi/ansible/ansible.cfg) = False
HOST_KEY_CHECKING(/home/levi/ansible/ansible.cfg) = False
```
### OS / Environment
ubuntu 20.04
But this issue is really platform independent.
### Steps to Reproduce
```yaml
- name: "include ..."
include_tasks:
file: something.yml
apply:
tags: "{{ item.tags }}"
#when: "item.name in ansible_run_tags or item.tags | intersect(ansible_run_tags) | count > 0 or 'all' in ansible_run_tags"
loop: "{{ cfg.value.data | flatten(levels=1) }}"
tags:
- always
vars:
cfg:
value:
data:
- tags:
- TagA
- TagB
name: "something"
```
### Expected Results
Apply the array of tags to the included tasks normally.
### Actual Results
```console
Error tags must be a list of (class string or class int) but got class list
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81053
|
https://github.com/ansible/ansible/pull/81624
|
304e63d76e725e8e277fe208d26fb45ca2ff903d
|
9b3ed5ec68a6edde5b061b18b9ebc603c3b87cc8
| 2023-06-13T17:28:44Z |
python
| 2023-10-03T19:07:26Z |
changelogs/fragments/81053-templated-tags-inheritance.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,053 |
Jinja in tags resolve issue
|
### Summary
For example, take this task:
```yaml
- name: "include ..."
include_tasks:
file: something.yml
apply:
#tags: "{{ item.tags }}"
when: "item.name in ansible_run_tags or item.tags | intersect(ansible_run_tags) | count > 0 or 'all' in ansible_run_tags"
loop: "{{ cfg.value.data | flatten(levels=1) }}"
tags:
- always
```
Assume the following types for the Jinja templates:
item.tags: list
item.name: string
cfg.value.data: a list of objects, each with name and tags attributes
When run like this (with the tags line commented out), it works as intended, with slightly reduced functionality.
When run as follows:
```yaml
- name: "include ..."
include_tasks:
file: something.yml
apply:
tags: "{{ item.tags }}"
#when: "item.name in ansible_run_tags or item.tags | intersect(ansible_run_tags) | count > 0 or 'all' in ansible_run_tags"
loop: "{{ cfg.value.data | flatten(levels=1) }}"
tags:
- always
```
It errors out, saying that tags expects a list of class string or class int but got class list.
This issue does not appear in normal tasks, only in include_tasks and task blocks.
The scenario where this is a drawback:
Assume we want to include generic tasks from a directory. Those tasks in turn include other tasks tagged with something. If we use the "when" directive to check tags, we have to add the include task's tag to the attribute list, because otherwise the include is never evaluated: the tasks are not included, the subtags never become part of the playbook, and it does not run as expected.
### Issue Type
Bug Report
### Component Name
task.py, block.py
### Ansible Version
```console
$ ansible --version
ansible 2.10.8
config file = /home/levi/ansible/ansible.cfg
configured module search path = ['/home/levi/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0]
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ansible-config dump --only-changed
[DEPRECATION WARNING]: ALLOW_WORLD_READABLE_TMPFILES option, moved to a per plugin approach that is more flexible, use mostly the same config will work, but now controlled
from the plugin itself and not using the general constant. instead. This feature will be removed from ansible-base in version 2.14. Deprecation warnings can be disabled by
setting deprecation_warnings=False in ansible.cfg.
ALLOW_WORLD_READABLE_TMPFILES(/home/levi/ansible/ansible.cfg) = True
DEFAULT_LOAD_CALLBACK_PLUGINS(/home/levi/ansible/ansible.cfg) = True
DEFAULT_ROLES_PATH(/home/levi/ansible/ansible.cfg) = ['/home/levi/ansible/roles']
DEFAULT_SCP_IF_SSH(/home/levi/ansible/ansible.cfg) = True
DEFAULT_STDOUT_CALLBACK(/home/levi/ansible/ansible.cfg) = yaml
DEFAULT_TIMEOUT(/home/levi/ansible/ansible.cfg) = 40
DISPLAY_SKIPPED_HOSTS(/home/levi/ansible/ansible.cfg) = False
HOST_KEY_CHECKING(/home/levi/ansible/ansible.cfg) = False
```
### OS / Environment
ubuntu 20.04
But this issue is really platform independent.
### Steps to Reproduce
```yaml
- name: "include ..."
include_tasks:
file: something.yml
apply:
tags: "{{ item.tags }}"
#when: "item.name in ansible_run_tags or item.tags | intersect(ansible_run_tags) | count > 0 or 'all' in ansible_run_tags"
loop: "{{ cfg.value.data | flatten(levels=1) }}"
tags:
- always
vars:
cfg:
value:
data:
- tags:
- TagA
- TagB
name: "something"
```
### Expected Results
Apply the array of tags to the included tasks normally.
### Actual Results
```console
Error tags must be a list of (class string or class int) but got class list
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81053
|
https://github.com/ansible/ansible/pull/81624
|
304e63d76e725e8e277fe208d26fb45ca2ff903d
|
9b3ed5ec68a6edde5b061b18b9ebc603c3b87cc8
| 2023-06-13T17:28:44Z |
python
| 2023-10-03T19:07:26Z |
lib/ansible/playbook/taggable.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from ansible.errors import AnsibleError
from ansible.module_utils.six import string_types
from ansible.playbook.attribute import FieldAttribute
from ansible.template import Templar
class Taggable:
untagged = frozenset(['untagged'])
tags = FieldAttribute(isa='list', default=list, listof=(string_types, int), extend=True)
def _load_tags(self, attr, ds):
if isinstance(ds, list):
return ds
elif isinstance(ds, string_types):
value = ds.split(',')
if isinstance(value, list):
return [x.strip() for x in value]
else:
return [ds]
else:
raise AnsibleError('tags must be specified as a list', obj=ds)
def evaluate_tags(self, only_tags, skip_tags, all_vars):
''' this checks if the current item should be executed depending on tag options '''
if self.tags:
templar = Templar(loader=self._loader, variables=all_vars)
tags = templar.template(self.tags)
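# a templated tags value can resolve to a nested list (e.g. apply: tags:
# "{{ item.tags }}" inside a loop, issue 81053); flatten one level so the
# set operations below compare individual tags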
_temp_tags = set()
for tag in tags:
if isinstance(tag, list):
_temp_tags.update(tag)
else:
_temp_tags.add(tag)
tags = _temp_tags
self.tags = list(tags)
else:
# this makes isdisjoint work for untagged
tags = self.untagged
should_run = True # by default, tasks run
if only_tags:
if 'always' in tags:
should_run = True
elif ('all' in only_tags and 'never' not in tags):
should_run = True
elif not tags.isdisjoint(only_tags):
should_run = True
elif 'tagged' in only_tags and tags != self.untagged and 'never' not in tags:
should_run = True
else:
should_run = False
if should_run and skip_tags:
# Check for tags that we need to skip
if 'all' in skip_tags:
if 'always' not in tags or 'always' in skip_tags:
should_run = False
elif not tags.isdisjoint(skip_tags):
should_run = False
elif 'tagged' in skip_tags and tags != self.untagged:
should_run = False
return should_run
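# Minimal usage sketch (hypothetical; mirrors the unit tests for this class):
#   obj.tags = [['web', 'db']]  # e.g. the result of templating "{{ item.tags }}"
#   obj.evaluate_tags(only_tags=['web'], skip_tags=[], all_vars={})  # -> True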
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,053 |
Jinja in tags resolve issue
|
### Summary
For example, take this task:
```yaml
- name: "include ..."
include_tasks:
file: something.yml
apply:
#tags: "{{ item.tags }}"
when: "item.name in ansible_run_tags or item.tags | intersect(ansible_run_tags) | count > 0 or 'all' in ansible_run_tags"
loop: "{{ cfg.value.data | flatten(levels=1) }}"
tags:
- always
```
Assume the following types for the Jinja templates:
item.tags: list
item.name: string
cfg.value.data: a list of objects, each with name and tags attributes
When run like this (with the tags line commented out), it works as intended, with slightly reduced functionality.
When run as follows:
```yaml
- name: "include ..."
include_tasks:
file: something.yml
apply:
tags: "{{ item.tags }}"
#when: "item.name in ansible_run_tags or item.tags | intersect(ansible_run_tags) | count > 0 or 'all' in ansible_run_tags"
loop: "{{ cfg.value.data | flatten(levels=1) }}"
tags:
- always
```
It errors out, saying that tags expects a list of class string or class int but got class list.
This issue does not appear in normal tasks, only in include_tasks and task blocks.
The scenario where this is a drawback:
Assume we want to include generic tasks from a directory. Those tasks in turn include other tasks tagged with something. If we use the "when" directive to check tags, we have to add the include task's tag to the attribute list, because otherwise the include is never evaluated: the tasks are not included, the subtags never become part of the playbook, and it does not run as expected.
### Issue Type
Bug Report
### Component Name
task.py, block.py
### Ansible Version
```console
$ ansible --version
ansible 2.10.8
config file = /home/levi/ansible/ansible.cfg
configured module search path = ['/home/levi/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0]
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ansible-config dump --only-changed
[DEPRECATION WARNING]: ALLOW_WORLD_READABLE_TMPFILES option, moved to a per plugin approach that is more flexible, use mostly the same config will work, but now controlled
from the plugin itself and not using the general constant. instead. This feature will be removed from ansible-base in version 2.14. Deprecation warnings can be disabled by
setting deprecation_warnings=False in ansible.cfg.
ALLOW_WORLD_READABLE_TMPFILES(/home/levi/ansible/ansible.cfg) = True
DEFAULT_LOAD_CALLBACK_PLUGINS(/home/levi/ansible/ansible.cfg) = True
DEFAULT_ROLES_PATH(/home/levi/ansible/ansible.cfg) = ['/home/levi/ansible/roles']
DEFAULT_SCP_IF_SSH(/home/levi/ansible/ansible.cfg) = True
DEFAULT_STDOUT_CALLBACK(/home/levi/ansible/ansible.cfg) = yaml
DEFAULT_TIMEOUT(/home/levi/ansible/ansible.cfg) = 40
DISPLAY_SKIPPED_HOSTS(/home/levi/ansible/ansible.cfg) = False
HOST_KEY_CHECKING(/home/levi/ansible/ansible.cfg) = False
```
### OS / Environment
ubuntu 20.04
But this issue is really platform independent.
### Steps to Reproduce
```yaml
- name: "include ..."
include_tasks:
file: something.yml
apply:
tags: "{{ item.tags }}"
#when: "item.name in ansible_run_tags or item.tags | intersect(ansible_run_tags) | count > 0 or 'all' in ansible_run_tags"
loop: "{{ cfg.value.data | flatten(levels=1) }}"
tags:
- always
vars:
cfg:
value:
data:
- tags:
- TagA
- TagB
name: "something"
```
### Expected Results
Apply the array of tags to the included tasks normally.
### Actual Results
```console
Error tags must be a list of (class string or class int) but got class list
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81053
|
https://github.com/ansible/ansible/pull/81624
|
304e63d76e725e8e277fe208d26fb45ca2ff903d
|
9b3ed5ec68a6edde5b061b18b9ebc603c3b87cc8
| 2023-06-13T17:28:44Z |
python
| 2023-10-03T19:07:26Z |
test/integration/targets/tags/runme.sh
|
#!/usr/bin/env bash
set -eux -o pipefail
# Run these using en_US.UTF-8 because list-tasks is a user output function and so it tailors its output to the
# user's locale. For unicode tags, this means replacing non-ascii chars with "?"
COMMAND=(ansible-playbook -i ../../inventory test_tags.yml -v --list-tasks)
export LC_ALL=en_US.UTF-8
# Run everything by default
[ "$("${COMMAND[@]}" | grep -F Task_with | xargs)" = \
"Task_with_tag TAGS: [tag] Task_with_always_tag TAGS: [always] Task_with_unicode_tag TAGS: [くらとみ] Task_with_list_of_tags TAGS: [café, press] Task_without_tag TAGS: [] Task_with_csv_tags TAGS: [tag1, tag2] Task_with_templated_tags TAGS: [tag3] Task_with_meta_tags TAGS: [meta_tag]" ]
# Run the exact tags, and always
[ "$("${COMMAND[@]}" --tags tag | grep -F Task_with | xargs)" = \
"Task_with_tag TAGS: [tag] Task_with_always_tag TAGS: [always]" ]
# Skip one tag
[ "$("${COMMAND[@]}" --skip-tags tag | grep -F Task_with | xargs)" = \
"Task_with_always_tag TAGS: [always] Task_with_unicode_tag TAGS: [くらとみ] Task_with_list_of_tags TAGS: [café, press] Task_without_tag TAGS: [] Task_with_csv_tags TAGS: [tag1, tag2] Task_with_templated_tags TAGS: [tag3] Task_with_meta_tags TAGS: [meta_tag]" ]
# Skip a unicode tag
[ "$("${COMMAND[@]}" --skip-tags 'くらとみ' | grep -F Task_with | xargs)" = \
"Task_with_tag TAGS: [tag] Task_with_always_tag TAGS: [always] Task_with_list_of_tags TAGS: [café, press] Task_without_tag TAGS: [] Task_with_csv_tags TAGS: [tag1, tag2] Task_with_templated_tags TAGS: [tag3] Task_with_meta_tags TAGS: [meta_tag]" ]
# Skip a meta task tag
[ "$("${COMMAND[@]}" --skip-tags meta_tag | grep -F Task_with | xargs)" = \
"Task_with_tag TAGS: [tag] Task_with_always_tag TAGS: [always] Task_with_unicode_tag TAGS: [くらとみ] Task_with_list_of_tags TAGS: [café, press] Task_without_tag TAGS: [] Task_with_csv_tags TAGS: [tag1, tag2] Task_with_templated_tags TAGS: [tag3]" ]
# Run just a unicode tag and always
[ "$("${COMMAND[@]}" --tags 'くらとみ' | grep -F Task_with | xargs)" = \
"Task_with_always_tag TAGS: [always] Task_with_unicode_tag TAGS: [くらとみ]" ]
# Run a tag from a list of tags and always
[ "$("${COMMAND[@]}" --tags café | grep -F Task_with | xargs)" = \
"Task_with_always_tag TAGS: [always] Task_with_list_of_tags TAGS: [café, press]" ]
# Run tag with never
[ "$("${COMMAND[@]}" --tags donever | grep -F Task_with | xargs)" = \
"Task_with_always_tag TAGS: [always] Task_with_never_tag TAGS: [donever, never]" ]
# Run csv tags
[ "$("${COMMAND[@]}" --tags tag1 | grep -F Task_with | xargs)" = \
"Task_with_always_tag TAGS: [always] Task_with_csv_tags TAGS: [tag1, tag2]" ]
# Run templated tags
[ "$("${COMMAND[@]}" --tags tag3 | grep -F Task_with | xargs)" = \
"Task_with_always_tag TAGS: [always] Task_with_templated_tags TAGS: [tag3]" ]
# Run meta tags
[ "$("${COMMAND[@]}" --tags meta_tag | grep -F Task_with | xargs)" = \
"Task_with_always_tag TAGS: [always] Task_with_meta_tags TAGS: [meta_tag]" ]
# Run tagged
[ "$("${COMMAND[@]}" --tags tagged | grep -F Task_with | xargs)" = \
"Task_with_tag TAGS: [tag] Task_with_always_tag TAGS: [always] Task_with_unicode_tag TAGS: [くらとみ] Task_with_list_of_tags TAGS: [café, press] Task_with_csv_tags TAGS: [tag1, tag2] Task_with_templated_tags TAGS: [tag3] Task_with_meta_tags TAGS: [meta_tag]" ]
# Run untagged
[ "$("${COMMAND[@]}" --tags untagged | grep -F Task_with | xargs)" = \
"Task_with_always_tag TAGS: [always] Task_without_tag TAGS: []" ]
# Skip 'always'
[ "$("${COMMAND[@]}" --tags untagged --skip-tags always | grep -F Task_with | xargs)" = \
"Task_without_tag TAGS: []" ]
# Test ansible_run_tags
ansible-playbook -i ../../inventory ansible_run_tags.yml -e expect=all "$@"
ansible-playbook -i ../../inventory ansible_run_tags.yml -e expect=all --tags all "$@"
ansible-playbook -i ../../inventory ansible_run_tags.yml -e expect=list --tags tag1,tag3 "$@"
ansible-playbook -i ../../inventory ansible_run_tags.yml -e expect=list --tags tag1 --tags tag3 "$@"
ansible-playbook -i ../../inventory ansible_run_tags.yml -e expect=untagged --tags untagged "$@"
ansible-playbook -i ../../inventory ansible_run_tags.yml -e expect=untagged_list --tags untagged,tag3 "$@"
ansible-playbook -i ../../inventory ansible_run_tags.yml -e expect=tagged --tags tagged "$@"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,053 |
Jinja in tags resolve issue
|
### Summary
For example, take this task:
```yaml
- name: "include ..."
include_tasks:
file: something.yml
apply:
#tags: "{{ item.tags }}"
when: "item.name in ansible_run_tags or item.tags | intersect(ansible_run_tags) | count > 0 or 'all' in ansible_run_tags"
loop: "{{ cfg.value.data | flatten(levels=1) }}"
tags:
- always
```
Assume the following types for the Jinja templates:
item.tags: list
item.name: string
cfg.value.data: a list of objects, each with name and tags attributes
When run like this (with the tags line commented out), it works as intended, with slightly reduced functionality.
When run as follows:
```yaml
- name: "include ..."
include_tasks:
file: something.yml
apply:
tags: "{{ item.tags }}"
#when: "item.name in ansible_run_tags or item.tags | intersect(ansible_run_tags) | count > 0 or 'all' in ansible_run_tags"
loop: "{{ cfg.value.data | flatten(levels=1) }}"
tags:
- always
```
It errors out, saying that tags expects a list of class string or class int but got class list.
This issue does not appear in normal tasks, only in include_tasks and task blocks.
The scenario where this is a drawback:
Assume we want to include generic tasks from a directory. Those tasks in turn include other tasks tagged with something. If we use the "when" directive to check tags, we have to add the include task's tag to the attribute list, because otherwise the include is never evaluated: the tasks are not included, the subtags never become part of the playbook, and it does not run as expected.
### Issue Type
Bug Report
### Component Name
task.py, block.py
### Ansible Version
```console
$ ansible --version
ansible 2.10.8
config file = /home/levi/ansible/ansible.cfg
configured module search path = ['/home/levi/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0]
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ansible-config dump --only-changed
[DEPRECATION WARNING]: ALLOW_WORLD_READABLE_TMPFILES option, moved to a per plugin approach that is more flexible, use mostly the same config will work, but now controlled
from the plugin itself and not using the general constant. instead. This feature will be removed from ansible-base in version 2.14. Deprecation warnings can be disabled by
setting deprecation_warnings=False in ansible.cfg.
ALLOW_WORLD_READABLE_TMPFILES(/home/levi/ansible/ansible.cfg) = True
DEFAULT_LOAD_CALLBACK_PLUGINS(/home/levi/ansible/ansible.cfg) = True
DEFAULT_ROLES_PATH(/home/levi/ansible/ansible.cfg) = ['/home/levi/ansible/roles']
DEFAULT_SCP_IF_SSH(/home/levi/ansible/ansible.cfg) = True
DEFAULT_STDOUT_CALLBACK(/home/levi/ansible/ansible.cfg) = yaml
DEFAULT_TIMEOUT(/home/levi/ansible/ansible.cfg) = 40
DISPLAY_SKIPPED_HOSTS(/home/levi/ansible/ansible.cfg) = False
HOST_KEY_CHECKING(/home/levi/ansible/ansible.cfg) = False
```
### OS / Environment
ubuntu 20.04
But this issue is really platform independent.
### Steps to Reproduce
```yaml
- name: "include ..."
include_tasks:
file: something.yml
apply:
tags: "{{ item.tags }}"
#when: "item.name in ansible_run_tags or item.tags | intersect(ansible_run_tags) | count > 0 or 'all' in ansible_run_tags"
loop: "{{ cfg.value.data | flatten(levels=1) }}"
tags:
- always
vars:
cfg:
value:
data:
- tags:
- TagA
- TagB
name: "something"
```
### Expected Results
Apply the array of tags to the included tasks normally.
### Actual Results
```console
Error tags must be a list of (class string or class int) but got class list
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81053
|
https://github.com/ansible/ansible/pull/81624
|
304e63d76e725e8e277fe208d26fb45ca2ff903d
|
9b3ed5ec68a6edde5b061b18b9ebc603c3b87cc8
| 2023-06-13T17:28:44Z |
python
| 2023-10-03T19:07:26Z |
test/integration/targets/tags/test_template_parent_tags.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,053 |
Jinja in tags resolve issue
|
### Summary
For example, take this task:
```yaml
- name: "include ..."
include_tasks:
file: something.yml
apply:
#tags: "{{ item.tags }}"
when: "item.name in ansible_run_tags or item.tags | intersect(ansible_run_tags) | count > 0 or 'all' in ansible_run_tags"
loop: "{{ cfg.value.data | flatten(levels=1) }}"
tags:
- always
```
Assume the following types for the Jinja templates:
item.tags: list
item.name: string
cfg.value.data: a list of objects, each with name and tags attributes
When run like this (with the tags line commented out), it works as intended, with slightly reduced functionality.
When run as follows:
```yaml
- name: "include ..."
include_tasks:
file: something.yml
apply:
tags: "{{ item.tags }}"
#when: "item.name in ansible_run_tags or item.tags | intersect(ansible_run_tags) | count > 0 or 'all' in ansible_run_tags"
loop: "{{ cfg.value.data | flatten(levels=1) }}"
tags:
- always
```
It errors out, saying that tags expects a list of class string or class int but got class list.
This issue does not appear in normal tasks, only in include_tasks and task blocks.
The scenario where this is a drawback:
Assume we want to include generic tasks from a directory. Those tasks in turn include other tasks tagged with something. If we use the "when" directive to check tags, we have to add the include task's tag to the attribute list, because otherwise the include is never evaluated: the tasks are not included, the subtags never become part of the playbook, and it does not run as expected.
### Issue Type
Bug Report
### Component Name
task.py, block.py
### Ansible Version
```console
$ ansible --version
ansible 2.10.8
config file = /home/levi/ansible/ansible.cfg
configured module search path = ['/home/levi/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0]
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ansible-config dump --only-changed
[DEPRECATION WARNING]: ALLOW_WORLD_READABLE_TMPFILES option, moved to a per plugin approach that is more flexible, use mostly the same config will work, but now controlled
from the plugin itself and not using the general constant. instead. This feature will be removed from ansible-base in version 2.14. Deprecation warnings can be disabled by
setting deprecation_warnings=False in ansible.cfg.
ALLOW_WORLD_READABLE_TMPFILES(/home/levi/ansible/ansible.cfg) = True
DEFAULT_LOAD_CALLBACK_PLUGINS(/home/levi/ansible/ansible.cfg) = True
DEFAULT_ROLES_PATH(/home/levi/ansible/ansible.cfg) = ['/home/levi/ansible/roles']
DEFAULT_SCP_IF_SSH(/home/levi/ansible/ansible.cfg) = True
DEFAULT_STDOUT_CALLBACK(/home/levi/ansible/ansible.cfg) = yaml
DEFAULT_TIMEOUT(/home/levi/ansible/ansible.cfg) = 40
DISPLAY_SKIPPED_HOSTS(/home/levi/ansible/ansible.cfg) = False
HOST_KEY_CHECKING(/home/levi/ansible/ansible.cfg) = False
```
### OS / Environment
Ubuntu 20.04
But this issue is really platform-independent.
### Steps to Reproduce
```yaml
- name: "include ..."
include_tasks:
file: something.yml
apply:
tags: "{{ item.tags }}"
#when: "item.name in ansible_run_tags or item.tags | intersect(ansible_run_tags) | count > 0 or 'all' in ansible_run_tags"
loop: "{{ cfg.value.data | flatten(levels=1) }}"
tags:
- always
vars:
cfg:
value:
data:
- tags:
- TagA
- TagB
name: "something"
```
### Expected Results
Apply the array of tags to the included tasks normally.
### Actual Results
```console
Error tags must be a list of (class string or class int) but got class list
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81053
|
https://github.com/ansible/ansible/pull/81624
|
304e63d76e725e8e277fe208d26fb45ca2ff903d
|
9b3ed5ec68a6edde5b061b18b9ebc603c3b87cc8
| 2023-06-13T17:28:44Z |
python
| 2023-10-03T19:07:26Z |
test/units/playbook/test_taggable.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from units.compat import unittest
from ansible.playbook.taggable import Taggable
from units.mock.loader import DictDataLoader
class TaggableTestObj(Taggable):
def __init__(self):
self._loader = DictDataLoader({})
self.tags = []
class TestTaggable(unittest.TestCase):
def assert_evaluate_equal(self, test_value, tags, only_tags, skip_tags):
taggable_obj = TaggableTestObj()
taggable_obj.tags = tags
evaluate = taggable_obj.evaluate_tags(only_tags, skip_tags, {})
self.assertEqual(test_value, evaluate)
def test_evaluate_tags_tag_in_only_tags(self):
self.assert_evaluate_equal(True, ['tag1', 'tag2'], ['tag1'], [])
def test_evaluate_tags_tag_in_skip_tags(self):
self.assert_evaluate_equal(False, ['tag1', 'tag2'], [], ['tag1'])
def test_evaluate_tags_special_always_in_object_tags(self):
self.assert_evaluate_equal(True, ['tag', 'always'], ['random'], [])
def test_evaluate_tags_tag_in_skip_tags_special_always_in_object_tags(self):
self.assert_evaluate_equal(False, ['tag', 'always'], ['random'], ['tag'])
def test_evaluate_tags_special_always_in_skip_tags_and_always_in_tags(self):
self.assert_evaluate_equal(False, ['tag', 'always'], [], ['always'])
def test_evaluate_tags_special_tagged_in_only_tags_and_object_tagged(self):
self.assert_evaluate_equal(True, ['tag'], ['tagged'], [])
def test_evaluate_tags_special_tagged_in_only_tags_and_object_untagged(self):
self.assert_evaluate_equal(False, [], ['tagged'], [])
def test_evaluate_tags_special_tagged_in_skip_tags_and_object_tagged(self):
self.assert_evaluate_equal(False, ['tag'], [], ['tagged'])
def test_evaluate_tags_special_tagged_in_skip_tags_and_object_untagged(self):
self.assert_evaluate_equal(True, [], [], ['tagged'])
def test_evaluate_tags_special_untagged_in_only_tags_and_object_tagged(self):
self.assert_evaluate_equal(False, ['tag'], ['untagged'], [])
def test_evaluate_tags_special_untagged_in_only_tags_and_object_untagged(self):
self.assert_evaluate_equal(True, [], ['untagged'], [])
def test_evaluate_tags_special_untagged_in_skip_tags_and_object_tagged(self):
self.assert_evaluate_equal(True, ['tag'], [], ['untagged'])
def test_evaluate_tags_special_untagged_in_skip_tags_and_object_untagged(self):
self.assert_evaluate_equal(False, [], [], ['untagged'])
def test_evaluate_tags_special_all_in_only_tags(self):
self.assert_evaluate_equal(True, ['tag'], ['all'], ['untagged'])
def test_evaluate_tags_special_all_in_only_tags_and_object_untagged(self):
self.assert_evaluate_equal(True, [], ['all'], [])
def test_evaluate_tags_special_all_in_skip_tags(self):
self.assert_evaluate_equal(False, ['tag'], ['tag'], ['all'])
def test_evaluate_tags_special_all_in_only_tags_and_special_all_in_skip_tags(self):
self.assert_evaluate_equal(False, ['tag'], ['all'], ['all'])
def test_evaluate_tags_special_all_in_skip_tags_and_always_in_object_tags(self):
self.assert_evaluate_equal(True, ['tag', 'always'], [], ['all'])
def test_evaluate_tags_special_all_in_skip_tags_and_special_always_in_skip_tags_and_always_in_object_tags(self):
self.assert_evaluate_equal(False, ['tag', 'always'], [], ['all', 'always'])
def test_evaluate_tags_accepts_lists(self):
self.assert_evaluate_equal(True, ['tag1', 'tag2'], ['tag2'], [])
def test_evaluate_tags_with_repeated_tags(self):
self.assert_evaluate_equal(False, ['tag', 'tag'], [], ['tag'])
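# Usage sketch appended for illustration (not part of the original test file):
# evaluate_tags() weighs the object's own tags against --tags/--skip-tags
# selections, and an object tagged 'always' runs even when other tags are
# requested, as the cases above demonstrate.
def _example_always_wins():
    obj = TaggableTestObj()
    obj.tags = ['setup', 'always']
    return obj.evaluate_tags(['deploy'], [], {})  # True, because of 'always'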
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,188 |
yum module fails with Error: Module unable to decode valid JSON on stdin.
|
### Summary
I have a role that hasn't been changed recently and still works correctly on most hosts (CentOS, Oracle Linux and SUSE).
This role, if it detects a RedHat-family host, calls ansible.builtin.yum.
And this has always worked, until suddenly one Oracle Linux 7.9 host started failing this task with the error: "Error: Module unable to decode valid JSON on stdin. Unable to figure out what parameters were passed"
```
TASK [zabbix-agent : Install package zabbix_agent2] ****************************
fatal: [xxx]: FAILED! => {"changed": false, "msg": "Error: Module unable to decode valid JSON on stdin. Unable to figure out what parameters were passed"}
```
In the system logging of that host I see no problems, and as far as I understand it the parameters are parsed just fine:
```
Jul 7 17:39:20 xxx ansible-ansible.legacy.yum: Invoked with name=['zabbix-agent2'] state=latest update_cache=True enablerepo=['\\*zabbix\\*'] allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
```
I don't see this problem on other Oracle Linux or CentOS hosts. I have no clue how to debug this, or what the cause could be.
### Issue Type
Bug Report
### Component Name
yum
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.0]
config file = None
configured module search path = ['/runner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /runner/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.16 (main, Dec 8 2022, 00:00:00) [GCC 11.3.1 20221121 (Red Hat 11.3.1-4)] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
NAME="Oracle Linux Server"
VERSION="7.9"
ID="ol"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="7.9"
PRETTY_NAME="Oracle Linux Server 7.9"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:oracle:linux:7:9:server"
HOME_URL="https://linux.oracle.com/"
BUG_REPORT_URL="https://bugzilla.oracle.com/"
ORACLE_BUGZILLA_PRODUCT="Oracle Linux 7"
ORACLE_BUGZILLA_PRODUCT_VERSION=7.9
ORACLE_SUPPORT_PRODUCT="Oracle Linux"
ORACLE_SUPPORT_PRODUCT_VERSION=7.9
### Steps to Reproduce
```yaml
- name: Install package {{ zbx_agent_generation }}
become: true
ansible.builtin.yum:
name: "{{ __zbx_agent_packages[zbx_agent_generation] }}"
state: latest
update_cache: true
enablerepo: "\\*zabbix\\*"
retries: 3
notify: restart zabbix-agent
```
### Expected Results
The requested package to be installed
### Actual Results
```console
<xxx> ESTABLISH SSH CONNECTION FOR USER: ops
<xxx> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ops"' -o ConnectTimeout=10 -o 'ControlPath="/runner/cp/28b2e02311"' -tt xxx '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-cceddylaroytysqkptljeuuefrajsuvd ; /usr/bin/python3.6 /home/ops/.ansible/tmp/ansible-tmp-1688745858.7127638-451-200846650582138/AnsiballZ_yum.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<xxx> (1, b'\r\n{"msg": "Error: Module unable to decode valid JSON on stdin. Unable to figure out what parameters were passed", "failed": true}\r\n', b'Shared connection to xxx closed.\r\n')
<xxx> Failed to connect to the host via ssh: Shared connection to xxx closed.
<xxx> ESTABLISH SSH CONNECTION FOR USER: ops
<xxx> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ops"' -o ConnectTimeout=10 -o 'ControlPath="/runner/cp/28b2e02311"' xxx '/bin/sh -c '"'"'rm -f -r /home/ops/.ansible/tmp/ansible-tmp-1688745858.7127638-451-200846650582138/ > /dev/null 2>&1 && sleep 0'"'"''
<xxx> (0, b'', b'')
fatal: [xxx]: FAILED! => {
"changed": false,
"msg": "Error: Module unable to decode valid JSON on stdin. Unable to figure out what parameters were passed"
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81188
|
https://github.com/ansible/ansible/pull/81554
|
d67d8bd823d588e5f617aba25ed43e96ee32466f
|
c0eefa955a7292ba61fe6656eba51ebbf97e553e
| 2023-07-07T16:09:00Z |
python
| 2023-10-04T14:49:03Z |
changelogs/fragments/81188_better_error.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,188 |
yum module fails with Error: Module unable to decode valid JSON on stdin.
|
### Summary
I have a role that hasn't been changed recently and still works correctly on most hosts (CentOS, Oracle Linux and SUSE).
This role, if it detects a RedHat-family host, calls ansible.builtin.yum.
And this has always worked, until suddenly one Oracle Linux 7.9 host started failing this task with the error: "Error: Module unable to decode valid JSON on stdin. Unable to figure out what parameters were passed"
```
TASK [zabbix-agent : Install package zabbix_agent2] ****************************
fatal: [xxx]: FAILED! => {"changed": false, "msg": "Error: Module unable to decode valid JSON on stdin. Unable to figure out what parameters were passed"}
```
In the system logging of that host I see no problems, and as far as I understand it the parameters are parsed just fine:
```
Jul 7 17:39:20 xxx ansible-ansible.legacy.yum: Invoked with name=['zabbix-agent2'] state=latest update_cache=True enablerepo=['\\*zabbix\\*'] allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
```
I don't see this problem on other Oracle Linux or CentOS hosts. I have no clue how to debug this, or what the cause could be.
### Issue Type
Bug Report
### Component Name
yum
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.0]
config file = None
configured module search path = ['/runner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /runner/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.16 (main, Dec 8 2022, 00:00:00) [GCC 11.3.1 20221121 (Red Hat 11.3.1-4)] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
NAME="Oracle Linux Server"
VERSION="7.9"
ID="ol"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="7.9"
PRETTY_NAME="Oracle Linux Server 7.9"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:oracle:linux:7:9:server"
HOME_URL="https://linux.oracle.com/"
BUG_REPORT_URL="https://bugzilla.oracle.com/"
ORACLE_BUGZILLA_PRODUCT="Oracle Linux 7"
ORACLE_BUGZILLA_PRODUCT_VERSION=7.9
ORACLE_SUPPORT_PRODUCT="Oracle Linux"
ORACLE_SUPPORT_PRODUCT_VERSION=7.9
### Steps to Reproduce
```yaml
- name: Install package {{ zbx_agent_generation }}
become: true
ansible.builtin.yum:
name: "{{ __zbx_agent_packages[zbx_agent_generation] }}"
state: latest
update_cache: true
enablerepo: "\\*zabbix\\*"
retries: 3
notify: restart zabbix-agent
```
### Expected Results
The requested package to be installed
### Actual Results
```console
<xxx> ESTABLISH SSH CONNECTION FOR USER: ops
<xxx> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ops"' -o ConnectTimeout=10 -o 'ControlPath="/runner/cp/28b2e02311"' -tt xxx '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-cceddylaroytysqkptljeuuefrajsuvd ; /usr/bin/python3.6 /home/ops/.ansible/tmp/ansible-tmp-1688745858.7127638-451-200846650582138/AnsiballZ_yum.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<xxx> (1, b'\r\n{"msg": "Error: Module unable to decode valid JSON on stdin. Unable to figure out what parameters were passed", "failed": true}\r\n', b'Shared connection to xxx closed.\r\n')
<xxx> Failed to connect to the host via ssh: Shared connection to xxx closed.
<xxx> ESTABLISH SSH CONNECTION FOR USER: ops
<xxx> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ops"' -o ConnectTimeout=10 -o 'ControlPath="/runner/cp/28b2e02311"' xxx '/bin/sh -c '"'"'rm -f -r /home/ops/.ansible/tmp/ansible-tmp-1688745858.7127638-451-200846650582138/ > /dev/null 2>&1 && sleep 0'"'"''
<xxx> (0, b'', b'')
fatal: [xxx]: FAILED! => {
"changed": false,
"msg": "Error: Module unable to decode valid JSON on stdin. Unable to figure out what parameters were passed"
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81188
|
https://github.com/ansible/ansible/pull/81554
|
d67d8bd823d588e5f617aba25ed43e96ee32466f
|
c0eefa955a7292ba61fe6656eba51ebbf97e553e
| 2023-07-07T16:09:00Z |
python
| 2023-10-04T14:49:03Z |
lib/ansible/module_utils/basic.py
|
# Copyright (c), Michael DeHaan <[email protected]>, 2012-2013
# Copyright (c), Toshio Kuratomi <[email protected]> 2016
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
from __future__ import annotations
import json
import sys
# Used for determining if the system is running a new enough python version
# and should only restrict on our documented minimum versions
_PY_MIN = (3, 7)
if sys.version_info < _PY_MIN:
print(json.dumps(dict(
failed=True,
msg=f"ansible-core requires a minimum of Python version {'.'.join(map(str, _PY_MIN))}. Current version: {''.join(sys.version.splitlines())}",
)))
sys.exit(1)
# Ansible modules can be written in any language.
# The functions available here can be used to do many common tasks,
# to simplify development of Python modules.
import __main__
import atexit
import errno
import datetime
import grp
import fcntl
import locale
import os
import pwd
import platform
import re
import select
import shlex
import shutil
import signal
import stat
import subprocess
import tempfile
import time
import traceback
import types
from itertools import chain, repeat
try:
import syslog
HAS_SYSLOG = True
except ImportError:
HAS_SYSLOG = False
try:
from systemd import journal, daemon as systemd_daemon
# Makes sure that systemd.journal has method sendv()
# Double check that journal has method sendv (some packages don't)
# check if the system is running under systemd
has_journal = hasattr(journal, 'sendv') and systemd_daemon.booted()
except (ImportError, AttributeError):
# AttributeError would be caused from use of .booted() if wrong systemd
has_journal = False
HAVE_SELINUX = False
try:
from ansible.module_utils.compat import selinux
HAVE_SELINUX = True
except ImportError:
pass
# Python2 & 3 way to get NoneType
NoneType = type(None)
from ansible.module_utils.compat import selectors
from ._text import to_native, to_bytes, to_text
from ansible.module_utils.common.text.converters import (
jsonify,
container_to_bytes as json_dict_unicode_to_bytes,
container_to_text as json_dict_bytes_to_unicode,
)
from ansible.module_utils.common.arg_spec import ModuleArgumentSpecValidator
from ansible.module_utils.common.text.formatters import (
lenient_lowercase,
bytes_to_human,
human_to_bytes,
SIZE_RANGES,
)
import hashlib
def _get_available_hash_algorithms():
"""Return a dictionary of available hash function names and their associated function."""
try:
# Algorithms available in Python 2.7.9+ and Python 3.2+
# https://docs.python.org/2.7/library/hashlib.html#hashlib.algorithms_available
# https://docs.python.org/3.2/library/hashlib.html#hashlib.algorithms_available
algorithm_names = hashlib.algorithms_available
except AttributeError:
# Algorithms in Python 2.7.x (used only for Python 2.7.0 through 2.7.8)
# https://docs.python.org/2.7/library/hashlib.html#hashlib.hashlib.algorithms
algorithm_names = set(hashlib.algorithms)
algorithms = {}
for algorithm_name in algorithm_names:
algorithm_func = getattr(hashlib, algorithm_name, None)
if algorithm_func:
try:
# Make sure the algorithm is actually available for use.
# Not all algorithms listed as available are actually usable.
# For example, md5 is not available in FIPS mode.
algorithm_func()
except Exception:
pass
else:
algorithms[algorithm_name] = algorithm_func
return algorithms
AVAILABLE_HASH_ALGORITHMS = _get_available_hash_algorithms()
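def _example_hash_bytes(data):
    # Illustrative sketch, not part of the original file: hash bytes with one
    # of the algorithms detected above. 'sha256' is assumed to be available
    # (md5, by contrast, may be missing in FIPS mode, as noted above).
    return AVAILABLE_HASH_ALGORITHMS['sha256'](data).hexdigest()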
from ansible.module_utils.six.moves.collections_abc import (
KeysView,
Mapping, MutableMapping,
Sequence, MutableSequence,
Set, MutableSet,
)
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.file import (
_PERM_BITS as PERM_BITS,
_EXEC_PERM_BITS as EXEC_PERM_BITS,
_DEFAULT_PERM as DEFAULT_PERM,
is_executable,
format_attributes,
get_flags_from_attributes,
FILE_ATTRIBUTES,
)
from ansible.module_utils.common.sys_info import (
get_distribution,
get_distribution_version,
get_platform_subclass,
)
from ansible.module_utils.pycompat24 import get_exception, literal_eval
from ansible.module_utils.common.parameters import (
env_fallback,
remove_values,
sanitize_keys,
DEFAULT_TYPE_VALIDATORS,
PASS_VARS,
PASS_BOOLS,
)
from ansible.module_utils.errors import AnsibleFallbackNotFound, AnsibleValidationErrorMultiple, UnsupportedError
from ansible.module_utils.six import (
PY2,
PY3,
b,
binary_type,
integer_types,
iteritems,
string_types,
text_type,
)
from ansible.module_utils.six.moves import map, reduce, shlex_quote
from ansible.module_utils.common.validation import (
check_missing_parameters,
safe_eval,
)
from ansible.module_utils.common._utils import get_all_subclasses as _get_all_subclasses
from ansible.module_utils.parsing.convert_bool import BOOLEANS, BOOLEANS_FALSE, BOOLEANS_TRUE, boolean
from ansible.module_utils.common.warnings import (
deprecate,
get_deprecation_messages,
get_warning_messages,
warn,
)
# Note: When getting Sequence from collections, it matches with strings. If
# this matters, make sure to check for strings before checking for sequencetype
SEQUENCETYPE = frozenset, KeysView, Sequence
PASSWORD_MATCH = re.compile(r'^(?:.+[-_\s])?pass(?:[-_\s]?(?:word|phrase|wrd|wd)?)(?:[-_\s].+)?$', re.I)
imap = map
try:
# Python 2
unicode # type: ignore[used-before-def] # pylint: disable=used-before-assignment
except NameError:
# Python 3
unicode = text_type
try:
# Python 2
basestring # type: ignore[used-before-def,has-type] # pylint: disable=used-before-assignment
except NameError:
# Python 3
basestring = string_types
_literal_eval = literal_eval
# End of deprecated names
# Internal global holding passed in params. This is consulted in case
# multiple AnsibleModules are created. Otherwise each AnsibleModule would
# attempt to read from stdin. Other code should not use this directly as it
# is an internal implementation detail
_ANSIBLE_ARGS = None
FILE_COMMON_ARGUMENTS = dict(
# These are things we want. About setting metadata (mode, ownership, permissions in general) on
# created files (these are used by set_fs_attributes_if_different and included in
# load_file_common_arguments)
mode=dict(type='raw'),
owner=dict(type='str'),
group=dict(type='str'),
seuser=dict(type='str'),
serole=dict(type='str'),
selevel=dict(type='str'),
setype=dict(type='str'),
attributes=dict(type='str', aliases=['attr']),
unsafe_writes=dict(type='bool', default=False, fallback=(env_fallback, ['ANSIBLE_UNSAFE_WRITES'])), # should be available to any module using atomic_move
)
PASSWD_ARG_RE = re.compile(r'^[-]{0,2}pass[-]?(word|wd)?')
# Used for parsing symbolic file perms
MODE_OPERATOR_RE = re.compile(r'[+=-]')
USERS_RE = re.compile(r'^[ugo]+$')
PERMS_RE = re.compile(r'^[rwxXstugo]*$')
#
# Deprecated functions
#
def get_platform():
'''
**Deprecated** Use :py:func:`platform.system` directly.
:returns: Name of the platform the module is running on in a native string
Returns a native string that labels the platform ("Linux", "Solaris", etc). Currently, this is
the result of calling :py:func:`platform.system`.
'''
return platform.system()
# End deprecated functions
#
# Compat shims
#
def load_platform_subclass(cls, *args, **kwargs):
"""**Deprecated**: Use ansible.module_utils.common.sys_info.get_platform_subclass instead"""
platform_cls = get_platform_subclass(cls)
return super(cls, platform_cls).__new__(platform_cls)
def get_all_subclasses(cls):
"""**Deprecated**: Use ansible.module_utils.common._utils.get_all_subclasses instead"""
return list(_get_all_subclasses(cls))
# End compat shims
def heuristic_log_sanitize(data, no_log_values=None):
''' Remove strings that look like passwords from log messages '''
# Currently filters:
# user:pass@foo/whatever and http://username:pass@wherever/foo
# This code has false positives and consumes parts of logs that are
# not passwds
# begin: start of a passwd containing string
# end: end of a passwd containing string
# sep: char between user and passwd
# prev_begin: where in the overall string to start a search for
# a passwd
# sep_search_end: where in the string to end a search for the sep
data = to_native(data)
output = []
begin = len(data)
prev_begin = begin
sep = 1
while sep:
# Find the potential end of a passwd
try:
end = data.rindex('@', 0, begin)
except ValueError:
# No passwd in the rest of the data
output.insert(0, data[0:begin])
break
# Search for the beginning of a passwd
sep = None
sep_search_end = end
while not sep:
# URL-style username+password
try:
begin = data.rindex('://', 0, sep_search_end)
except ValueError:
# No url style in the data, check for ssh style in the
# rest of the string
begin = 0
# Search for separator
try:
sep = data.index(':', begin + 3, end)
except ValueError:
# No separator; choices:
if begin == 0:
# Searched the whole string so there's no password
# here. Return the remaining data
output.insert(0, data[0:prev_begin])
break
# Search for a different beginning of the password field.
sep_search_end = begin
continue
if sep:
# Password was found; remove it.
output.insert(0, data[end:prev_begin])
output.insert(0, '********')
output.insert(0, data[begin:sep + 1])
prev_begin = begin
output = ''.join(output)
if no_log_values:
output = remove_values(output, no_log_values)
return output
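def _example_sanitize_credentials():
    # Illustrative sketch, not part of the original file: URL-style
    # credentials are masked before a message reaches syslog/journald.
    # Returns 'fetching http://user:********@example.com/repo'.
    return heuristic_log_sanitize('fetching http://user:[email protected]/repo')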
def _load_params():
''' read the modules parameters and store them globally.
This function may be needed for certain very dynamic custom modules which
want to process the parameters that are being handed the module. Since
this is so closely tied to the implementation of modules we cannot
guarantee API stability for it (it may change between versions) however we
will try not to break it gratuitously. It is certainly more future-proof
to call this function and consume its outputs than to implement the logic
inside it as a copy in your own code.
'''
global _ANSIBLE_ARGS
if _ANSIBLE_ARGS is not None:
buffer = _ANSIBLE_ARGS
else:
# debug overrides to read args from file or cmdline
# Avoid tracebacks when locale is non-utf8
# We control the args and we pass them as utf8
if len(sys.argv) > 1:
if os.path.isfile(sys.argv[1]):
fd = open(sys.argv[1], 'rb')
buffer = fd.read()
fd.close()
else:
buffer = sys.argv[1]
if PY3:
buffer = buffer.encode('utf-8', errors='surrogateescape')
# default case, read from stdin
else:
if PY2:
buffer = sys.stdin.read()
else:
buffer = sys.stdin.buffer.read()
_ANSIBLE_ARGS = buffer
try:
params = json.loads(buffer.decode('utf-8'))
except ValueError:
# This helper used too early for fail_json to work.
print('\n{"msg": "Error: Module unable to decode valid JSON on stdin. Unable to figure out what parameters were passed", "failed": true}')
sys.exit(1)
if PY2:
params = json_dict_unicode_to_bytes(params)
try:
return params['ANSIBLE_MODULE_ARGS']
except KeyError:
# This helper does not have access to fail_json so we have to print
# json output on our own.
print('\n{"msg": "Error: Module unable to locate ANSIBLE_MODULE_ARGS in json data from stdin. Unable to figure out what parameters were passed", '
'"failed": true}')
sys.exit(1)
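def _example_wrapped_args():
    # Illustrative sketch, not part of the original file: module parameters
    # arrive on stdin as JSON wrapped under ANSIBLE_MODULE_ARGS, which
    # _load_params() unwraps above. Anything else on stdin produces the
    # "unable to decode valid JSON on stdin" error seen in the issue report.
    return json.dumps({'ANSIBLE_MODULE_ARGS': {'name': 'zabbix-agent2', 'state': 'latest'}})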
def missing_required_lib(library, reason=None, url=None):
hostname = platform.node()
msg = "Failed to import the required Python library (%s) on %s's Python %s." % (library, hostname, sys.executable)
if reason:
msg += " This is required %s." % reason
if url:
msg += " See %s for more info." % url
msg += (" Please read the module documentation and install it in the appropriate location."
" If the required library is installed, but Ansible is using the wrong Python interpreter,"
" please consult the documentation on ansible_python_interpreter")
return msg
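def _example_missing_lib(module):
    # Illustrative sketch, not part of the original file: the usual pattern
    # for reporting an absent Python dependency ('requests' is just an
    # example library here).
    try:
        import requests  # noqa: F401
    except ImportError:
        module.fail_json(msg=missing_required_lib('requests', reason='to talk to the HTTP API'))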
class AnsibleModule(object):
def __init__(self, argument_spec, bypass_checks=False, no_log=False,
mutually_exclusive=None, required_together=None,
required_one_of=None, add_file_common_args=False,
supports_check_mode=False, required_if=None, required_by=None):
'''
Common code for quickly building an ansible module in Python
(although you can write modules with anything that can return JSON).
See :ref:`developing_modules_general` for a general introduction
and :ref:`developing_program_flow_modules` for more detailed explanation.
'''
self._name = os.path.basename(__file__) # initialize name until we can parse from options
self.argument_spec = argument_spec
self.supports_check_mode = supports_check_mode
self.check_mode = False
self.bypass_checks = bypass_checks
self.no_log = no_log
self.mutually_exclusive = mutually_exclusive
self.required_together = required_together
self.required_one_of = required_one_of
self.required_if = required_if
self.required_by = required_by
self.cleanup_files = []
self._debug = False
self._diff = False
self._socket_path = None
self._shell = None
self._syslog_facility = 'LOG_USER'
self._verbosity = 0
# May be used to set modifications to the environment for any
# run_command invocation
self.run_command_environ_update = {}
self._clean = {}
self._string_conversion_action = ''
self.aliases = {}
self._legal_inputs = []
self._options_context = list()
self._tmpdir = None
if add_file_common_args:
for k, v in FILE_COMMON_ARGUMENTS.items():
if k not in self.argument_spec:
self.argument_spec[k] = v
# Save parameter values that should never be logged
self.no_log_values = set()
# check the locale as set by the current environment, and reset to
# a known valid (LANG=C) if it's an invalid/unavailable locale
self._check_locale()
self._load_params()
self._set_internal_properties()
self.validator = ModuleArgumentSpecValidator(self.argument_spec,
self.mutually_exclusive,
self.required_together,
self.required_one_of,
self.required_if,
self.required_by,
)
self.validation_result = self.validator.validate(self.params)
self.params.update(self.validation_result.validated_parameters)
self.no_log_values.update(self.validation_result._no_log_values)
self.aliases.update(self.validation_result._aliases)
try:
error = self.validation_result.errors[0]
except IndexError:
error = None
# Fail for validation errors, even in check mode
if error:
msg = self.validation_result.errors.msg
if isinstance(error, UnsupportedError):
msg = "Unsupported parameters for ({name}) {kind}: {msg}".format(name=self._name, kind='module', msg=msg)
self.fail_json(msg=msg)
if self.check_mode and not self.supports_check_mode:
self.exit_json(skipped=True, msg="remote module (%s) does not support check mode" % self._name)
# This is for backwards compatibility only.
self._CHECK_ARGUMENT_TYPES_DISPATCHER = DEFAULT_TYPE_VALIDATORS
if not self.no_log:
self._log_invocation()
# selinux state caching
self._selinux_enabled = None
self._selinux_mls_enabled = None
self._selinux_initial_context = None
# finally, make sure we're in a sane working dir
self._set_cwd()
@property
def tmpdir(self):
# if _ansible_tmpdir was not set and we have a remote_tmp,
# the module needs to create it and clean it up once finished.
# otherwise we create our own module tmp dir from the system defaults
if self._tmpdir is None:
basedir = None
if self._remote_tmp is not None:
basedir = os.path.expanduser(os.path.expandvars(self._remote_tmp))
if basedir is not None and not os.path.exists(basedir):
try:
os.makedirs(basedir, mode=0o700)
except (OSError, IOError) as e:
self.warn("Unable to use %s as temporary directory, "
"failing back to system: %s" % (basedir, to_native(e)))
basedir = None
else:
self.warn("Module remote_tmp %s did not exist and was "
"created with a mode of 0700, this may cause"
" issues when running as another user. To "
"avoid this, create the remote_tmp dir with "
"the correct permissions manually" % basedir)
basefile = "ansible-moduletmp-%s-" % time.time()
try:
tmpdir = tempfile.mkdtemp(prefix=basefile, dir=basedir)
except (OSError, IOError) as e:
self.fail_json(
msg="Failed to create remote module tmp path at dir %s "
"with prefix %s: %s" % (basedir, basefile, to_native(e))
)
if not self._keep_remote_files:
atexit.register(shutil.rmtree, tmpdir)
self._tmpdir = tmpdir
return self._tmpdir
def warn(self, warning):
warn(warning)
self.log('[WARNING] %s' % warning)
def deprecate(self, msg, version=None, date=None, collection_name=None):
if version is not None and date is not None:
raise AssertionError("implementation error -- version and date must not both be set")
deprecate(msg, version=version, date=date, collection_name=collection_name)
# For compatibility, we accept that neither version nor date is set,
        # and treat that the same as if version had been set
if date is not None:
self.log('[DEPRECATION WARNING] %s %s' % (msg, date))
else:
self.log('[DEPRECATION WARNING] %s %s' % (msg, version))
def load_file_common_arguments(self, params, path=None):
'''
many modules deal with files, this encapsulates common
options that the file module accepts such that it is directly
available to all modules and they can share code.
Allows to overwrite the path/dest module argument by providing path.
'''
if path is None:
path = params.get('path', params.get('dest', None))
if path is None:
return {}
else:
path = os.path.expanduser(os.path.expandvars(path))
b_path = to_bytes(path, errors='surrogate_or_strict')
# if the path is a symlink, and we're following links, get
# the target of the link instead for testing
if params.get('follow', False) and os.path.islink(b_path):
b_path = os.path.realpath(b_path)
path = to_native(b_path)
mode = params.get('mode', None)
owner = params.get('owner', None)
group = params.get('group', None)
# selinux related options
seuser = params.get('seuser', None)
serole = params.get('serole', None)
setype = params.get('setype', None)
selevel = params.get('selevel', None)
secontext = [seuser, serole, setype]
if self.selinux_mls_enabled():
secontext.append(selevel)
default_secontext = self.selinux_default_context(path)
for i in range(len(default_secontext)):
if i is not None and secontext[i] == '_default':
secontext[i] = default_secontext[i]
attributes = params.get('attributes', None)
return dict(
path=path, mode=mode, owner=owner, group=group,
seuser=seuser, serole=serole, setype=setype,
selevel=selevel, secontext=secontext, attributes=attributes,
)
# Detect whether using selinux that is MLS-aware.
# While this means you can set the level/range with
# selinux.lsetfilecon(), it may or may not mean that you
# will get the selevel as part of the context returned
# by selinux.lgetfilecon().
def selinux_mls_enabled(self):
if self._selinux_mls_enabled is None:
self._selinux_mls_enabled = HAVE_SELINUX and selinux.is_selinux_mls_enabled() == 1
return self._selinux_mls_enabled
def selinux_enabled(self):
if self._selinux_enabled is None:
self._selinux_enabled = HAVE_SELINUX and selinux.is_selinux_enabled() == 1
return self._selinux_enabled
# Determine whether we need a placeholder for selevel/mls
def selinux_initial_context(self):
if self._selinux_initial_context is None:
self._selinux_initial_context = [None, None, None]
if self.selinux_mls_enabled():
self._selinux_initial_context.append(None)
return self._selinux_initial_context
# If selinux fails to find a default, return an array of None
def selinux_default_context(self, path, mode=0):
context = self.selinux_initial_context()
if not self.selinux_enabled():
return context
try:
ret = selinux.matchpathcon(to_native(path, errors='surrogate_or_strict'), mode)
except OSError:
return context
if ret[0] == -1:
return context
# Limit split to 4 because the selevel, the last in the list,
# may contain ':' characters
context = ret[1].split(':', 3)
return context
def selinux_context(self, path):
context = self.selinux_initial_context()
if not self.selinux_enabled():
return context
try:
ret = selinux.lgetfilecon_raw(to_native(path, errors='surrogate_or_strict'))
except OSError as e:
if e.errno == errno.ENOENT:
self.fail_json(path=path, msg='path %s does not exist' % path)
else:
self.fail_json(path=path, msg='failed to retrieve selinux context')
if ret[0] == -1:
return context
# Limit split to 4 because the selevel, the last in the list,
# may contain ':' characters
context = ret[1].split(':', 3)
return context
def user_and_group(self, path, expand=True):
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
st = os.lstat(b_path)
uid = st.st_uid
gid = st.st_gid
return (uid, gid)
def find_mount_point(self, path):
'''
        Takes a path and returns its mount point
:param path: a string type with a filesystem path
:returns: the path to the mount point as a text type
'''
b_path = os.path.realpath(to_bytes(os.path.expanduser(os.path.expandvars(path)), errors='surrogate_or_strict'))
while not os.path.ismount(b_path):
b_path = os.path.dirname(b_path)
return to_text(b_path, errors='surrogate_or_strict')
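    def _example_find_mount_point(self):
        # Illustrative sketch, not part of the original file: the path is
        # expanded and resolved, then walked upwards until os.path.ismount()
        # is true, e.g. '/home/user/file' -> '/home' on a split /home mount.
        return self.find_mount_point('~/file')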
def is_special_selinux_path(self, path):
"""
Returns a tuple containing (True, selinux_context) if the given path is on a
NFS or other 'special' fs mount point, otherwise the return will be (False, None).
"""
try:
f = open('/proc/mounts', 'r')
mount_data = f.readlines()
f.close()
except Exception:
return (False, None)
path_mount_point = self.find_mount_point(path)
for line in mount_data:
(device, mount_point, fstype, options, rest) = line.split(' ', 4)
if to_bytes(path_mount_point) == to_bytes(mount_point):
for fs in self._selinux_special_fs:
if fs in fstype:
special_context = self.selinux_context(path_mount_point)
return (True, special_context)
return (False, None)
def set_default_selinux_context(self, path, changed):
if not self.selinux_enabled():
return changed
context = self.selinux_default_context(path)
return self.set_context_if_different(path, context, False)
def set_context_if_different(self, path, context, changed, diff=None):
if not self.selinux_enabled():
return changed
if self.check_file_absent_if_check_mode(path):
return True
cur_context = self.selinux_context(path)
new_context = list(cur_context)
# Iterate over the current context instead of the
# argument context, which may have selevel.
(is_special_se, sp_context) = self.is_special_selinux_path(path)
if is_special_se:
new_context = sp_context
else:
for i in range(len(cur_context)):
if len(context) > i:
if context[i] is not None and context[i] != cur_context[i]:
new_context[i] = context[i]
elif context[i] is None:
new_context[i] = cur_context[i]
if cur_context != new_context:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['secontext'] = cur_context
if 'after' not in diff:
diff['after'] = {}
diff['after']['secontext'] = new_context
try:
if self.check_mode:
return True
rc = selinux.lsetfilecon(to_native(path), ':'.join(new_context))
except OSError as e:
self.fail_json(path=path, msg='invalid selinux context: %s' % to_native(e),
new_context=new_context, cur_context=cur_context, input_was=context)
if rc != 0:
self.fail_json(path=path, msg='set selinux context failed')
changed = True
return changed
def set_owner_if_different(self, path, owner, changed, diff=None, expand=True):
if owner is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
orig_uid, orig_gid = self.user_and_group(b_path, expand)
try:
uid = int(owner)
except ValueError:
try:
uid = pwd.getpwnam(owner).pw_uid
except KeyError:
path = to_text(b_path)
self.fail_json(path=path, msg='chown failed: failed to look up user %s' % owner)
if orig_uid != uid:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['owner'] = orig_uid
if 'after' not in diff:
diff['after'] = {}
diff['after']['owner'] = uid
if self.check_mode:
return True
try:
os.lchown(b_path, uid, -1)
except (IOError, OSError) as e:
path = to_text(b_path)
self.fail_json(path=path, msg='chown failed: %s' % (to_text(e)))
changed = True
return changed
def set_group_if_different(self, path, group, changed, diff=None, expand=True):
if group is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
orig_uid, orig_gid = self.user_and_group(b_path, expand)
try:
gid = int(group)
except ValueError:
try:
gid = grp.getgrnam(group).gr_gid
except KeyError:
path = to_text(b_path)
self.fail_json(path=path, msg='chgrp failed: failed to look up group %s' % group)
if orig_gid != gid:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['group'] = orig_gid
if 'after' not in diff:
diff['after'] = {}
diff['after']['group'] = gid
if self.check_mode:
return True
try:
os.lchown(b_path, -1, gid)
except OSError:
path = to_text(b_path)
self.fail_json(path=path, msg='chgrp failed')
changed = True
return changed
def set_mode_if_different(self, path, mode, changed, diff=None, expand=True):
if mode is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
path_stat = os.lstat(b_path)
if not isinstance(mode, int):
try:
mode = int(mode, 8)
except Exception:
try:
mode = self._symbolic_mode_to_octal(path_stat, mode)
except Exception as e:
path = to_text(b_path)
self.fail_json(path=path,
msg="mode must be in octal or symbolic form",
details=to_native(e))
if mode != stat.S_IMODE(mode):
            # prevent mode from having extra info or being an invalid long number
path = to_text(b_path)
self.fail_json(path=path, msg="Invalid mode supplied, only permission info is allowed", details=mode)
prev_mode = stat.S_IMODE(path_stat.st_mode)
if prev_mode != mode:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['mode'] = '0%03o' % prev_mode
if 'after' not in diff:
diff['after'] = {}
diff['after']['mode'] = '0%03o' % mode
if self.check_mode:
return True
# FIXME: comparison against string above will cause this to be executed
# every time
try:
if hasattr(os, 'lchmod'):
os.lchmod(b_path, mode)
else:
if not os.path.islink(b_path):
os.chmod(b_path, mode)
else:
# Attempt to set the perms of the symlink but be
# careful not to change the perms of the underlying
# file while trying
underlying_stat = os.stat(b_path)
os.chmod(b_path, mode)
new_underlying_stat = os.stat(b_path)
if underlying_stat.st_mode != new_underlying_stat.st_mode:
os.chmod(b_path, stat.S_IMODE(underlying_stat.st_mode))
except OSError as e:
if os.path.islink(b_path) and e.errno in (
errno.EACCES, # can't access symlink in sticky directory (stat)
errno.EPERM, # can't set mode on symbolic links (chmod)
errno.EROFS, # can't set mode on read-only filesystem
):
pass
elif e.errno in (errno.ENOENT, errno.ELOOP): # Can't set mode on broken symbolic links
pass
else:
raise
except Exception as e:
path = to_text(b_path)
self.fail_json(path=path, msg='chmod failed', details=to_native(e),
exception=traceback.format_exc())
path_stat = os.lstat(b_path)
new_mode = stat.S_IMODE(path_stat.st_mode)
if new_mode != prev_mode:
changed = True
return changed
def set_attributes_if_different(self, path, attributes, changed, diff=None, expand=True):
if attributes is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
existing = self.get_file_attributes(b_path, include_version=False)
attr_mod = '='
if attributes.startswith(('-', '+')):
attr_mod = attributes[0]
attributes = attributes[1:]
if existing.get('attr_flags', '') != attributes or attr_mod == '-':
attrcmd = self.get_bin_path('chattr')
if attrcmd:
attrcmd = [attrcmd, '%s%s' % (attr_mod, attributes), b_path]
changed = True
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['attributes'] = existing.get('attr_flags')
if 'after' not in diff:
diff['after'] = {}
diff['after']['attributes'] = '%s%s' % (attr_mod, attributes)
if not self.check_mode:
try:
rc, out, err = self.run_command(attrcmd)
if rc != 0 or err:
raise Exception("Error while setting attributes: %s" % (out + err))
except Exception as e:
self.fail_json(path=to_text(b_path), msg='chattr failed',
details=to_native(e), exception=traceback.format_exc())
return changed
def get_file_attributes(self, path, include_version=True):
output = {}
attrcmd = self.get_bin_path('lsattr', False)
if attrcmd:
flags = '-vd' if include_version else '-d'
attrcmd = [attrcmd, flags, path]
try:
rc, out, err = self.run_command(attrcmd)
if rc == 0:
res = out.split()
attr_flags_idx = 0
if include_version:
attr_flags_idx = 1
output['version'] = res[0].strip()
output['attr_flags'] = res[attr_flags_idx].replace('-', '').strip()
output['attributes'] = format_attributes(output['attr_flags'])
except Exception:
pass
return output
@classmethod
def _symbolic_mode_to_octal(cls, path_stat, symbolic_mode):
"""
This enables symbolic chmod string parsing as stated in the chmod man-page
This includes things like: "u=rw-x+X,g=r-x+X,o=r-x+X"
"""
new_mode = stat.S_IMODE(path_stat.st_mode)
# Now parse all symbolic modes
for mode in symbolic_mode.split(','):
# Per single mode. This always contains a '+', '-' or '='
# Split it on that
permlist = MODE_OPERATOR_RE.split(mode)
# And find all the operators
opers = MODE_OPERATOR_RE.findall(mode)
# The user(s) where it's all about is the first element in the
# 'permlist' list. Take that and remove it from the list.
# An empty user or 'a' means 'all'.
users = permlist.pop(0)
use_umask = (users == '')
if users == 'a' or users == '':
users = 'ugo'
# Check if there are illegal characters in the user list
# They can end up in 'users' because they are not split
if not USERS_RE.match(users):
raise ValueError("bad symbolic permission for mode: %s" % mode)
# Now we have two list of equal length, one contains the requested
# permissions and one with the corresponding operators.
for idx, perms in enumerate(permlist):
# Check if there are illegal characters in the permissions
if not PERMS_RE.match(perms):
raise ValueError("bad symbolic permission for mode: %s" % mode)
for user in users:
mode_to_apply = cls._get_octal_mode_from_symbolic_perms(path_stat, user, perms, use_umask, new_mode)
new_mode = cls._apply_operation_to_mode(user, opers[idx], mode_to_apply, new_mode)
return new_mode
@staticmethod
def _apply_operation_to_mode(user, operator, mode_to_apply, current_mode):
if operator == '=':
if user == 'u':
mask = stat.S_IRWXU | stat.S_ISUID
elif user == 'g':
mask = stat.S_IRWXG | stat.S_ISGID
elif user == 'o':
mask = stat.S_IRWXO | stat.S_ISVTX
# mask out u, g, or o permissions from current_mode and apply new permissions
inverse_mask = mask ^ PERM_BITS
new_mode = (current_mode & inverse_mask) | mode_to_apply
elif operator == '+':
new_mode = current_mode | mode_to_apply
elif operator == '-':
new_mode = current_mode - (current_mode & mode_to_apply)
return new_mode
@staticmethod
def _get_octal_mode_from_symbolic_perms(path_stat, user, perms, use_umask, prev_mode=None):
if prev_mode is None:
prev_mode = stat.S_IMODE(path_stat.st_mode)
is_directory = stat.S_ISDIR(path_stat.st_mode)
has_x_permissions = (prev_mode & EXEC_PERM_BITS) > 0
apply_X_permission = is_directory or has_x_permissions
# Get the umask, if the 'user' part is empty, the effect is as if (a) were
# given, but bits that are set in the umask are not affected.
# We also need the "reversed umask" for masking
umask = os.umask(0)
os.umask(umask)
rev_umask = umask ^ PERM_BITS
# Permission bits constants documented at:
# https://docs.python.org/3/library/stat.html#stat.S_ISUID
if apply_X_permission:
X_perms = {
'u': {'X': stat.S_IXUSR},
'g': {'X': stat.S_IXGRP},
'o': {'X': stat.S_IXOTH},
}
else:
X_perms = {
'u': {'X': 0},
'g': {'X': 0},
'o': {'X': 0},
}
user_perms_to_modes = {
'u': {
'r': rev_umask & stat.S_IRUSR if use_umask else stat.S_IRUSR,
'w': rev_umask & stat.S_IWUSR if use_umask else stat.S_IWUSR,
'x': rev_umask & stat.S_IXUSR if use_umask else stat.S_IXUSR,
's': stat.S_ISUID,
't': 0,
'u': prev_mode & stat.S_IRWXU,
'g': (prev_mode & stat.S_IRWXG) << 3,
'o': (prev_mode & stat.S_IRWXO) << 6},
'g': {
'r': rev_umask & stat.S_IRGRP if use_umask else stat.S_IRGRP,
'w': rev_umask & stat.S_IWGRP if use_umask else stat.S_IWGRP,
'x': rev_umask & stat.S_IXGRP if use_umask else stat.S_IXGRP,
's': stat.S_ISGID,
't': 0,
'u': (prev_mode & stat.S_IRWXU) >> 3,
'g': prev_mode & stat.S_IRWXG,
'o': (prev_mode & stat.S_IRWXO) << 3},
'o': {
'r': rev_umask & stat.S_IROTH if use_umask else stat.S_IROTH,
'w': rev_umask & stat.S_IWOTH if use_umask else stat.S_IWOTH,
'x': rev_umask & stat.S_IXOTH if use_umask else stat.S_IXOTH,
's': 0,
't': stat.S_ISVTX,
'u': (prev_mode & stat.S_IRWXU) >> 6,
'g': (prev_mode & stat.S_IRWXG) >> 3,
'o': prev_mode & stat.S_IRWXO},
}
# Insert X_perms into user_perms_to_modes
for key, value in X_perms.items():
user_perms_to_modes[key].update(value)
def or_reduce(mode, perm):
return mode | user_perms_to_modes[user][perm]
return reduce(or_reduce, perms, 0)
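    @staticmethod
    def _example_symbolic_mode():
        # Illustrative sketch, not part of the original file: convert a
        # chmod-style symbolic mode into an octal mode for an existing path.
        st = os.stat('.')
        return oct(AnsibleModule._symbolic_mode_to_octal(st, 'u=rwx,g=rx,o='))  # '0o750'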
def set_fs_attributes_if_different(self, file_args, changed, diff=None, expand=True):
# set modes owners and context as needed
changed = self.set_context_if_different(
file_args['path'], file_args['secontext'], changed, diff
)
changed = self.set_owner_if_different(
file_args['path'], file_args['owner'], changed, diff, expand
)
changed = self.set_group_if_different(
file_args['path'], file_args['group'], changed, diff, expand
)
changed = self.set_mode_if_different(
file_args['path'], file_args['mode'], changed, diff, expand
)
changed = self.set_attributes_if_different(
file_args['path'], file_args['attributes'], changed, diff, expand
)
return changed
def check_file_absent_if_check_mode(self, file_path):
return self.check_mode and not os.path.exists(file_path)
def set_directory_attributes_if_different(self, file_args, changed, diff=None, expand=True):
return self.set_fs_attributes_if_different(file_args, changed, diff, expand)
def set_file_attributes_if_different(self, file_args, changed, diff=None, expand=True):
return self.set_fs_attributes_if_different(file_args, changed, diff, expand)
def add_path_info(self, kwargs):
'''
for results that are files, supplement the info about the file
in the return path with stats about the file path.
'''
path = kwargs.get('path', kwargs.get('dest', None))
if path is None:
return kwargs
b_path = to_bytes(path, errors='surrogate_or_strict')
if os.path.exists(b_path):
(uid, gid) = self.user_and_group(path)
kwargs['uid'] = uid
kwargs['gid'] = gid
try:
user = pwd.getpwuid(uid)[0]
except KeyError:
user = str(uid)
try:
group = grp.getgrgid(gid)[0]
except KeyError:
group = str(gid)
kwargs['owner'] = user
kwargs['group'] = group
st = os.lstat(b_path)
kwargs['mode'] = '0%03o' % stat.S_IMODE(st[stat.ST_MODE])
# secontext not yet supported
if os.path.islink(b_path):
kwargs['state'] = 'link'
elif os.path.isdir(b_path):
kwargs['state'] = 'directory'
elif os.stat(b_path).st_nlink > 1:
kwargs['state'] = 'hard'
else:
kwargs['state'] = 'file'
if self.selinux_enabled():
kwargs['secontext'] = ':'.join(self.selinux_context(path))
kwargs['size'] = st[stat.ST_SIZE]
return kwargs
def _check_locale(self):
'''
Uses the locale module to test the currently set locale
(per the LANG and LC_CTYPE environment settings)
'''
try:
# setting the locale to '' uses the default locale
# as it would be returned by locale.getdefaultlocale()
locale.setlocale(locale.LC_ALL, '')
except locale.Error:
# fallback to the 'best' locale, per the function
# final fallback is 'C', which may cause unicode issues
# but is preferable to simply failing on unknown locale
best_locale = get_best_parsable_locale(self)
# need to set several since many tools choose to ignore documented precedence and scope
locale.setlocale(locale.LC_ALL, best_locale)
os.environ['LANG'] = best_locale
os.environ['LC_ALL'] = best_locale
os.environ['LC_MESSAGES'] = best_locale
except Exception as e:
self.fail_json(msg="An unknown error was encountered while attempting to validate the locale: %s" %
to_native(e), exception=traceback.format_exc())
def _set_internal_properties(self, argument_spec=None, module_parameters=None):
if argument_spec is None:
argument_spec = self.argument_spec
if module_parameters is None:
module_parameters = self.params
for k in PASS_VARS:
# handle setting internal properties from internal ansible vars
param_key = '_ansible_%s' % k
if param_key in module_parameters:
if k in PASS_BOOLS:
setattr(self, PASS_VARS[k][0], self.boolean(module_parameters[param_key]))
else:
setattr(self, PASS_VARS[k][0], module_parameters[param_key])
# clean up internal top level params:
if param_key in self.params:
del self.params[param_key]
else:
# use defaults if not already set
if not hasattr(self, PASS_VARS[k][0]):
setattr(self, PASS_VARS[k][0], PASS_VARS[k][1])
def safe_eval(self, value, locals=None, include_exceptions=False):
return safe_eval(value, locals, include_exceptions)
def _load_params(self):
''' read the input and set the params attribute.
This method is for backwards compatibility. The guts of the function
were moved out in 2.1 so that custom modules could read the parameters.
'''
# debug overrides to read args from file or cmdline
self.params = _load_params()
def _log_to_syslog(self, msg):
if HAS_SYSLOG:
try:
module = 'ansible-%s' % self._name
facility = getattr(syslog, self._syslog_facility, syslog.LOG_USER)
syslog.openlog(str(module), 0, facility)
syslog.syslog(syslog.LOG_INFO, msg)
except TypeError as e:
self.fail_json(
msg='Failed to log to syslog (%s). To proceed anyway, '
'disable syslog logging by setting no_target_syslog '
'to True in your Ansible config.' % to_native(e),
exception=traceback.format_exc(),
msg_to_log=msg,
)
def debug(self, msg):
if self._debug:
self.log('[debug] %s' % msg)
def log(self, msg, log_args=None):
if not self.no_log:
if log_args is None:
log_args = dict()
module = 'ansible-%s' % self._name
if isinstance(module, binary_type):
module = module.decode('utf-8', 'replace')
# 6655 - allow for accented characters
if not isinstance(msg, (binary_type, text_type)):
raise TypeError("msg should be a string (got %s)" % type(msg))
# We want journal to always take text type
# syslog takes bytes on py2, text type on py3
if isinstance(msg, binary_type):
journal_msg = remove_values(msg.decode('utf-8', 'replace'), self.no_log_values)
else:
# TODO: surrogateescape is a danger here on Py3
journal_msg = remove_values(msg, self.no_log_values)
if PY3:
syslog_msg = journal_msg
else:
syslog_msg = journal_msg.encode('utf-8', 'replace')
if has_journal:
journal_args = [("MODULE", os.path.basename(__file__))]
for arg in log_args:
name, value = (arg.upper(), str(log_args[arg]))
if name in (
'PRIORITY', 'MESSAGE', 'MESSAGE_ID',
'CODE_FILE', 'CODE_LINE', 'CODE_FUNC',
'SYSLOG_FACILITY', 'SYSLOG_IDENTIFIER',
'SYSLOG_PID',
):
name = "_%s" % name
journal_args.append((name, value))
try:
if HAS_SYSLOG:
# If syslog_facility specified, it needs to convert
# from the facility name to the facility code, and
# set it as SYSLOG_FACILITY argument of journal.send()
facility = getattr(syslog,
self._syslog_facility,
syslog.LOG_USER) >> 3
journal.send(MESSAGE=u"%s %s" % (module, journal_msg),
SYSLOG_FACILITY=facility,
**dict(journal_args))
else:
journal.send(MESSAGE=u"%s %s" % (module, journal_msg),
**dict(journal_args))
except IOError:
# fall back to syslog since logging to journal failed
self._log_to_syslog(syslog_msg)
else:
self._log_to_syslog(syslog_msg)
def _log_invocation(self):
''' log that ansible ran the module '''
# TODO: generalize a separate log function and make log_invocation use it
# Sanitize possible password argument when logging.
log_args = dict()
for param in self.params:
canon = self.aliases.get(param, param)
arg_opts = self.argument_spec.get(canon, {})
no_log = arg_opts.get('no_log', None)
# try to proactively capture password/passphrase fields
if no_log is None and PASSWORD_MATCH.search(param):
log_args[param] = 'NOT_LOGGING_PASSWORD'
self.warn('Module did not set no_log for %s' % param)
elif self.boolean(no_log):
log_args[param] = 'NOT_LOGGING_PARAMETER'
else:
param_val = self.params[param]
if not isinstance(param_val, (text_type, binary_type)):
param_val = str(param_val)
elif isinstance(param_val, text_type):
param_val = param_val.encode('utf-8')
log_args[param] = heuristic_log_sanitize(param_val, self.no_log_values)
msg = ['%s=%s' % (to_native(arg), to_native(val)) for arg, val in log_args.items()]
if msg:
msg = 'Invoked with %s' % ' '.join(msg)
else:
msg = 'Invoked'
self.log(msg, log_args=log_args)
def _set_cwd(self):
try:
cwd = os.getcwd()
if not os.access(cwd, os.F_OK | os.R_OK):
raise Exception()
return cwd
except Exception:
# we don't have access to the cwd, probably because of sudo.
# Try and move to a neutral location to prevent errors
for cwd in [self.tmpdir, os.path.expandvars('$HOME'), tempfile.gettempdir()]:
try:
if os.access(cwd, os.F_OK | os.R_OK):
os.chdir(cwd)
return cwd
except Exception:
pass
# we won't error here, as it may *not* be a problem,
# and we don't want to break modules unnecessarily
return None
def get_bin_path(self, arg, required=False, opt_dirs=None):
'''
Find system executable in PATH.
:param arg: The executable to find.
:param required: if executable is not found and required is ``True``, fail_json
:param opt_dirs: optional list of directories to search in addition to ``PATH``
:returns: if found return full path; otherwise return None
'''
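        # Illustrative usage (not part of the original source); names here are
        # examples only:
        #     git_path = module.get_bin_path('git', required=True, opt_dirs=['/usr/local/bin'])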
bin_path = None
try:
bin_path = get_bin_path(arg=arg, opt_dirs=opt_dirs)
except ValueError as e:
if required:
self.fail_json(msg=to_text(e))
else:
return bin_path
return bin_path
def boolean(self, arg):
'''Convert the argument to a boolean'''
if arg is None:
return arg
try:
return boolean(arg)
except TypeError as e:
self.fail_json(msg=to_native(e))
def jsonify(self, data):
try:
return jsonify(data)
except UnicodeError as e:
self.fail_json(msg=to_text(e))
def from_json(self, data):
return json.loads(data)
def add_cleanup_file(self, path):
if path not in self.cleanup_files:
self.cleanup_files.append(path)
def do_cleanup_files(self):
for path in self.cleanup_files:
self.cleanup(path)
def _return_formatted(self, kwargs):
self.add_path_info(kwargs)
if 'invocation' not in kwargs:
kwargs['invocation'] = {'module_args': self.params}
if 'warnings' in kwargs:
if isinstance(kwargs['warnings'], list):
for w in kwargs['warnings']:
self.warn(w)
else:
self.warn(kwargs['warnings'])
warnings = get_warning_messages()
if warnings:
kwargs['warnings'] = warnings
if 'deprecations' in kwargs:
if isinstance(kwargs['deprecations'], list):
for d in kwargs['deprecations']:
if isinstance(d, SEQUENCETYPE) and len(d) == 2:
self.deprecate(d[0], version=d[1])
elif isinstance(d, Mapping):
self.deprecate(d['msg'], version=d.get('version'), date=d.get('date'),
collection_name=d.get('collection_name'))
else:
self.deprecate(d) # pylint: disable=ansible-deprecated-no-version
else:
self.deprecate(kwargs['deprecations']) # pylint: disable=ansible-deprecated-no-version
deprecations = get_deprecation_messages()
if deprecations:
kwargs['deprecations'] = deprecations
kwargs = remove_values(kwargs, self.no_log_values)
print('\n%s' % self.jsonify(kwargs))
def exit_json(self, **kwargs):
''' return from the module, without error '''
self.do_cleanup_files()
self._return_formatted(kwargs)
sys.exit(0)
def fail_json(self, msg, **kwargs):
''' return from the module, with an error message '''
kwargs['failed'] = True
kwargs['msg'] = msg
# Add traceback if debug or high verbosity and it is missing
# NOTE: Badly named as exception, it really always has been a traceback
if 'exception' not in kwargs and sys.exc_info()[2] and (self._debug or self._verbosity >= 3):
if PY2:
# On Python 2 this is the last (stack frame) exception and as such may be unrelated to the failure
kwargs['exception'] = 'WARNING: The below traceback may *not* be related to the actual failure.\n' +\
''.join(traceback.format_tb(sys.exc_info()[2]))
else:
kwargs['exception'] = ''.join(traceback.format_tb(sys.exc_info()[2]))
self.do_cleanup_files()
self._return_formatted(kwargs)
sys.exit(1)
def fail_on_missing_params(self, required_params=None):
if not required_params:
return
try:
check_missing_parameters(self.params, required_params)
except TypeError as e:
self.fail_json(msg=to_native(e))
def digest_from_file(self, filename, algorithm):
''' Return hex digest of local file for a digest_method specified by name, or None if file is not present. '''
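        # Illustrative usage (not part of the original source): algorithm names
        # are the keys of AVAILABLE_HASH_ALGORITHMS, e.g.:
        #     checksum = module.digest_from_file('/etc/hosts', 'sha256')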
b_filename = to_bytes(filename, errors='surrogate_or_strict')
if not os.path.exists(b_filename):
return None
if os.path.isdir(b_filename):
self.fail_json(msg="attempted to take checksum of directory: %s" % filename)
# preserve old behaviour where the third parameter was a hash algorithm object
if hasattr(algorithm, 'hexdigest'):
digest_method = algorithm
else:
try:
digest_method = AVAILABLE_HASH_ALGORITHMS[algorithm]()
except KeyError:
self.fail_json(msg="Could not hash file '%s' with algorithm '%s'. Available algorithms: %s" %
(filename, algorithm, ', '.join(AVAILABLE_HASH_ALGORITHMS)))
blocksize = 64 * 1024
infile = open(os.path.realpath(b_filename), 'rb')
block = infile.read(blocksize)
while block:
digest_method.update(block)
block = infile.read(blocksize)
infile.close()
return digest_method.hexdigest()
def md5(self, filename):
''' Return MD5 hex digest of local file using digest_from_file().
Do not use this function unless you have no other choice for:
1) Optional backwards compatibility
2) Compatibility with a third party protocol
This function will not work on systems complying with FIPS-140-2.
Most uses of this function can use the module.sha1 function instead.
'''
if 'md5' not in AVAILABLE_HASH_ALGORITHMS:
raise ValueError('MD5 not available. Possibly running in FIPS mode')
return self.digest_from_file(filename, 'md5')
def sha1(self, filename):
''' Return SHA1 hex digest of local file using digest_from_file(). '''
return self.digest_from_file(filename, 'sha1')
def sha256(self, filename):
''' Return SHA-256 hex digest of local file using digest_from_file(). '''
return self.digest_from_file(filename, 'sha256')
def backup_local(self, fn):
        '''make a date-marked backup of the specified file; returns the backup path, or an empty string if the file does not exist'''
backupdest = ''
if os.path.exists(fn):
# backups named basename.PID.YYYY-MM-DD@HH:MM:SS~
ext = time.strftime("%Y-%m-%d@%H:%M:%S~", time.localtime(time.time()))
backupdest = '%s.%s.%s' % (fn, os.getpid(), ext)
try:
self.preserved_copy(fn, backupdest)
except (shutil.Error, IOError) as e:
self.fail_json(msg='Could not make backup of %s to %s: %s' % (fn, backupdest, to_native(e)))
return backupdest
def cleanup(self, tmpfile):
if os.path.exists(tmpfile):
try:
os.unlink(tmpfile)
except OSError as e:
sys.stderr.write("could not cleanup %s: %s" % (tmpfile, to_native(e)))
def preserved_copy(self, src, dest):
"""Copy a file with preserved ownership, permissions and context"""
# shutil.copy2(src, dst)
# Similar to shutil.copy(), but metadata is copied as well - in fact,
# this is just shutil.copy() followed by copystat(). This is similar
# to the Unix command cp -p.
#
# shutil.copystat(src, dst)
# Copy the permission bits, last access time, last modification time,
# and flags from src to dst. The file contents, owner, and group are
# unaffected. src and dst are path names given as strings.
shutil.copy2(src, dest)
# Set the context
if self.selinux_enabled():
context = self.selinux_context(src)
self.set_context_if_different(dest, context, False)
# chown it
try:
dest_stat = os.stat(src)
tmp_stat = os.stat(dest)
if dest_stat and (tmp_stat.st_uid != dest_stat.st_uid or tmp_stat.st_gid != dest_stat.st_gid):
os.chown(dest, dest_stat.st_uid, dest_stat.st_gid)
except OSError as e:
if e.errno != errno.EPERM:
raise
# Set the attributes
current_attribs = self.get_file_attributes(src, include_version=False)
current_attribs = current_attribs.get('attr_flags', '')
self.set_attributes_if_different(dest, current_attribs, True)
def atomic_move(self, src, dest, unsafe_writes=False):
        '''atomically move src to dest, copying attributes from dest; on failure the module exits via fail_json.
        os.rename is tried first since it is atomic; the rest of the function works around its
        limitations and corner cases, and preserves the selinux context if possible'''
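        # Illustrative calling pattern (not part of the original source): write
        # new content to a temporary file first, then move it into place so
        # readers never observe a partially written destination; names below
        # are examples only:
        #     fd, tmpfile = tempfile.mkstemp(dir=module.tmpdir)
        #     with os.fdopen(fd, 'wb') as f:
        #         f.write(b_new_content)
        #     module.atomic_move(tmpfile, '/etc/myapp.conf')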
context = None
dest_stat = None
b_src = to_bytes(src, errors='surrogate_or_strict')
b_dest = to_bytes(dest, errors='surrogate_or_strict')
if os.path.exists(b_dest):
try:
dest_stat = os.stat(b_dest)
# copy mode and ownership
os.chmod(b_src, dest_stat.st_mode & PERM_BITS)
os.chown(b_src, dest_stat.st_uid, dest_stat.st_gid)
# try to copy flags if possible
if hasattr(os, 'chflags') and hasattr(dest_stat, 'st_flags'):
try:
os.chflags(b_src, dest_stat.st_flags)
except OSError as e:
for err in 'EOPNOTSUPP', 'ENOTSUP':
if hasattr(errno, err) and e.errno == getattr(errno, err):
break
else:
raise
except OSError as e:
if e.errno != errno.EPERM:
raise
if self.selinux_enabled():
context = self.selinux_context(dest)
else:
if self.selinux_enabled():
context = self.selinux_default_context(dest)
creating = not os.path.exists(b_dest)
try:
# Optimistically try a rename, solves some corner cases and can avoid useless work, throws exception if not atomic.
os.rename(b_src, b_dest)
except (IOError, OSError) as e:
if e.errno not in [errno.EPERM, errno.EXDEV, errno.EACCES, errno.ETXTBSY, errno.EBUSY]:
# only try workarounds for errno 18 (cross device), 1 (not permitted), 13 (permission denied)
# and 26 (text file busy) which happens on vagrant synced folders and other 'exotic' non posix file systems
self.fail_json(msg='Could not replace file: %s to %s: %s' % (src, dest, to_native(e)), exception=traceback.format_exc())
else:
# Use bytes here. In the shippable CI, this fails with
# a UnicodeError with surrogateescape'd strings for an unknown
# reason (doesn't happen in a local Ubuntu16.04 VM)
b_dest_dir = os.path.dirname(b_dest)
b_suffix = os.path.basename(b_dest)
error_msg = None
tmp_dest_name = None
try:
tmp_dest_fd, tmp_dest_name = tempfile.mkstemp(prefix=b'.ansible_tmp', dir=b_dest_dir, suffix=b_suffix)
except (OSError, IOError) as e:
error_msg = 'The destination directory (%s) is not writable by the current user. Error was: %s' % (os.path.dirname(dest), to_native(e))
finally:
if error_msg:
if unsafe_writes:
self._unsafe_writes(b_src, b_dest)
else:
self.fail_json(msg=error_msg, exception=traceback.format_exc())
if tmp_dest_name:
b_tmp_dest_name = to_bytes(tmp_dest_name, errors='surrogate_or_strict')
try:
try:
# close tmp file handle before file operations to prevent text file busy errors on vboxfs synced folders (windows host)
os.close(tmp_dest_fd)
# leaves tmp file behind when sudo and not root
try:
shutil.move(b_src, b_tmp_dest_name)
except OSError:
# cleanup will happen by 'rm' of tmpdir
# copy2 will preserve some metadata
shutil.copy2(b_src, b_tmp_dest_name)
if self.selinux_enabled():
self.set_context_if_different(
b_tmp_dest_name, context, False)
try:
tmp_stat = os.stat(b_tmp_dest_name)
if dest_stat and (tmp_stat.st_uid != dest_stat.st_uid or tmp_stat.st_gid != dest_stat.st_gid):
os.chown(b_tmp_dest_name, dest_stat.st_uid, dest_stat.st_gid)
except OSError as e:
if e.errno != errno.EPERM:
raise
try:
os.rename(b_tmp_dest_name, b_dest)
except (shutil.Error, OSError, IOError) as e:
if unsafe_writes and e.errno == errno.EBUSY:
self._unsafe_writes(b_tmp_dest_name, b_dest)
else:
                                    self.fail_json(msg='Unable to make %s into %s, failed final rename from %s: %s' %
(src, dest, b_tmp_dest_name, to_native(e)), exception=traceback.format_exc())
except (shutil.Error, OSError, IOError) as e:
if unsafe_writes:
self._unsafe_writes(b_src, b_dest)
else:
self.fail_json(msg='Failed to replace file: %s to %s: %s' % (src, dest, to_native(e)), exception=traceback.format_exc())
finally:
self.cleanup(b_tmp_dest_name)
if creating:
# make sure the file has the correct permissions
# based on the current value of umask
umask = os.umask(0)
os.umask(umask)
os.chmod(b_dest, DEFAULT_PERM & ~umask)
try:
os.chown(b_dest, os.geteuid(), os.getegid())
except OSError:
# We're okay with trying our best here. If the user is not
# root (or old Unices) they won't be able to chown.
pass
if self.selinux_enabled():
# rename might not preserve context
self.set_context_if_different(dest, context, False)
def _unsafe_writes(self, src, dest):
# sadly there are some situations where we cannot ensure atomicity, but only if
# the user insists and we get the appropriate error we update the file unsafely
try:
out_dest = in_src = None
try:
out_dest = open(dest, 'wb')
in_src = open(src, 'rb')
shutil.copyfileobj(in_src, out_dest)
finally: # assuring closed files in 2.4 compatible way
if out_dest:
out_dest.close()
if in_src:
in_src.close()
except (shutil.Error, OSError, IOError) as e:
self.fail_json(msg='Could not write data to file (%s) from (%s): %s' % (dest, src, to_native(e)),
exception=traceback.format_exc())
def _clean_args(self, args):
if not self._clean:
# create a printable version of the command for use in reporting later,
# which strips out things like passwords from the args list
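            # For example (illustrative): ['mysql', '--password=secret'] would be
            # reported as: mysql --password=********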
to_clean_args = args
if PY2:
if isinstance(args, text_type):
to_clean_args = to_bytes(args)
else:
if isinstance(args, binary_type):
to_clean_args = to_text(args)
if isinstance(args, (text_type, binary_type)):
to_clean_args = shlex.split(to_clean_args)
clean_args = []
is_passwd = False
for arg in (to_native(a) for a in to_clean_args):
if is_passwd:
is_passwd = False
clean_args.append('********')
continue
if PASSWD_ARG_RE.match(arg):
sep_idx = arg.find('=')
if sep_idx > -1:
clean_args.append('%s=********' % arg[:sep_idx])
continue
else:
is_passwd = True
arg = heuristic_log_sanitize(arg, self.no_log_values)
clean_args.append(arg)
self._clean = ' '.join(shlex_quote(arg) for arg in clean_args)
return self._clean
def _restore_signal_handlers(self):
# Reset SIGPIPE to SIG_DFL, otherwise in Python2.7 it gets ignored in subprocesses.
if PY2 and sys.platform != 'win32':
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
def run_command(self, args, check_rc=False, close_fds=True, executable=None, data=None, binary_data=False, path_prefix=None, cwd=None,
use_unsafe_shell=False, prompt_regex=None, environ_update=None, umask=None, encoding='utf-8', errors='surrogate_or_strict',
expand_user_and_vars=True, pass_fds=None, before_communicate_callback=None, ignore_invalid_cwd=True, handle_exceptions=True):
'''
Execute a command, returns rc, stdout, and stderr.
The mechanism of this method for reading stdout and stderr differs from
that of CPython subprocess.Popen.communicate, in that this method will
stop reading once the spawned command has exited and stdout and stderr
have been consumed, as opposed to waiting until stdout/stderr are
closed. This can be an important distinction, when taken into account
that a forked or backgrounded process may hold stdout or stderr open
for longer than the spawned command.
:arg args: is the command to run
* If args is a list, the command will be run with shell=False.
            * If args is a string and use_unsafe_shell=False it will be split into a list and run with shell=False.
* If args is a string and use_unsafe_shell=True it runs with shell=True.
:kw check_rc: Whether to call fail_json in case of non zero RC.
Default False
:kw close_fds: See documentation for subprocess.Popen(). Default True
:kw executable: See documentation for subprocess.Popen(). Default None
:kw data: If given, information to write to the stdin of the command
:kw binary_data: If False, append a newline to the data. Default False
:kw path_prefix: If given, additional path to find the command in.
This adds to the PATH environment variable so helper commands in
the same directory can also be found
:kw cwd: If given, working directory to run the command inside
:kw use_unsafe_shell: See `args` parameter. Default False
:kw prompt_regex: Regex string (not a compiled regex) which can be
used to detect prompts in the stdout which would otherwise cause
the execution to hang (especially if no input data is specified)
:kw environ_update: dictionary to *update* environ variables with
:kw umask: Umask to be used when running the command. Default None
:kw encoding: Since we return native strings, on python3 we need to
know the encoding to use to transform from bytes to text. If you
want to always get bytes back, use encoding=None. The default is
"utf-8". This does not affect transformation of strings given as
args.
:kw errors: Since we return native strings, on python3 we need to
transform stdout and stderr from bytes to text. If the bytes are
undecodable in the ``encoding`` specified, then use this error
handler to deal with them. The default is ``surrogate_or_strict``
which means that the bytes will be decoded using the
surrogateescape error handler if available (available on all
python3 versions we support) otherwise a UnicodeError traceback
will be raised. This does not affect transformations of strings
given as args.
:kw expand_user_and_vars: When ``use_unsafe_shell=False`` this argument
dictates whether ``~`` is expanded in paths and environment variables
are expanded before running the command. When ``True`` a string such as
``$SHELL`` will be expanded regardless of escaping. When ``False`` and
``use_unsafe_shell=False`` no path or variable expansion will be done.
:kw pass_fds: When running on Python 3 this argument
dictates which file descriptors should be passed
to an underlying ``Popen`` constructor. On Python 2, this will
set ``close_fds`` to False.
:kw before_communicate_callback: This function will be called
after ``Popen`` object will be created
but before communicating to the process.
(``Popen`` object will be passed to callback as a first argument)
:kw ignore_invalid_cwd: This flag indicates whether an invalid ``cwd``
(non-existent or not a directory) should be ignored or should raise
an exception.
:kw handle_exceptions: This flag indicates whether an exception will
be handled inline and issue a failed_json or if the caller should
handle it.
:returns: A 3-tuple of return code (integer), stdout (native string),
and stderr (native string). On python2, stdout and stderr are both
byte strings. On python3, stdout and stderr are text strings converted
according to the encoding and errors parameters. If you want byte
strings on python3, use encoding=None to turn decoding to text off.
'''
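        # Illustrative usage (not part of the original source):
        #     rc, out, err = module.run_command(['/usr/bin/git', 'status'], check_rc=True)
        #     rc, out, err = module.run_command('echo $HOME', use_unsafe_shell=True)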
# used by clean args later on
self._clean = None
if not isinstance(args, (list, binary_type, text_type)):
msg = "Argument 'args' to run_command must be list or string"
self.fail_json(rc=257, cmd=args, msg=msg)
shell = False
if use_unsafe_shell:
# stringify args for unsafe/direct shell usage
if isinstance(args, list):
args = b" ".join([to_bytes(shlex_quote(x), errors='surrogate_or_strict') for x in args])
else:
args = to_bytes(args, errors='surrogate_or_strict')
# not set explicitly, check if set by controller
if executable:
executable = to_bytes(executable, errors='surrogate_or_strict')
args = [executable, b'-c', args]
elif self._shell not in (None, '/bin/sh'):
args = [to_bytes(self._shell, errors='surrogate_or_strict'), b'-c', args]
else:
shell = True
else:
# ensure args are a list
if isinstance(args, (binary_type, text_type)):
# On python2.6 and below, shlex has problems with text type
# On python3, shlex needs a text type.
if PY2:
args = to_bytes(args, errors='surrogate_or_strict')
elif PY3:
args = to_text(args, errors='surrogateescape')
args = shlex.split(args)
# expand ``~`` in paths, and all environment vars
if expand_user_and_vars:
args = [to_bytes(os.path.expanduser(os.path.expandvars(x)), errors='surrogate_or_strict') for x in args if x is not None]
else:
args = [to_bytes(x, errors='surrogate_or_strict') for x in args if x is not None]
prompt_re = None
if prompt_regex:
if isinstance(prompt_regex, text_type):
if PY3:
prompt_regex = to_bytes(prompt_regex, errors='surrogateescape')
elif PY2:
prompt_regex = to_bytes(prompt_regex, errors='surrogate_or_strict')
try:
prompt_re = re.compile(prompt_regex, re.MULTILINE)
except re.error:
self.fail_json(msg="invalid prompt regular expression given to run_command")
rc = 0
msg = None
st_in = None
env = os.environ.copy()
# We can set this from both an attribute and per call
env.update(self.run_command_environ_update or {})
env.update(environ_update or {})
if path_prefix:
path = env.get('PATH', '')
if path:
env['PATH'] = "%s:%s" % (path_prefix, path)
else:
env['PATH'] = path_prefix
# If using test-module.py and explode, the remote lib path will resemble:
# /tmp/test_module_scratch/debug_dir/ansible/module_utils/basic.py
# If using ansible or ansible-playbook with a remote system:
# /tmp/ansible_vmweLQ/ansible_modlib.zip/ansible/module_utils/basic.py
# Clean out python paths set by ansiballz
if 'PYTHONPATH' in env:
pypaths = [x for x in env['PYTHONPATH'].split(':')
if x and
not x.endswith('/ansible_modlib.zip') and
not x.endswith('/debug_dir')]
if pypaths and any(pypaths):
env['PYTHONPATH'] = ':'.join(pypaths)
if data:
st_in = subprocess.PIPE
def preexec():
self._restore_signal_handlers()
if umask:
os.umask(umask)
kwargs = dict(
executable=executable,
shell=shell,
close_fds=close_fds,
stdin=st_in,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
preexec_fn=preexec,
env=env,
)
if PY3 and pass_fds:
kwargs["pass_fds"] = pass_fds
elif PY2 and pass_fds:
kwargs['close_fds'] = False
# make sure we're in the right working directory
if cwd:
cwd = to_bytes(os.path.abspath(os.path.expanduser(cwd)), errors='surrogate_or_strict')
if os.path.isdir(cwd):
kwargs['cwd'] = cwd
elif not ignore_invalid_cwd:
self.fail_json(msg="Provided cwd is not a valid directory: %s" % cwd)
try:
if self._debug:
self.log('Executing: ' + self._clean_args(args))
cmd = subprocess.Popen(args, **kwargs)
if before_communicate_callback:
before_communicate_callback(cmd)
stdout = b''
stderr = b''
# Mirror the CPython subprocess logic and preference for the selector to use.
# poll/select have the advantage of not requiring any extra file
# descriptor, contrarily to epoll/kqueue (also, they require a single
# syscall).
if hasattr(selectors, 'PollSelector'):
selector = selectors.PollSelector()
else:
selector = selectors.SelectSelector()
if data:
if not binary_data:
data += '\n'
if isinstance(data, text_type):
data = to_bytes(data)
selector.register(cmd.stdout, selectors.EVENT_READ)
selector.register(cmd.stderr, selectors.EVENT_READ)
if os.name == 'posix':
fcntl.fcntl(cmd.stdout.fileno(), fcntl.F_SETFL, fcntl.fcntl(cmd.stdout.fileno(), fcntl.F_GETFL) | os.O_NONBLOCK)
fcntl.fcntl(cmd.stderr.fileno(), fcntl.F_SETFL, fcntl.fcntl(cmd.stderr.fileno(), fcntl.F_GETFL) | os.O_NONBLOCK)
if data:
cmd.stdin.write(data)
cmd.stdin.close()
while True:
# A timeout of 1 is both a little short and a little long.
# With None we could deadlock, with a lower value we would
# waste cycles. As it is, this is a mild inconvenience if
# we need to exit, and likely doesn't waste too many cycles
events = selector.select(1)
stdout_changed = False
for key, event in events:
b_chunk = key.fileobj.read(32768)
if not b_chunk:
selector.unregister(key.fileobj)
elif key.fileobj == cmd.stdout:
stdout += b_chunk
stdout_changed = True
elif key.fileobj == cmd.stderr:
stderr += b_chunk
# if we're checking for prompts, do it now, but only if stdout
# actually changed since the last loop
if prompt_re and stdout_changed and prompt_re.search(stdout) and not data:
if encoding:
stdout = to_native(stdout, encoding=encoding, errors=errors)
return (257, stdout, "A prompt was encountered while running a command, but no input data was specified")
# break out if no pipes are left to read or the pipes are completely read
# and the process is terminated
if (not events or not selector.get_map()) and cmd.poll() is not None:
break
# No pipes are left to read but process is not yet terminated
# Only then it is safe to wait for the process to be finished
# NOTE: Actually cmd.poll() is always None here if no selectors are left
elif not selector.get_map() and cmd.poll() is None:
cmd.wait()
# The process is terminated. Since no pipes to read from are
# left, there is no need to call select() again.
break
cmd.stdout.close()
cmd.stderr.close()
selector.close()
rc = cmd.returncode
except (OSError, IOError) as e:
self.log("Error Executing CMD:%s Exception:%s" % (self._clean_args(args), to_native(e)))
if handle_exceptions:
self.fail_json(rc=e.errno, stdout=b'', stderr=b'', msg=to_native(e), cmd=self._clean_args(args))
else:
raise e
except Exception as e:
self.log("Error Executing CMD:%s Exception:%s" % (self._clean_args(args), to_native(traceback.format_exc())))
if handle_exceptions:
self.fail_json(rc=257, stdout=b'', stderr=b'', msg=to_native(e), exception=traceback.format_exc(), cmd=self._clean_args(args))
else:
raise e
if rc != 0 and check_rc:
msg = heuristic_log_sanitize(stderr.rstrip(), self.no_log_values)
self.fail_json(cmd=self._clean_args(args), rc=rc, stdout=stdout, stderr=stderr, msg=msg)
if encoding is not None:
return (rc, to_native(stdout, encoding=encoding, errors=errors),
to_native(stderr, encoding=encoding, errors=errors))
return (rc, stdout, stderr)
def append_to_file(self, filename, str):
filename = os.path.expandvars(os.path.expanduser(filename))
fh = open(filename, 'a')
fh.write(str)
fh.close()
def bytes_to_human(self, size):
return bytes_to_human(size)
# for backwards compatibility
pretty_bytes = bytes_to_human
def human_to_bytes(self, number, isbits=False):
return human_to_bytes(number, isbits)
#
# Backwards compat
#
# In 2.0, moved from inside the module to the toplevel
is_executable = is_executable
@staticmethod
def get_buffer_size(fd):
try:
            # 1032 == F_GETPIPE_SZ
buffer_size = fcntl.fcntl(fd, 1032)
except Exception:
try:
# not as exact as above, but should be good enough for most platforms that fail the previous call
buffer_size = select.PIPE_BUF
except Exception:
buffer_size = 9000 # use sane default JIC
return buffer_size
def get_module_path():
return os.path.dirname(os.path.realpath(__file__))
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,901 |
ansible-galaxy failed with AttributeError
|
### Summary
While specifying requirements.yml to install roles, like -
```
ansible-galaxy role install -r requirements.yml -vvvv
```
With `requirements.yml` (I understand this file's syntax is wrong)
```yaml
---
community.vmware
```
results in
```
ERROR! Unexpected Exception, this is probably a bug: 'str' object has no attribute 'keys'
the full traceback was:
Traceback (most recent call last):
File "/Volumes/data/src/ansible/lib/ansible/cli/__init__.py", line 659, in cli_executor
exit_code = cli.run()
^^^^^^^^^
File "/Volumes/data/src/ansible/bin/ansible-galaxy", line 749, in run
return context.CLIARGS['func']()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/data/src/ansible/bin/ansible-galaxy", line 120, in method_wrapper
return wrapped_method(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/data/src/ansible/bin/ansible-galaxy", line 1368, in execute_install
requirements = self._parse_requirements_file(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/data/src/ansible/bin/ansible-galaxy", line 840, in _parse_requirements_file
extra_keys = set(file_requirements.keys()).difference(set(['roles', 'collections']))
^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'str' object has no attribute 'keys'
```
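The root cause is that `_parse_requirements_file` assumes the parsed YAML is a dictionary before calling `.keys()` on it. A minimal guard along the following lines would turn the crash into a proper error; this is an illustrative sketch (variable names other than `file_requirements` are assumptions), not the actual diff from the linked PR:
```python
# Hypothetical validation (illustrative only): reject requirements files whose
# top-level YAML value is neither a list (legacy role format) nor a dict with
# 'roles'/'collections' keys, instead of calling .keys() on a plain string.
if not isinstance(file_requirements, (list, dict)):
    raise AnsibleError(
        "Expected a list of requirements or a dictionary with 'roles' and/or "
        "'collections' keys but got %s" % type(file_requirements).__name__
    )
```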
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible --version
ansible [core 2.17.0.dev0] (i81713 310625996d) last updated 2023/10/04 11:27:33 (GMT -400)
config file = /Volumes/data/src/playbooks/ansible.cfg
configured module search path = ['/Users/akasurde/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Volumes/data/src/ansible/lib/ansible
ansible collection location = /Users/akasurde/.ansible/collections:/usr/share/ansible/collections
executable location = /Volumes/data/src/ansible/bin/ansible
python version = 3.11.3 (main, May 10 2023, 12:50:08) [Clang 14.0.3 (clang-1403.0.22.14.1)] (/Users/akasurde/.pyenv/versions/3.11.3/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Red Hat Enterprise Linux release 8.2 (Ootpa)
### Steps to Reproduce
Try installing a role/collection with the above requirements.yml file.
### Expected Results
Installation successful.
### Actual Results
```console
ERROR! Unexpected Exception, this is probably a bug: 'str' object has no attribute 'keys'
the full traceback was:
Traceback (most recent call last):
File "/Volumes/data/src/ansible/lib/ansible/cli/__init__.py", line 659, in cli_executor
exit_code = cli.run()
^^^^^^^^^
File "/Volumes/data/src/ansible/bin/ansible-galaxy", line 749, in run
return context.CLIARGS['func']()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/data/src/ansible/bin/ansible-galaxy", line 120, in method_wrapper
return wrapped_method(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/data/src/ansible/bin/ansible-galaxy", line 1368, in execute_install
requirements = self._parse_requirements_file(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/data/src/ansible/bin/ansible-galaxy", line 840, in _parse_requirements_file
extra_keys = set(file_requirements.keys()).difference(set(['roles', 'collections']))
^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'str' object has no attribute 'keys'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81901
|
https://github.com/ansible/ansible/pull/81917
|
976067c15fea8c416fc41d264a221535c6f38872
|
8a5ccc9d63ab528b579c14c4519c70c6838c7d6c
| 2023-10-04T19:38:12Z |
python
| 2023-10-05T19:03:01Z |
changelogs/fragments/81901-galaxy-requirements-format.yml
| |
lib/ansible/cli/galaxy.py
|
#!/usr/bin/env python
# Copyright: (c) 2013, James Cammarata <[email protected]>
# Copyright: (c) 2018-2021, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# PYTHON_ARGCOMPLETE_OK
from __future__ import annotations
# ansible.cli needs to be imported first, to ensure the source bin/* scripts run that code first
from ansible.cli import CLI
import argparse
import functools
import json
import os.path
import pathlib
import re
import shutil
import sys
import textwrap
import time
import typing as t
from dataclasses import dataclass
from yaml.error import YAMLError
import ansible.constants as C
from ansible import context
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.galaxy import Galaxy, get_collections_galaxy_meta_info
from ansible.galaxy.api import GalaxyAPI, GalaxyError
from ansible.galaxy.collection import (
build_collection,
download_collections,
find_existing_collections,
install_collections,
publish_collection,
validate_collection_name,
validate_collection_path,
verify_collections,
SIGNATURE_COUNT_RE,
)
from ansible.galaxy.collection.concrete_artifact_manager import (
ConcreteArtifactsManager,
)
from ansible.galaxy.collection.gpg import GPG_ERROR_MAP
from ansible.galaxy.dependency_resolution.dataclasses import Requirement
from ansible.galaxy.role import GalaxyRole
from ansible.galaxy.token import BasicAuthToken, GalaxyToken, KeycloakToken, NoTokenSentinel
from ansible.module_utils.ansible_release import __version__ as ansible_version
from ansible.module_utils.common.collections import is_iterable
from ansible.module_utils.common.yaml import yaml_dump, yaml_load
from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text
from ansible.module_utils import six
from ansible.parsing.dataloader import DataLoader
from ansible.parsing.yaml.loader import AnsibleLoader
from ansible.playbook.role.requirement import RoleRequirement
from ansible.template import Templar
from ansible.utils.collection_loader import AnsibleCollectionConfig
from ansible.utils.display import Display
from ansible.utils.plugin_docs import get_versioned_doclink
display = Display()
urlparse = six.moves.urllib.parse.urlparse
# config definition by position: name, required, type
SERVER_DEF = [
('url', True, 'str'),
('username', False, 'str'),
('password', False, 'str'),
('token', False, 'str'),
('auth_url', False, 'str'),
('api_version', False, 'int'),
('validate_certs', False, 'bool'),
('client_id', False, 'str'),
('timeout', False, 'int'),
]
# config definition fields
SERVER_ADDITIONAL = {
'api_version': {'default': None, 'choices': [2, 3]},
'validate_certs': {'cli': [{'name': 'validate_certs'}]},
'timeout': {'default': C.GALAXY_SERVER_TIMEOUT, 'cli': [{'name': 'timeout'}]},
'token': {'default': None},
}
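# For reference (illustrative, based on the documented ansible.cfg galaxy
# server syntax; not part of this file): the fields above map onto per-server
# config sections such as:
#     [galaxy]
#     server_list = my_galaxy
#
#     [galaxy_server.my_galaxy]
#     url = https://galaxy.ansible.com/
#     token = <redacted>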
def with_collection_artifacts_manager(wrapped_method):
"""Inject an artifacts manager if not passed explicitly.
This decorator constructs a ConcreteArtifactsManager and maintains
the related temporary directory auto-cleanup around the target
method invocation.
"""
@functools.wraps(wrapped_method)
def method_wrapper(*args, **kwargs):
if 'artifacts_manager' in kwargs:
return wrapped_method(*args, **kwargs)
# FIXME: use validate_certs context from Galaxy servers when downloading collections
# .get used here for when this is used in a non-CLI context
artifacts_manager_kwargs = {'validate_certs': context.CLIARGS.get('resolved_validate_certs', True)}
keyring = context.CLIARGS.get('keyring', None)
if keyring is not None:
artifacts_manager_kwargs.update({
'keyring': GalaxyCLI._resolve_path(keyring),
'required_signature_count': context.CLIARGS.get('required_valid_signature_count', None),
'ignore_signature_errors': context.CLIARGS.get('ignore_gpg_errors', None),
})
with ConcreteArtifactsManager.under_tmpdir(
C.DEFAULT_LOCAL_TMP,
**artifacts_manager_kwargs
) as concrete_artifact_cm:
kwargs['artifacts_manager'] = concrete_artifact_cm
return wrapped_method(*args, **kwargs)
return method_wrapper
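# Illustrative use of the decorator above (not part of the original source):
# any GalaxyCLI method that accepts an ``artifacts_manager`` keyword argument
# can be wrapped so a ConcreteArtifactsManager (and its temporary directory)
# is created and cleaned up automatically:
#
#     @with_collection_artifacts_manager
#     def execute_install(self, artifacts_manager=None):
#         ...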
def _display_header(path, h1, h2, w1=10, w2=7):
display.display('\n# {0}\n{1:{cwidth}} {2:{vwidth}}\n{3} {4}\n'.format(
path,
h1,
h2,
'-' * max([len(h1), w1]), # Make sure that the number of dashes is at least the width of the header
'-' * max([len(h2), w2]),
cwidth=w1,
vwidth=w2,
))
def _display_role(gr):
install_info = gr.install_info
version = None
if install_info:
version = install_info.get("version", None)
if not version:
version = "(unknown version)"
display.display("- %s, %s" % (gr.name, version))
def _display_collection(collection, cwidth=10, vwidth=7, min_cwidth=10, min_vwidth=7):
display.display('{fqcn:{cwidth}} {version:{vwidth}}'.format(
fqcn=to_text(collection.fqcn),
version=collection.ver,
cwidth=max(cwidth, min_cwidth), # Make sure the width isn't smaller than the header
vwidth=max(vwidth, min_vwidth)
))
def _get_collection_widths(collections):
if not is_iterable(collections):
collections = (collections, )
fqcn_set = {to_text(c.fqcn) for c in collections}
version_set = {to_text(c.ver) for c in collections}
fqcn_length = len(max(fqcn_set or [''], key=len))
version_length = len(max(version_set or [''], key=len))
return fqcn_length, version_length
def validate_signature_count(value):
match = re.match(SIGNATURE_COUNT_RE, value)
if match is None:
raise ValueError(f"{value} is not a valid signature count value")
return value
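# Illustrative inputs accepted by validate_signature_count(), per the
# --required-valid-signature-count help text later in this file: '1', '2',
# 'all', and '+'-prefixed forms such as '+2' or '+all'.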
@dataclass
class RoleDistributionServer:
_api: t.Union[GalaxyAPI, None]
api_servers: list[GalaxyAPI]
@property
def api(self):
if self._api:
return self._api
for server in self.api_servers:
try:
if u'v1' in server.available_api_versions:
self._api = server
break
except Exception:
continue
if not self._api:
self._api = self.api_servers[0]
return self._api
class GalaxyCLI(CLI):
'''Command to manage Ansible roles and collections.
None of the CLI tools are designed to run concurrently with themselves.
Use an external scheduler and/or locking to ensure there are no clashing operations.
'''
name = 'ansible-galaxy'
SKIP_INFO_KEYS = ("name", "description", "readme_html", "related", "summary_fields", "average_aw_composite", "average_aw_score", "url")
def __init__(self, args):
self._raw_args = args
self._implicit_role = False
if len(args) > 1:
# Inject role into sys.argv[1] as a backwards compatibility step
if args[1] not in ['-h', '--help', '--version'] and 'role' not in args and 'collection' not in args:
# TODO: Should we add a warning here and eventually deprecate the implicit role subcommand choice
args.insert(1, 'role')
self._implicit_role = True
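                # Illustrative effect of the compatibility shim above (not part
                # of the original source): ``ansible-galaxy install some.role``
                # is rewritten to ``ansible-galaxy role install some.role``.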
# since argparse doesn't allow hidden subparsers, handle dead login arg from raw args after "role" normalization
if args[1:3] == ['role', 'login']:
display.error(
"The login command was removed in late 2020. An API key is now required to publish roles or collections "
"to Galaxy. The key can be found at https://galaxy.ansible.com/me/preferences, and passed to the "
"ansible-galaxy CLI via a file at {0} or (insecurely) via the `--token` "
"command-line argument.".format(to_text(C.GALAXY_TOKEN_PATH)))
sys.exit(1)
self.api_servers = []
self.galaxy = None
self.lazy_role_api = None
super(GalaxyCLI, self).__init__(args)
def init_parser(self):
''' create an options parser for bin/ansible '''
super(GalaxyCLI, self).init_parser(
desc="Perform various Role and Collection related operations.",
)
# Common arguments that apply to more than 1 action
common = opt_help.ArgumentParser(add_help=False)
common.add_argument('-s', '--server', dest='api_server', help='The Galaxy API server URL')
common.add_argument('--api-version', type=int, choices=[2, 3], help=argparse.SUPPRESS) # Hidden argument that should only be used in our tests
common.add_argument('--token', '--api-key', dest='api_key',
help='The Ansible Galaxy API key which can be found at '
'https://galaxy.ansible.com/me/preferences.')
common.add_argument('-c', '--ignore-certs', action='store_true', dest='ignore_certs', help='Ignore SSL certificate validation errors.', default=None)
# --timeout uses the default None to handle two different scenarios.
# * --timeout > C.GALAXY_SERVER_TIMEOUT for non-configured servers
# * --timeout > server-specific timeout > C.GALAXY_SERVER_TIMEOUT for configured servers.
common.add_argument('--timeout', dest='timeout', type=int,
help="The time to wait for operations against the galaxy server, defaults to 60s.")
opt_help.add_verbosity_options(common)
force = opt_help.ArgumentParser(add_help=False)
force.add_argument('-f', '--force', dest='force', action='store_true', default=False,
help='Force overwriting an existing role or collection')
github = opt_help.ArgumentParser(add_help=False)
github.add_argument('github_user', help='GitHub username')
github.add_argument('github_repo', help='GitHub repository')
offline = opt_help.ArgumentParser(add_help=False)
offline.add_argument('--offline', dest='offline', default=False, action='store_true',
help="Don't query the galaxy API when creating roles")
default_roles_path = C.config.get_configuration_definition('DEFAULT_ROLES_PATH').get('default', '')
roles_path = opt_help.ArgumentParser(add_help=False)
roles_path.add_argument('-p', '--roles-path', dest='roles_path', type=opt_help.unfrack_path(pathsep=True),
default=C.DEFAULT_ROLES_PATH, action=opt_help.PrependListAction,
help='The path to the directory containing your roles. The default is the first '
'writable one configured via DEFAULT_ROLES_PATH: %s ' % default_roles_path)
collections_path = opt_help.ArgumentParser(add_help=False)
collections_path.add_argument('-p', '--collections-path', dest='collections_path', type=opt_help.unfrack_path(pathsep=True),
action=opt_help.PrependListAction,
help="One or more directories to search for collections in addition "
"to the default COLLECTIONS_PATHS. Separate multiple paths "
"with '{0}'.".format(os.path.pathsep))
cache_options = opt_help.ArgumentParser(add_help=False)
cache_options.add_argument('--clear-response-cache', dest='clear_response_cache', action='store_true',
default=False, help='Clear the existing server response cache.')
cache_options.add_argument('--no-cache', dest='no_cache', action='store_true', default=False,
help='Do not use the server response cache.')
# Add sub parser for the Galaxy role type (role or collection)
type_parser = self.parser.add_subparsers(metavar='TYPE', dest='type')
type_parser.required = True
# Add sub parser for the Galaxy collection actions
collection = type_parser.add_parser('collection', help='Manage an Ansible Galaxy collection.')
collection.set_defaults(func=self.execute_collection) # to satisfy doc build
collection_parser = collection.add_subparsers(metavar='COLLECTION_ACTION', dest='action')
collection_parser.required = True
self.add_download_options(collection_parser, parents=[common, cache_options])
self.add_init_options(collection_parser, parents=[common, force])
self.add_build_options(collection_parser, parents=[common, force])
self.add_publish_options(collection_parser, parents=[common])
self.add_install_options(collection_parser, parents=[common, force, cache_options])
self.add_list_options(collection_parser, parents=[common, collections_path])
self.add_verify_options(collection_parser, parents=[common, collections_path])
# Add sub parser for the Galaxy role actions
role = type_parser.add_parser('role', help='Manage an Ansible Galaxy role.')
role.set_defaults(func=self.execute_role) # to satisfy doc build
role_parser = role.add_subparsers(metavar='ROLE_ACTION', dest='action')
role_parser.required = True
self.add_init_options(role_parser, parents=[common, force, offline])
self.add_remove_options(role_parser, parents=[common, roles_path])
self.add_delete_options(role_parser, parents=[common, github])
self.add_list_options(role_parser, parents=[common, roles_path])
self.add_search_options(role_parser, parents=[common])
self.add_import_options(role_parser, parents=[common, github])
self.add_setup_options(role_parser, parents=[common, roles_path])
self.add_info_options(role_parser, parents=[common, roles_path, offline])
self.add_install_options(role_parser, parents=[common, force, roles_path])
def add_download_options(self, parser, parents=None):
download_parser = parser.add_parser('download', parents=parents,
help='Download collections and their dependencies as a tarball for an '
'offline install.')
download_parser.set_defaults(func=self.execute_download)
download_parser.add_argument('args', help='Collection(s)', metavar='collection', nargs='*')
download_parser.add_argument('-n', '--no-deps', dest='no_deps', action='store_true', default=False,
help="Don't download collection(s) listed as dependencies.")
download_parser.add_argument('-p', '--download-path', dest='download_path',
default='./collections',
help='The directory to download the collections to.')
download_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be downloaded.')
download_parser.add_argument('--pre', dest='allow_pre_release', action='store_true',
help='Include pre-release versions. Semantic versioning pre-releases are ignored by default')
def add_init_options(self, parser, parents=None):
galaxy_type = 'collection' if parser.metavar == 'COLLECTION_ACTION' else 'role'
init_parser = parser.add_parser('init', parents=parents,
help='Initialize new {0} with the base structure of a '
'{0}.'.format(galaxy_type))
init_parser.set_defaults(func=self.execute_init)
init_parser.add_argument('--init-path', dest='init_path', default='./',
help='The path in which the skeleton {0} will be created. The default is the '
'current working directory.'.format(galaxy_type))
init_parser.add_argument('--{0}-skeleton'.format(galaxy_type), dest='{0}_skeleton'.format(galaxy_type),
default=C.GALAXY_COLLECTION_SKELETON if galaxy_type == 'collection' else C.GALAXY_ROLE_SKELETON,
help='The path to a {0} skeleton that the new {0} should be based '
'upon.'.format(galaxy_type))
obj_name_kwargs = {}
if galaxy_type == 'collection':
obj_name_kwargs['type'] = validate_collection_name
init_parser.add_argument('{0}_name'.format(galaxy_type), help='{0} name'.format(galaxy_type.capitalize()),
**obj_name_kwargs)
if galaxy_type == 'role':
init_parser.add_argument('--type', dest='role_type', action='store', default='default',
help="Initialize using an alternate role type. Valid types include: 'container', "
"'apb' and 'network'.")
def add_remove_options(self, parser, parents=None):
remove_parser = parser.add_parser('remove', parents=parents, help='Delete roles from roles_path.')
remove_parser.set_defaults(func=self.execute_remove)
remove_parser.add_argument('args', help='Role(s)', metavar='role', nargs='+')
def add_delete_options(self, parser, parents=None):
delete_parser = parser.add_parser('delete', parents=parents,
help='Removes the role from Galaxy. It does not remove or alter the actual '
'GitHub repository.')
delete_parser.set_defaults(func=self.execute_delete)
def add_list_options(self, parser, parents=None):
galaxy_type = 'role'
if parser.metavar == 'COLLECTION_ACTION':
galaxy_type = 'collection'
list_parser = parser.add_parser('list', parents=parents,
help='Show the name and version of each {0} installed in the {0}s_path.'.format(galaxy_type))
list_parser.set_defaults(func=self.execute_list)
list_parser.add_argument(galaxy_type, help=galaxy_type.capitalize(), nargs='?', metavar=galaxy_type)
if galaxy_type == 'collection':
list_parser.add_argument('--format', dest='output_format', choices=('human', 'yaml', 'json'), default='human',
help="Format to display the list of collections in.")
def add_search_options(self, parser, parents=None):
search_parser = parser.add_parser('search', parents=parents,
help='Search the Galaxy database by tags, platforms, author and multiple '
'keywords.')
search_parser.set_defaults(func=self.execute_search)
search_parser.add_argument('--platforms', dest='platforms', help='list of OS platforms to filter by')
search_parser.add_argument('--galaxy-tags', dest='galaxy_tags', help='list of galaxy tags to filter by')
search_parser.add_argument('--author', dest='author', help='GitHub username')
search_parser.add_argument('args', help='Search terms', metavar='searchterm', nargs='*')
def add_import_options(self, parser, parents=None):
import_parser = parser.add_parser('import', parents=parents, help='Import a role into a galaxy server')
import_parser.set_defaults(func=self.execute_import)
import_parser.add_argument('--no-wait', dest='wait', action='store_false', default=True,
help="Don't wait for import results.")
import_parser.add_argument('--branch', dest='reference',
help='The name of a branch to import. Defaults to the repository\'s default branch '
'(usually master)')
import_parser.add_argument('--role-name', dest='role_name',
help='The name the role should have, if different than the repo name')
import_parser.add_argument('--status', dest='check_status', action='store_true', default=False,
help='Check the status of the most recent import request for given github_'
'user/github_repo.')
def add_setup_options(self, parser, parents=None):
setup_parser = parser.add_parser('setup', parents=parents,
help='Manage the integration between Galaxy and the given source.')
setup_parser.set_defaults(func=self.execute_setup)
setup_parser.add_argument('--remove', dest='remove_id', default=None,
help='Remove the integration matching the provided ID value. Use --list to see '
'ID values.')
setup_parser.add_argument('--list', dest="setup_list", action='store_true', default=False,
help='List all of your integrations.')
setup_parser.add_argument('source', help='Source')
setup_parser.add_argument('github_user', help='GitHub username')
setup_parser.add_argument('github_repo', help='GitHub repository')
setup_parser.add_argument('secret', help='Secret')
def add_info_options(self, parser, parents=None):
info_parser = parser.add_parser('info', parents=parents, help='View more details about a specific role.')
info_parser.set_defaults(func=self.execute_info)
info_parser.add_argument('args', nargs='+', help='role', metavar='role_name[,version]')
def add_verify_options(self, parser, parents=None):
galaxy_type = 'collection'
verify_parser = parser.add_parser('verify', parents=parents, help='Compare checksums with the collection(s) '
'found on the server and the installed copy. This does not verify dependencies.')
verify_parser.set_defaults(func=self.execute_verify)
verify_parser.add_argument('args', metavar='{0}_name'.format(galaxy_type), nargs='*', help='The installed collection(s) name. '
'This is mutually exclusive with --requirements-file.')
verify_parser.add_argument('-i', '--ignore-errors', dest='ignore_errors', action='store_true', default=False,
help='Ignore errors during verification and continue with the next specified collection.')
verify_parser.add_argument('--offline', dest='offline', action='store_true', default=False,
help='Validate collection integrity locally without contacting server for '
'canonical manifest hash.')
verify_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be verified.')
verify_parser.add_argument('--keyring', dest='keyring', default=C.GALAXY_GPG_KEYRING,
help='The keyring used during signature verification') # Eventually default to ~/.ansible/pubring.kbx?
verify_parser.add_argument('--signature', dest='signatures', action='append',
help='An additional signature source to verify the authenticity of the MANIFEST.json before using '
'it to verify the rest of the contents of a collection from a Galaxy server. Use in '
'conjunction with a positional collection name (mutually exclusive with --requirements-file).')
valid_signature_count_help = 'The number of signatures that must successfully verify the collection. This should be a positive integer ' \
'or all to signify that all signatures must be used to verify the collection. ' \
'Prepend the value with + to fail if no valid signatures are found for the collection (e.g. +all).'
ignore_gpg_status_help = 'A space separated list of status codes to ignore during signature verification (for example, NO_PUBKEY FAILURE). ' \
'Descriptions for the choices can be seen at L(https://github.com/gpg/gnupg/blob/master/doc/DETAILS#general-status-codes).' \
'Note: specify these after positional arguments or use -- to separate them.'
verify_parser.add_argument('--required-valid-signature-count', dest='required_valid_signature_count', type=validate_signature_count,
help=valid_signature_count_help, default=C.GALAXY_REQUIRED_VALID_SIGNATURE_COUNT)
verify_parser.add_argument('--ignore-signature-status-code', dest='ignore_gpg_errors', type=str, action='append',
help=opt_help.argparse.SUPPRESS, default=C.GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES,
choices=list(GPG_ERROR_MAP.keys()))
verify_parser.add_argument('--ignore-signature-status-codes', dest='ignore_gpg_errors', type=str, action='extend', nargs='+',
help=ignore_gpg_status_help, default=C.GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES,
choices=list(GPG_ERROR_MAP.keys()))
def add_install_options(self, parser, parents=None):
galaxy_type = 'collection' if parser.metavar == 'COLLECTION_ACTION' else 'role'
args_kwargs = {}
if galaxy_type == 'collection':
args_kwargs['help'] = 'The collection(s) name or path/url to a tar.gz collection artifact. This is ' \
'mutually exclusive with --requirements-file.'
ignore_errors_help = 'Ignore errors during installation and continue with the next specified ' \
'collection. This will not ignore dependency conflict errors.'
else:
args_kwargs['help'] = 'Role name, URL or tar file'
ignore_errors_help = 'Ignore errors and continue with the next specified role.'
install_parser = parser.add_parser('install', parents=parents,
help='Install {0}(s) from file(s), URL(s) or Ansible '
'Galaxy'.format(galaxy_type))
install_parser.set_defaults(func=self.execute_install)
install_parser.add_argument('args', metavar='{0}_name'.format(galaxy_type), nargs='*', **args_kwargs)
install_parser.add_argument('-i', '--ignore-errors', dest='ignore_errors', action='store_true', default=False,
help=ignore_errors_help)
install_exclusive = install_parser.add_mutually_exclusive_group()
install_exclusive.add_argument('-n', '--no-deps', dest='no_deps', action='store_true', default=False,
help="Don't download {0}s listed as dependencies.".format(galaxy_type))
install_exclusive.add_argument('--force-with-deps', dest='force_with_deps', action='store_true', default=False,
help="Force overwriting an existing {0} and its "
"dependencies.".format(galaxy_type))
valid_signature_count_help = 'The number of signatures that must successfully verify the collection. This should be a positive integer ' \
            'or all to signify that all signatures must be used to verify the collection. ' \
'Prepend the value with + to fail if no valid signatures are found for the collection (e.g. +all).'
ignore_gpg_status_help = 'A space separated list of status codes to ignore during signature verification (for example, NO_PUBKEY FAILURE). ' \
'Descriptions for the choices can be seen at L(https://github.com/gpg/gnupg/blob/master/doc/DETAILS#general-status-codes).' \
'Note: specify these after positional arguments or use -- to separate them.'
if galaxy_type == 'collection':
install_parser.add_argument('-p', '--collections-path', dest='collections_path',
default=self._get_default_collection_path(),
help='The path to the directory containing your collections.')
install_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be installed.')
install_parser.add_argument('--pre', dest='allow_pre_release', action='store_true',
help='Include pre-release versions. Semantic versioning pre-releases are ignored by default')
install_parser.add_argument('-U', '--upgrade', dest='upgrade', action='store_true', default=False,
help='Upgrade installed collection artifacts. This will also update dependencies unless --no-deps is provided')
install_parser.add_argument('--keyring', dest='keyring', default=C.GALAXY_GPG_KEYRING,
help='The keyring used during signature verification') # Eventually default to ~/.ansible/pubring.kbx?
install_parser.add_argument('--disable-gpg-verify', dest='disable_gpg_verify', action='store_true',
default=C.GALAXY_DISABLE_GPG_VERIFY,
help='Disable GPG signature verification when installing collections from a Galaxy server')
install_parser.add_argument('--signature', dest='signatures', action='append',
help='An additional signature source to verify the authenticity of the MANIFEST.json before '
'installing the collection from a Galaxy server. Use in conjunction with a positional '
'collection name (mutually exclusive with --requirements-file).')
install_parser.add_argument('--required-valid-signature-count', dest='required_valid_signature_count', type=validate_signature_count,
help=valid_signature_count_help, default=C.GALAXY_REQUIRED_VALID_SIGNATURE_COUNT)
install_parser.add_argument('--ignore-signature-status-code', dest='ignore_gpg_errors', type=str, action='append',
help=opt_help.argparse.SUPPRESS, default=C.GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES,
choices=list(GPG_ERROR_MAP.keys()))
install_parser.add_argument('--ignore-signature-status-codes', dest='ignore_gpg_errors', type=str, action='extend', nargs='+',
help=ignore_gpg_status_help, default=C.GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES,
choices=list(GPG_ERROR_MAP.keys()))
install_parser.add_argument('--offline', dest='offline', action='store_true', default=False,
help='Install collection artifacts (tarballs) without contacting any distribution servers. '
'This does not apply to collections in remote Git repositories or URLs to remote tarballs.'
)
else:
install_parser.add_argument('-r', '--role-file', dest='requirements',
help='A file containing a list of roles to be installed.')
r_re = re.compile(r'^(?<!-)-[a-zA-Z]*r[a-zA-Z]*') # -r, -fr
contains_r = bool([a for a in self._raw_args if r_re.match(a)])
role_file_re = re.compile(r'--role-file($|=)') # --role-file foo, --role-file=foo
contains_role_file = bool([a for a in self._raw_args if role_file_re.match(a)])
if self._implicit_role and (contains_r or contains_role_file):
# Any collections in the requirements files will also be installed
install_parser.add_argument('--keyring', dest='keyring', default=C.GALAXY_GPG_KEYRING,
help='The keyring used during collection signature verification')
install_parser.add_argument('--disable-gpg-verify', dest='disable_gpg_verify', action='store_true',
default=C.GALAXY_DISABLE_GPG_VERIFY,
help='Disable GPG signature verification when installing collections from a Galaxy server')
install_parser.add_argument('--required-valid-signature-count', dest='required_valid_signature_count', type=validate_signature_count,
help=valid_signature_count_help, default=C.GALAXY_REQUIRED_VALID_SIGNATURE_COUNT)
install_parser.add_argument('--ignore-signature-status-code', dest='ignore_gpg_errors', type=str, action='append',
help=opt_help.argparse.SUPPRESS, default=C.GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES,
choices=list(GPG_ERROR_MAP.keys()))
install_parser.add_argument('--ignore-signature-status-codes', dest='ignore_gpg_errors', type=str, action='extend', nargs='+',
help=ignore_gpg_status_help, default=C.GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES,
choices=list(GPG_ERROR_MAP.keys()))
install_parser.add_argument('-g', '--keep-scm-meta', dest='keep_scm_meta', action='store_true',
default=False,
help='Use tar instead of the scm archive option when packaging the role.')
def add_build_options(self, parser, parents=None):
build_parser = parser.add_parser('build', parents=parents,
help='Build an Ansible collection artifact that can be published to Ansible '
'Galaxy.')
build_parser.set_defaults(func=self.execute_build)
build_parser.add_argument('args', metavar='collection', nargs='*', default=('.',),
help='Path to the collection(s) directory to build. This should be the directory '
'that contains the galaxy.yml file. The default is the current working '
'directory.')
build_parser.add_argument('--output-path', dest='output_path', default='./',
help='The path the collection is built to. The default is the current '
'working directory.')
def add_publish_options(self, parser, parents=None):
publish_parser = parser.add_parser('publish', parents=parents,
help='Publish a collection artifact to Ansible Galaxy.')
publish_parser.set_defaults(func=self.execute_publish)
publish_parser.add_argument('args', metavar='collection_path',
help='The path to the collection tarball to publish.')
publish_parser.add_argument('--no-wait', dest='wait', action='store_false', default=True,
help="Don't wait for import validation results.")
publish_parser.add_argument('--import-timeout', dest='import_timeout', type=int, default=0,
help="The time to wait for the collection import process to finish.")
def post_process_args(self, options):
options = super(GalaxyCLI, self).post_process_args(options)
# ensure we have 'usable' cli option
setattr(options, 'validate_certs', (None if options.ignore_certs is None else not options.ignore_certs))
# the default if validate_certs is None
setattr(options, 'resolved_validate_certs', (options.validate_certs if options.validate_certs is not None else not C.GALAXY_IGNORE_CERTS))
display.verbosity = options.verbosity
return options
def run(self):
super(GalaxyCLI, self).run()
self.galaxy = Galaxy()
def server_config_def(section, key, required, option_type):
config_def = {
'description': 'The %s of the %s Galaxy server' % (key, section),
'ini': [
{
'section': 'galaxy_server.%s' % section,
'key': key,
}
],
'env': [
{'name': 'ANSIBLE_GALAXY_SERVER_%s_%s' % (section.upper(), key.upper())},
],
'required': required,
'type': option_type,
}
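# e.g. for section 'release_galaxy' and key 'url', the env var fallback generated
# above is ANSIBLE_GALAXY_SERVER_RELEASE_GALAXY_URL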
if key in SERVER_ADDITIONAL:
config_def.update(SERVER_ADDITIONAL[key])
return config_def
galaxy_options = {}
for optional_key in ['clear_response_cache', 'no_cache']:
if optional_key in context.CLIARGS:
galaxy_options[optional_key] = context.CLIARGS[optional_key]
config_servers = []
# Need to filter out empty strings or non-truthy values as an empty server list env var is equal to [''].
server_list = [s for s in C.GALAXY_SERVER_LIST or [] if s]
for server_priority, server_key in enumerate(server_list, start=1):
# Abuse the 'plugin config' by making 'galaxy_server' a type of plugin
# Config definitions are looked up dynamically based on the C.GALAXY_SERVER_LIST entry. We look up the
# section [galaxy_server.<server>] for the values url, username, password, and token.
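# An illustrative ansible.cfg entry for a server named 'release_galaxy'
# (the name and token here are placeholders):
#   [galaxy_server.release_galaxy]
#   url = https://galaxy.ansible.com/
#   token = <api_token>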
config_dict = dict((k, server_config_def(server_key, k, req, ensure_type)) for k, req, ensure_type in SERVER_DEF)
defs = AnsibleLoader(yaml_dump(config_dict)).get_single_data()
C.config.initialize_plugin_configuration_definitions('galaxy_server', server_key, defs)
# resolve the config created options above with existing config and user options
server_options = C.config.get_plugin_options('galaxy_server', server_key)
# auth_url is used to create the token, but not directly by GalaxyAPI, so
# it doesn't need to be passed as a kwarg to GalaxyAPI; the same goes for the others we pop here
auth_url = server_options.pop('auth_url')
client_id = server_options.pop('client_id')
token_val = server_options['token'] or NoTokenSentinel
username = server_options['username']
api_version = server_options.pop('api_version')
if server_options['validate_certs'] is None:
server_options['validate_certs'] = context.CLIARGS['resolved_validate_certs']
validate_certs = server_options['validate_certs']
# This allows a user to explicitly force use of an API version when
# multiple versions are supported. This was added for testing
# against pulp_ansible and I'm not sure it has a practical purpose
# outside of this use case. As such, this option is not documented
# as of now
if api_version:
display.warning(
f'The specified "api_version" configuration for the galaxy server "{server_key}" is '
'not a public configuration, and may be removed at any time without warning.'
)
server_options['available_api_versions'] = {'v%s' % api_version: '/v%s' % api_version}
# default case if no auth info is provided.
server_options['token'] = None
if username:
server_options['token'] = BasicAuthToken(username, server_options['password'])
else:
if token_val:
if auth_url:
server_options['token'] = KeycloakToken(access_token=token_val,
auth_url=auth_url,
validate_certs=validate_certs,
client_id=client_id)
else:
# The galaxy v1 / github / django / 'Token'
server_options['token'] = GalaxyToken(token=token_val)
server_options.update(galaxy_options)
config_servers.append(GalaxyAPI(
self.galaxy, server_key,
priority=server_priority,
**server_options
))
cmd_server = context.CLIARGS['api_server']
if context.CLIARGS['api_version']:
api_version = context.CLIARGS['api_version']
display.warning(
'The --api-version is not a public argument, and may be removed at any time without warning.'
)
galaxy_options['available_api_versions'] = {'v%s' % api_version: '/v%s' % api_version}
cmd_token = GalaxyToken(token=context.CLIARGS['api_key'])
validate_certs = context.CLIARGS['resolved_validate_certs']
default_server_timeout = context.CLIARGS['timeout'] if context.CLIARGS['timeout'] is not None else C.GALAXY_SERVER_TIMEOUT
if cmd_server:
# Cmd args take precedence over the config entry but first check if the arg was a name and use that config
# entry, otherwise create a new API entry for the server specified.
config_server = next((s for s in config_servers if s.name == cmd_server), None)
if config_server:
self.api_servers.append(config_server)
else:
self.api_servers.append(GalaxyAPI(
self.galaxy, 'cmd_arg', cmd_server, token=cmd_token,
priority=len(config_servers) + 1,
validate_certs=validate_certs,
timeout=default_server_timeout,
**galaxy_options
))
else:
self.api_servers = config_servers
# Default to C.GALAXY_SERVER if no servers were defined
if len(self.api_servers) == 0:
self.api_servers.append(GalaxyAPI(
self.galaxy, 'default', C.GALAXY_SERVER, token=cmd_token,
priority=0,
validate_certs=validate_certs,
timeout=default_server_timeout,
**galaxy_options
))
# checks api versions once a GalaxyRole makes an api call
# self.api can be used to evaluate the best server immediately
self.lazy_role_api = RoleDistributionServer(None, self.api_servers)
return context.CLIARGS['func']()
@property
def api(self):
return self.lazy_role_api.api
def _get_default_collection_path(self):
return C.COLLECTIONS_PATHS[0]
def _parse_requirements_file(self, requirements_file, allow_old_format=True, artifacts_manager=None, validate_signature_options=True):
"""
Parses an Ansible requirements.yml file and returns all the roles and/or collections defined in it. There are two
requirements file formats:
# v1 (roles only)
- src: The source of the role, required if include is not set. Can be a Galaxy role name, or a URL to an SCM repo or tarball.
name: Downloads the role to the specified name; defaults to the name from Galaxy, or the repo name if src is a URL.
scm: If src is a URL, specify the SCM. Only git or hg are supported, and it defaults to git.
version: The version of the role to download. Can also be a tag, commit, or branch name; defaults to master.
include: Path to additional requirements.yml files.
# v2 (roles and collections)
---
roles:
# Same as v1 format just under the roles key
collections:
- namespace.collection
- name: namespace.collection
version: version identifier, multiple identifiers are separated by ','
source: the URL or a predefined source name that relates to C.GALAXY_SERVER_LIST
type: git|file|url|galaxy
:param requirements_file: The path to the requirements file.
:param allow_old_format: Will fail if a v1 requirements file is found and this is set to False.
:param artifacts_manager: Artifacts manager.
:return: a dict containing roles and collections found in the requirements file.
"""
requirements = {
'roles': [],
'collections': [],
}
b_requirements_file = to_bytes(requirements_file, errors='surrogate_or_strict')
if not os.path.exists(b_requirements_file):
raise AnsibleError("The requirements file '%s' does not exist." % to_native(requirements_file))
display.vvv("Reading requirement file at '%s'" % requirements_file)
with open(b_requirements_file, 'rb') as req_obj:
try:
file_requirements = yaml_load(req_obj)
except YAMLError as err:
raise AnsibleError(
"Failed to parse the requirements yml at '%s' with the following error:\n%s"
% (to_native(requirements_file), to_native(err)))
if file_requirements is None:
raise AnsibleError("No requirements found in file '%s'" % to_native(requirements_file))
def parse_role_req(requirement):
if "include" not in requirement:
role = RoleRequirement.role_yaml_parse(requirement)
display.vvv("found role %s in yaml file" % to_text(role))
if "name" not in role and "src" not in role:
raise AnsibleError("Must specify name or src for role")
return [GalaxyRole(self.galaxy, self.lazy_role_api, **role)]
else:
b_include_path = to_bytes(requirement["include"], errors="surrogate_or_strict")
if not os.path.isfile(b_include_path):
raise AnsibleError("Failed to find include requirements file '%s' in '%s'"
% (to_native(b_include_path), to_native(requirements_file)))
with open(b_include_path, 'rb') as f_include:
try:
return [GalaxyRole(self.galaxy, self.lazy_role_api, **r) for r in
(RoleRequirement.role_yaml_parse(i) for i in yaml_load(f_include))]
except Exception as e:
raise AnsibleError("Unable to load data from include requirements file: %s %s"
% (to_native(requirements_file), to_native(e)))
if isinstance(file_requirements, list):
# Older format that contains only roles
if not allow_old_format:
raise AnsibleError("Expecting requirements file to be a dict with the key 'collections' that contains "
"a list of collections to install")
for role_req in file_requirements:
requirements['roles'] += parse_role_req(role_req)
else:
# Newer format with a collections and/or roles key
extra_keys = set(file_requirements.keys()).difference(set(['roles', 'collections']))
if extra_keys:
raise AnsibleError("Expecting only 'roles' and/or 'collections' as base keys in the requirements "
"file. Found: %s" % (to_native(", ".join(extra_keys))))
for role_req in file_requirements.get('roles') or []:
requirements['roles'] += parse_role_req(role_req)
requirements['collections'] = [
Requirement.from_requirement_dict(
self._init_coll_req_dict(collection_req),
artifacts_manager,
validate_signature_options,
)
for collection_req in file_requirements.get('collections') or []
]
return requirements
def _init_coll_req_dict(self, coll_req):
if not isinstance(coll_req, dict):
# Assume it's a string:
return {'name': coll_req}
if (
'name' not in coll_req or
not coll_req.get('source') or
coll_req.get('type', 'galaxy') != 'galaxy'
):
return coll_req
# Try and match up the requirement source with our list of Galaxy API
# servers defined in the config, otherwise create a server with that
# URL without any auth.
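# e.g. a requirement {'name': 'ns.coll', 'source': 'release_galaxy'} (placeholder
# names) resolves to the configured server of that name, while an unrecognized
# source URL gets a one-off, unauthenticated GalaxyAPI instance.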
coll_req['source'] = next(
iter(
srvr for srvr in self.api_servers
if coll_req['source'] in {srvr.name, srvr.api_server}
),
GalaxyAPI(
self.galaxy,
'explicit_requirement_{name!s}'.format(
name=coll_req['name'],
),
coll_req['source'],
validate_certs=context.CLIARGS['resolved_validate_certs'],
),
)
return coll_req
@staticmethod
def exit_without_ignore(rc=1):
"""
Exits with the specified return code unless the
option --ignore-errors was specified
"""
if not context.CLIARGS['ignore_errors']:
raise AnsibleError('- you can use --ignore-errors to skip failed roles and finish processing the list.')
@staticmethod
def _display_role_info(role_info):
text = [u"", u"Role: %s" % to_text(role_info['name'])]
# Get the top-level 'description' first, falling back to galaxy_info['galaxy_info']['description'].
galaxy_info = role_info.get('galaxy_info', {})
description = role_info.get('description', galaxy_info.get('description', ''))
text.append(u"\tdescription: %s" % description)
for k in sorted(role_info.keys()):
if k in GalaxyCLI.SKIP_INFO_KEYS:
continue
if isinstance(role_info[k], dict):
text.append(u"\t%s:" % (k))
for key in sorted(role_info[k].keys()):
if key in GalaxyCLI.SKIP_INFO_KEYS:
continue
text.append(u"\t\t%s: %s" % (key, role_info[k][key]))
else:
text.append(u"\t%s: %s" % (k, role_info[k]))
# make sure we have a trailing newline returned
text.append(u"")
return u'\n'.join(text)
@staticmethod
def _resolve_path(path):
return os.path.abspath(os.path.expanduser(os.path.expandvars(path)))
@staticmethod
def _get_skeleton_galaxy_yml(template_path, inject_data):
with open(to_bytes(template_path, errors='surrogate_or_strict'), 'rb') as template_obj:
meta_template = to_text(template_obj.read(), errors='surrogate_or_strict')
galaxy_meta = get_collections_galaxy_meta_info()
required_config = []
optional_config = []
for meta_entry in galaxy_meta:
config_list = required_config if meta_entry.get('required', False) else optional_config
value = inject_data.get(meta_entry['key'], None)
if not value:
meta_type = meta_entry.get('type', 'str')
if meta_type == 'str':
value = ''
elif meta_type == 'list':
value = []
elif meta_type == 'dict':
value = {}
meta_entry['value'] = value
config_list.append(meta_entry)
link_pattern = re.compile(r"L\(([^)]+),\s+([^)]+)\)")
const_pattern = re.compile(r"C\(([^)]+)\)")
def comment_ify(v):
if isinstance(v, list):
v = ". ".join([l.rstrip('.') for l in v])
v = link_pattern.sub(r"\1 <\2>", v)
v = const_pattern.sub(r"'\1'", v)
return textwrap.fill(v, width=117, initial_indent="# ", subsequent_indent="# ", break_on_hyphens=False)
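# e.g. comment_ify("See L(the docs, https://docs.ansible.com) and C(galaxy.yml)")
# yields a wrapped, '# '-prefixed line:
#   # See the docs <https://docs.ansible.com> and 'galaxy.yml'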
loader = DataLoader()
templar = Templar(loader, variables={'required_config': required_config, 'optional_config': optional_config})
templar.environment.filters['comment_ify'] = comment_ify
meta_value = templar.template(meta_template)
return meta_value
def _require_one_of_collections_requirements(
self, collections, requirements_file,
signatures=None,
artifacts_manager=None,
):
if collections and requirements_file:
raise AnsibleError("The positional collection_name arg and --requirements-file are mutually exclusive.")
elif not collections and not requirements_file:
raise AnsibleError("You must specify a collection name or a requirements file.")
elif requirements_file:
if signatures is not None:
raise AnsibleError(
"The --signatures option and --requirements-file are mutually exclusive. "
"Use the --signatures with positional collection_name args or provide a "
"'signatures' key for requirements in the --requirements-file."
)
requirements_file = GalaxyCLI._resolve_path(requirements_file)
requirements = self._parse_requirements_file(
requirements_file,
allow_old_format=False,
artifacts_manager=artifacts_manager,
)
else:
requirements = {
'collections': [
Requirement.from_string(coll_input, artifacts_manager, signatures)
for coll_input in collections
],
'roles': [],
}
return requirements
############################
# execute actions
############################
def execute_role(self):
"""
Perform the action on an Ansible Galaxy role. Must be combined with a further action like delete/install/init
as listed below.
"""
# To satisfy doc build
pass
def execute_collection(self):
"""
Perform the action on an Ansible Galaxy collection. Must be combined with a further action like init/install as
listed below.
"""
# To satisfy doc build
pass
def execute_build(self):
"""
Build an Ansible Galaxy collection artifact that can be stored in a central repository like Ansible Galaxy.
By default, this command builds from the current working directory. You can optionally pass in the
collection input path (where the ``galaxy.yml`` file is).
"""
force = context.CLIARGS['force']
output_path = GalaxyCLI._resolve_path(context.CLIARGS['output_path'])
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
if not os.path.exists(b_output_path):
os.makedirs(b_output_path)
elif os.path.isfile(b_output_path):
raise AnsibleError("- the output collection directory %s is a file - aborting" % to_native(output_path))
for collection_path in context.CLIARGS['args']:
collection_path = GalaxyCLI._resolve_path(collection_path)
build_collection(
to_text(collection_path, errors='surrogate_or_strict'),
to_text(output_path, errors='surrogate_or_strict'),
force,
)
@with_collection_artifacts_manager
def execute_download(self, artifacts_manager=None):
"""Download collections and their dependencies as a tarball for an offline install."""
collections = context.CLIARGS['args']
no_deps = context.CLIARGS['no_deps']
download_path = context.CLIARGS['download_path']
requirements_file = context.CLIARGS['requirements']
if requirements_file:
requirements_file = GalaxyCLI._resolve_path(requirements_file)
requirements = self._require_one_of_collections_requirements(
collections, requirements_file,
artifacts_manager=artifacts_manager,
)['collections']
download_path = GalaxyCLI._resolve_path(download_path)
b_download_path = to_bytes(download_path, errors='surrogate_or_strict')
if not os.path.exists(b_download_path):
os.makedirs(b_download_path)
download_collections(
requirements, download_path, self.api_servers, no_deps,
context.CLIARGS['allow_pre_release'],
artifacts_manager=artifacts_manager,
)
return 0
def execute_init(self):
"""
Creates the skeleton framework of a role or collection that complies with the Galaxy metadata format.
Requires a role or collection name. The collection name must be in the format ``<namespace>.<collection>``.
"""
galaxy_type = context.CLIARGS['type']
init_path = context.CLIARGS['init_path']
force = context.CLIARGS['force']
obj_skeleton = context.CLIARGS['{0}_skeleton'.format(galaxy_type)]
obj_name = context.CLIARGS['{0}_name'.format(galaxy_type)]
inject_data = dict(
description='your {0} description'.format(galaxy_type),
ansible_plugin_list_dir=get_versioned_doclink('plugins/plugins.html'),
)
if galaxy_type == 'role':
inject_data.update(dict(
author='your name',
company='your company (optional)',
license='license (GPL-2.0-or-later, MIT, etc)',
role_name=obj_name,
role_type=context.CLIARGS['role_type'],
issue_tracker_url='http://example.com/issue/tracker',
repository_url='http://example.com/repository',
documentation_url='http://docs.example.com',
homepage_url='http://example.com',
min_ansible_version=ansible_version[:3], # x.y
dependencies=[],
))
skeleton_ignore_expressions = C.GALAXY_ROLE_SKELETON_IGNORE
obj_path = os.path.join(init_path, obj_name)
elif galaxy_type == 'collection':
namespace, collection_name = obj_name.split('.', 1)
inject_data.update(dict(
namespace=namespace,
collection_name=collection_name,
version='1.0.0',
readme='README.md',
authors=['your name <[email protected]>'],
license=['GPL-2.0-or-later'],
repository='http://example.com/repository',
documentation='http://docs.example.com',
homepage='http://example.com',
issues='http://example.com/issue/tracker',
build_ignore=[],
))
skeleton_ignore_expressions = C.GALAXY_COLLECTION_SKELETON_IGNORE
obj_path = os.path.join(init_path, namespace, collection_name)
b_obj_path = to_bytes(obj_path, errors='surrogate_or_strict')
if os.path.exists(b_obj_path):
if os.path.isfile(obj_path):
raise AnsibleError("- the path %s already exists, but is a file - aborting" % to_native(obj_path))
elif not force:
raise AnsibleError("- the directory %s already exists. "
"You can use --force to re-initialize this directory,\n"
"however it will reset any main.yml files that may have\n"
"been modified there already." % to_native(obj_path))
# delete the contents rather than the collection root in case init was run from the root (--init-path ../../)
for root, dirs, files in os.walk(b_obj_path, topdown=True):
for old_dir in dirs:
path = os.path.join(root, old_dir)
shutil.rmtree(path)
for old_file in files:
path = os.path.join(root, old_file)
os.unlink(path)
if obj_skeleton is not None:
own_skeleton = False
else:
own_skeleton = True
obj_skeleton = self.galaxy.default_role_skeleton_path
skeleton_ignore_expressions = ['^.*/.git_keep$']
obj_skeleton = os.path.expanduser(obj_skeleton)
skeleton_ignore_re = [re.compile(x) for x in skeleton_ignore_expressions]
if not os.path.exists(obj_skeleton):
raise AnsibleError("- the skeleton path '{0}' does not exist, cannot init {1}".format(
to_native(obj_skeleton), galaxy_type)
)
loader = DataLoader()
templar = Templar(loader, variables=inject_data)
# create role directory
if not os.path.exists(b_obj_path):
os.makedirs(b_obj_path)
for root, dirs, files in os.walk(obj_skeleton, topdown=True):
rel_root = os.path.relpath(root, obj_skeleton)
rel_dirs = rel_root.split(os.sep)
rel_root_dir = rel_dirs[0]
if galaxy_type == 'collection':
# A collection can contain templates in playbooks/*/templates and roles/*/templates
in_templates_dir = rel_root_dir in ['playbooks', 'roles'] and 'templates' in rel_dirs
else:
in_templates_dir = rel_root_dir == 'templates'
# Filter out ignored directory names
# Use [:] to mutate the list os.walk uses
dirs[:] = [d for d in dirs if not any(r.match(d) for r in skeleton_ignore_re)]
for f in files:
filename, ext = os.path.splitext(f)
if any(r.match(os.path.join(rel_root, f)) for r in skeleton_ignore_re):
continue
if galaxy_type == 'collection' and own_skeleton and rel_root == '.' and f == 'galaxy.yml.j2':
# Special use case for galaxy.yml.j2 in our own default collection skeleton. We build the options
# dynamically which requires special options to be set.
# The templated data's keys must match the key name but the inject data contains collection_name
# instead of name. We just make a copy and change the key back to name for this file.
template_data = inject_data.copy()
template_data['name'] = template_data.pop('collection_name')
meta_value = GalaxyCLI._get_skeleton_galaxy_yml(os.path.join(root, rel_root, f), template_data)
b_dest_file = to_bytes(os.path.join(obj_path, rel_root, filename), errors='surrogate_or_strict')
with open(b_dest_file, 'wb') as galaxy_obj:
galaxy_obj.write(to_bytes(meta_value, errors='surrogate_or_strict'))
elif ext == ".j2" and not in_templates_dir:
src_template = os.path.join(root, f)
dest_file = os.path.join(obj_path, rel_root, filename)
template_data = to_text(loader._get_file_contents(src_template)[0], errors='surrogate_or_strict')
b_rendered = to_bytes(templar.template(template_data), errors='surrogate_or_strict')
with open(dest_file, 'wb') as df:
df.write(b_rendered)
else:
f_rel_path = os.path.relpath(os.path.join(root, f), obj_skeleton)
shutil.copyfile(os.path.join(root, f), os.path.join(obj_path, f_rel_path), follow_symlinks=False)
for d in dirs:
b_dir_path = to_bytes(os.path.join(obj_path, rel_root, d), errors='surrogate_or_strict')
if os.path.exists(b_dir_path):
continue
b_src_dir = to_bytes(os.path.join(root, d), errors='surrogate_or_strict')
if os.path.islink(b_src_dir):
shutil.copyfile(b_src_dir, b_dir_path, follow_symlinks=False)
else:
os.makedirs(b_dir_path)
display.display("- %s %s was created successfully" % (galaxy_type.title(), obj_name))
def execute_info(self):
"""
prints out detailed information about an installed role as well as info available from the galaxy API.
"""
roles_path = context.CLIARGS['roles_path']
data = ''
for role in context.CLIARGS['args']:
role_info = {'path': roles_path}
gr = GalaxyRole(self.galaxy, self.lazy_role_api, role)
install_info = gr.install_info
if install_info:
if 'version' in install_info:
install_info['installed_version'] = install_info['version']
del install_info['version']
role_info.update(install_info)
if not context.CLIARGS['offline']:
remote_data = None
try:
remote_data = self.api.lookup_role_by_name(role, False)
except GalaxyError as e:
if e.http_code == 400 and 'Bad Request' in e.message:
# Role does not exist in Ansible Galaxy
data = u"- the role %s was not found" % role
break
raise AnsibleError("Unable to find info about '%s': %s" % (role, e))
if remote_data:
role_info.update(remote_data)
else:
data = u"- the role %s was not found" % role
break
elif context.CLIARGS['offline'] and not gr._exists:
data = u"- the role %s was not found" % role
break
if gr.metadata:
role_info.update(gr.metadata)
req = RoleRequirement()
role_spec = req.role_yaml_parse({'role': role})
if role_spec:
role_info.update(role_spec)
data += self._display_role_info(role_info)
self.pager(data)
@with_collection_artifacts_manager
def execute_verify(self, artifacts_manager=None):
"""Compare checksums with the collection(s) found on the server and the installed copy. This does not verify dependencies."""
collections = context.CLIARGS['args']
search_paths = AnsibleCollectionConfig.collection_paths
ignore_errors = context.CLIARGS['ignore_errors']
local_verify_only = context.CLIARGS['offline']
requirements_file = context.CLIARGS['requirements']
signatures = context.CLIARGS['signatures']
if signatures is not None:
signatures = list(signatures)
requirements = self._require_one_of_collections_requirements(
collections, requirements_file,
signatures=signatures,
artifacts_manager=artifacts_manager,
)['collections']
resolved_paths = [validate_collection_path(GalaxyCLI._resolve_path(path)) for path in search_paths]
results = verify_collections(
requirements, resolved_paths,
self.api_servers, ignore_errors,
local_verify_only=local_verify_only,
artifacts_manager=artifacts_manager,
)
if any(result for result in results if not result.success):
return 1
return 0
@with_collection_artifacts_manager
def execute_install(self, artifacts_manager=None):
"""
Install one or more roles (``ansible-galaxy role install``), or one or more collections (``ansible-galaxy collection install``).
You can pass in a list (roles or collections) or use the file
option listed below (these are mutually exclusive). If you pass in a list, it
can be a name (which will be downloaded via the Galaxy API and GitHub), or it can be a local tar archive file.
"""
install_items = context.CLIARGS['args']
requirements_file = context.CLIARGS['requirements']
collection_path = None
signatures = context.CLIARGS.get('signatures')
if signatures is not None:
signatures = list(signatures)
if requirements_file:
requirements_file = GalaxyCLI._resolve_path(requirements_file)
two_type_warning = "The requirements file '%s' contains {0}s which will be ignored. To install these {0}s " \
"run 'ansible-galaxy {0} install -r' or to install both at the same time run " \
"'ansible-galaxy install -r' without a custom install path." % to_text(requirements_file)
# TODO: Would be nice to share the same behaviour with args and -r in collections and roles.
collection_requirements = []
role_requirements = []
if context.CLIARGS['type'] == 'collection':
collection_path = GalaxyCLI._resolve_path(context.CLIARGS['collections_path'])
requirements = self._require_one_of_collections_requirements(
install_items, requirements_file,
signatures=signatures,
artifacts_manager=artifacts_manager,
)
collection_requirements = requirements['collections']
if requirements['roles']:
display.vvv(two_type_warning.format('role'))
else:
if not install_items and requirements_file is None:
raise AnsibleOptionsError("- you must specify a user/role name or a roles file")
if requirements_file:
if not (requirements_file.endswith('.yaml') or requirements_file.endswith('.yml')):
raise AnsibleError("Invalid role requirements file, it must end with a .yml or .yaml extension")
galaxy_args = self._raw_args
will_install_collections = self._implicit_role and '-p' not in galaxy_args and '--roles-path' not in galaxy_args
requirements = self._parse_requirements_file(
requirements_file,
artifacts_manager=artifacts_manager,
validate_signature_options=will_install_collections,
)
role_requirements = requirements['roles']
# We can only install collections and roles at the same time if the type wasn't specified and the -p
# argument was not used. If collections are present in the requirements then at least display a msg.
if requirements['collections'] and (not self._implicit_role or '-p' in galaxy_args or
'--roles-path' in galaxy_args):
# We only want to display a warning for 'ansible-galaxy install -r ... -p ...'. In other cases the user
# was explicit about the type and shouldn't care that collections were skipped.
display_func = display.warning if self._implicit_role else display.vvv
display_func(two_type_warning.format('collection'))
else:
collection_path = self._get_default_collection_path()
collection_requirements = requirements['collections']
else:
# roles were specified directly, so we'll just go out grab them
# (and their dependencies, unless the user doesn't want us to).
for rname in context.CLIARGS['args']:
role = RoleRequirement.role_yaml_parse(rname.strip())
role_requirements.append(GalaxyRole(self.galaxy, self.lazy_role_api, **role))
if not role_requirements and not collection_requirements:
display.display("Skipping install, no requirements found")
return
if role_requirements:
display.display("Starting galaxy role install process")
self._execute_install_role(role_requirements)
if collection_requirements:
display.display("Starting galaxy collection install process")
# Collections can technically be installed even when ansible-galaxy is in role mode so we need to pass in
# the install path as context.CLIARGS['collections_path'] won't be set (default is calculated above).
self._execute_install_collection(
collection_requirements, collection_path,
artifacts_manager=artifacts_manager,
)
def _execute_install_collection(
self, requirements, path, artifacts_manager,
):
force = context.CLIARGS['force']
ignore_errors = context.CLIARGS['ignore_errors']
no_deps = context.CLIARGS['no_deps']
force_with_deps = context.CLIARGS['force_with_deps']
try:
disable_gpg_verify = context.CLIARGS['disable_gpg_verify']
except KeyError:
if self._implicit_role:
raise AnsibleError(
'Unable to properly parse command line arguments. Please use "ansible-galaxy collection install" '
'instead of "ansible-galaxy install".'
)
raise
# If `ansible-galaxy install` is used, collection-only options aren't available to the user and won't be in context.CLIARGS
allow_pre_release = context.CLIARGS.get('allow_pre_release', False)
upgrade = context.CLIARGS.get('upgrade', False)
collections_path = C.COLLECTIONS_PATHS
managed_paths = set(validate_collection_path(p) for p in C.COLLECTIONS_PATHS)
read_req_paths = set(validate_collection_path(p) for p in AnsibleCollectionConfig.collection_paths)
unexpected_path = C.GALAXY_COLLECTIONS_PATH_WARNING and not any(p.startswith(path) for p in managed_paths)
if unexpected_path and any(p.startswith(path) for p in read_req_paths):
display.warning(
f"The specified collections path '{path}' appears to be part of the pip Ansible package. "
"Managing these directly with ansible-galaxy could break the Ansible package. "
"Install collections to a configured collections path, which will take precedence over "
"collections found in the PYTHONPATH."
)
elif unexpected_path:
display.warning("The specified collections path '%s' is not part of the configured Ansible "
"collections paths '%s'. The installed collection will not be picked up in an Ansible "
"run, unless within a playbook-adjacent collections directory." % (to_text(path), to_text(":".join(collections_path))))
output_path = validate_collection_path(path)
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
if not os.path.exists(b_output_path):
os.makedirs(b_output_path)
install_collections(
requirements, output_path, self.api_servers, ignore_errors,
no_deps, force, force_with_deps, upgrade,
allow_pre_release=allow_pre_release,
artifacts_manager=artifacts_manager,
disable_gpg_verify=disable_gpg_verify,
offline=context.CLIARGS.get('offline', False),
read_requirement_paths=read_req_paths,
)
return 0
def _execute_install_role(self, requirements):
role_file = context.CLIARGS['requirements']
no_deps = context.CLIARGS['no_deps']
force_deps = context.CLIARGS['force_with_deps']
force = context.CLIARGS['force'] or force_deps
for role in requirements:
# only process roles in the roles file whose names match, if any names were given
if role_file and context.CLIARGS['args'] and role.name not in context.CLIARGS['args']:
display.vvv('Skipping role %s' % role.name)
continue
display.vvv('Processing role %s ' % role.name)
# query the galaxy API for the role data
if role.install_info is not None:
if role.install_info['version'] != role.version or force:
if force:
display.display('- changing role %s from %s to %s' %
(role.name, role.install_info['version'], role.version or "unspecified"))
role.remove()
else:
display.warning('- %s (%s) is already installed - use --force to change version to %s' %
(role.name, role.install_info['version'], role.version or "unspecified"))
continue
else:
if not force:
display.display('- %s is already installed, skipping.' % str(role))
continue
try:
installed = role.install()
except AnsibleError as e:
display.warning(u"- %s was NOT installed successfully: %s " % (role.name, to_text(e)))
self.exit_without_ignore()
continue
# install dependencies, if we want them
if not no_deps and installed:
if not role.metadata:
# NOTE: the meta file is also required for installing the role, not just dependencies
display.warning("Meta file %s is empty. Skipping dependencies." % role.path)
else:
role_dependencies = role.metadata_dependencies + role.requirements
for dep in role_dependencies:
display.debug('Installing dep %s' % dep)
dep_req = RoleRequirement()
dep_info = dep_req.role_yaml_parse(dep)
dep_role = GalaxyRole(self.galaxy, self.lazy_role_api, **dep_info)
if '.' not in dep_role.name and '.' not in dep_role.src and dep_role.scm is None:
# we know we can skip this, as it's not going to
# be found on galaxy.ansible.com
continue
if dep_role.install_info is None:
if dep_role not in requirements:
display.display('- adding dependency: %s' % to_text(dep_role))
requirements.append(dep_role)
else:
display.display('- dependency %s is already pending installation.' % dep_role.name)
else:
if dep_role.install_info['version'] != dep_role.version:
if force_deps:
display.display('- changing dependent role %s from %s to %s' %
(dep_role.name, dep_role.install_info['version'], dep_role.version or "unspecified"))
dep_role.remove()
requirements.append(dep_role)
else:
display.warning('- dependency %s (%s) from role %s differs from already installed version (%s), skipping' %
(to_text(dep_role), dep_role.version, role.name, dep_role.install_info['version']))
else:
if force_deps:
requirements.append(dep_role)
else:
display.display('- dependency %s is already installed, skipping.' % dep_role.name)
if not installed:
display.warning("- %s was NOT installed successfully." % role.name)
self.exit_without_ignore()
return 0
def execute_remove(self):
"""
removes the list of roles passed as arguments from the local system.
"""
if not context.CLIARGS['args']:
raise AnsibleOptionsError('- you must specify at least one role to remove.')
for role_name in context.CLIARGS['args']:
role = GalaxyRole(self.galaxy, self.api, role_name)
try:
if role.remove():
display.display('- successfully removed %s' % role_name)
else:
display.display('- %s is not installed, skipping.' % role_name)
except Exception as e:
raise AnsibleError("Failed to remove role %s: %s" % (role_name, to_native(e)))
return 0
def execute_list(self):
"""
List installed collections or roles
"""
if context.CLIARGS['type'] == 'role':
self.execute_list_role()
elif context.CLIARGS['type'] == 'collection':
self.execute_list_collection()
def execute_list_role(self):
"""
List all roles installed on the local system or a specific role
"""
path_found = False
role_found = False
warnings = []
roles_search_paths = context.CLIARGS['roles_path']
role_name = context.CLIARGS['role']
for path in roles_search_paths:
role_path = GalaxyCLI._resolve_path(path)
if os.path.isdir(path):
path_found = True
else:
warnings.append("- the configured path {0} does not exist.".format(path))
continue
if role_name:
# show the requested role, if it exists
gr = GalaxyRole(self.galaxy, self.lazy_role_api, role_name, path=os.path.join(role_path, role_name))
if os.path.isdir(gr.path):
role_found = True
display.display('# %s' % os.path.dirname(gr.path))
_display_role(gr)
break
warnings.append("- the role %s was not found" % role_name)
else:
if not os.path.exists(role_path):
warnings.append("- the configured path %s does not exist." % role_path)
continue
if not os.path.isdir(role_path):
warnings.append("- the configured path %s, exists, but it is not a directory." % role_path)
continue
display.display('# %s' % role_path)
path_files = os.listdir(role_path)
for path_file in path_files:
gr = GalaxyRole(self.galaxy, self.lazy_role_api, path_file, path=path)
if gr.metadata:
_display_role(gr)
# Do not warn if the role was found in any of the search paths
if role_found and role_name:
warnings = []
for w in warnings:
display.warning(w)
if not path_found:
raise AnsibleOptionsError(
"- None of the provided paths were usable. Please specify a valid path with --{0}s-path".format(context.CLIARGS['type'])
)
return 0
@with_collection_artifacts_manager
def execute_list_collection(self, artifacts_manager=None):
"""
List all collections installed on the local system
:param artifacts_manager: Artifacts manager.
"""
if artifacts_manager is not None:
artifacts_manager.require_build_metadata = False
output_format = context.CLIARGS['output_format']
collection_name = context.CLIARGS['collection']
default_collections_path = set(C.COLLECTIONS_PATHS)
collections_search_paths = (
set(context.CLIARGS['collections_path'] or []) | default_collections_path | set(AnsibleCollectionConfig.collection_paths)
)
collections_in_paths = {}
warnings = []
path_found = False
collection_found = False
namespace_filter = None
collection_filter = None
if collection_name:
# list a specific collection
validate_collection_name(collection_name)
namespace_filter, collection_filter = collection_name.split('.')
collections = list(find_existing_collections(
list(collections_search_paths),
artifacts_manager,
namespace_filter=namespace_filter,
collection_filter=collection_filter,
dedupe=False
))
seen = set()
fqcn_width, version_width = _get_collection_widths(collections)
for collection in sorted(collections, key=lambda c: c.src):
collection_found = True
collection_path = pathlib.Path(to_text(collection.src)).parent.parent.as_posix()
if output_format in {'yaml', 'json'}:
collections_in_paths.setdefault(collection_path, {})
collections_in_paths[collection_path][collection.fqcn] = {'version': collection.ver}
else:
if collection_path not in seen:
_display_header(
collection_path,
'Collection',
'Version',
fqcn_width,
version_width
)
seen.add(collection_path)
_display_collection(collection, fqcn_width, version_width)
path_found = False
for path in collections_search_paths:
if not os.path.exists(path):
if path in default_collections_path:
# don't warn for missing default paths
continue
warnings.append("- the configured path {0} does not exist.".format(path))
elif os.path.exists(path) and not os.path.isdir(path):
warnings.append("- the configured path {0}, exists, but it is not a directory.".format(path))
else:
path_found = True
# Do not warn if the specific collection was found in any of the search paths
if collection_found and collection_name:
warnings = []
for w in warnings:
display.warning(w)
if not collections and not path_found:
raise AnsibleOptionsError(
"- None of the provided paths were usable. Please specify a valid path with --{0}s-path".format(context.CLIARGS['type'])
)
if output_format == 'json':
display.display(json.dumps(collections_in_paths))
elif output_format == 'yaml':
display.display(yaml_dump(collections_in_paths))
return 0
def execute_publish(self):
"""
Publish a collection into Ansible Galaxy. Requires the path to the collection tarball to publish.
"""
collection_path = GalaxyCLI._resolve_path(context.CLIARGS['args'])
wait = context.CLIARGS['wait']
timeout = context.CLIARGS['import_timeout']
publish_collection(collection_path, self.api, wait, timeout)
def execute_search(self):
''' searches for roles on the Ansible Galaxy server'''
page_size = 1000
search = None
if context.CLIARGS['args']:
search = '+'.join(context.CLIARGS['args'])
if not search and not context.CLIARGS['platforms'] and not context.CLIARGS['galaxy_tags'] and not context.CLIARGS['author']:
raise AnsibleError("Invalid query. At least one search term, platform, galaxy tag or author must be provided.")
response = self.api.search_roles(search, platforms=context.CLIARGS['platforms'],
tags=context.CLIARGS['galaxy_tags'], author=context.CLIARGS['author'], page_size=page_size)
if response['count'] == 0:
display.warning("No roles match your search.")
return 0
data = [u'']
if response['count'] > page_size:
data.append(u"Found %d roles matching your search. Showing first %s." % (response['count'], page_size))
else:
data.append(u"Found %d roles matching your search:" % response['count'])
max_len = []
for role in response['results']:
max_len.append(len(role['username'] + '.' + role['name']))
name_len = max(max_len)
format_str = u" %%-%ds %%s" % name_len
data.append(u'')
data.append(format_str % (u"Name", u"Description"))
data.append(format_str % (u"----", u"-----------"))
for role in response['results']:
data.append(format_str % (u'%s.%s' % (role['username'], role['name']), role['description']))
data = u'\n'.join(data)
self.pager(data)
return 0
def execute_import(self):
""" used to import a role into Ansible Galaxy """
colors = {
'INFO': 'normal',
'WARNING': C.COLOR_WARN,
'ERROR': C.COLOR_ERROR,
'SUCCESS': C.COLOR_OK,
'FAILED': C.COLOR_ERROR,
}
github_user = to_text(context.CLIARGS['github_user'], errors='surrogate_or_strict')
github_repo = to_text(context.CLIARGS['github_repo'], errors='surrogate_or_strict')
if context.CLIARGS['check_status']:
task = self.api.get_import_task(github_user=github_user, github_repo=github_repo)
else:
# Submit an import request
task = self.api.create_import_task(github_user, github_repo,
reference=context.CLIARGS['reference'],
role_name=context.CLIARGS['role_name'])
if len(task) > 1:
# found multiple roles associated with github_user/github_repo
display.display("WARNING: More than one Galaxy role associated with Github repo %s/%s." % (github_user, github_repo),
color='yellow')
display.display("The following Galaxy roles are being updated:" + u'\n', color=C.COLOR_CHANGED)
for t in task:
display.display('%s.%s' % (t['summary_fields']['role']['namespace'], t['summary_fields']['role']['name']), color=C.COLOR_CHANGED)
display.display(u'\nTo properly namespace this role, remove each of the above and re-import %s/%s from scratch' % (github_user, github_repo),
color=C.COLOR_CHANGED)
return 0
# found a single role as expected
display.display("Successfully submitted import request %d" % task[0]['id'])
if not context.CLIARGS['wait']:
display.display("Role name: %s" % task[0]['summary_fields']['role']['name'])
display.display("Repo: %s/%s" % (task[0]['github_user'], task[0]['github_repo']))
if context.CLIARGS['check_status'] or context.CLIARGS['wait']:
# Get the status of the import
msg_list = []
finished = False
while not finished:
task = self.api.get_import_task(task_id=task[0]['id'])
for msg in task[0]['summary_fields']['task_messages']:
if msg['id'] not in msg_list:
display.display(msg['message_text'], color=colors[msg['message_type']])
msg_list.append(msg['id'])
if task[0]['state'] in ['SUCCESS', 'FAILED']:
finished = True
else:
time.sleep(10)
return 0
def execute_setup(self):
""" Setup an integration from Github or Travis for Ansible Galaxy roles"""
if context.CLIARGS['setup_list']:
# List existing integration secrets
secrets = self.api.list_secrets()
if len(secrets) == 0:
# None found
display.display("No integrations found.")
return 0
display.display(u'\n' + "ID Source Repo", color=C.COLOR_OK)
display.display("---------- ---------- ----------", color=C.COLOR_OK)
for secret in secrets:
display.display("%-10s %-10s %s/%s" % (secret['id'], secret['source'], secret['github_user'],
secret['github_repo']), color=C.COLOR_OK)
return 0
if context.CLIARGS['remove_id']:
# Remove a secret
self.api.remove_secret(context.CLIARGS['remove_id'])
display.display("Secret removed. Integrations using this secret will not longer work.", color=C.COLOR_OK)
return 0
source = context.CLIARGS['source']
github_user = context.CLIARGS['github_user']
github_repo = context.CLIARGS['github_repo']
secret = context.CLIARGS['secret']
resp = self.api.add_secret(source, github_user, github_repo, secret)
display.display("Added integration for %s %s/%s" % (resp['source'], resp['github_user'], resp['github_repo']))
return 0
def execute_delete(self):
""" Delete a role from Ansible Galaxy. """
github_user = context.CLIARGS['github_user']
github_repo = context.CLIARGS['github_repo']
resp = self.api.delete_role(github_user, github_repo)
if len(resp['deleted_roles']) > 1:
display.display("Deleted the following roles:")
display.display("ID User Name")
display.display("------ --------------- ----------")
for role in resp['deleted_roles']:
display.display("%-8s %-15s %s" % (role.id, role.namespace, role.name))
display.display(resp['status'])
return 0
def main(args=None):
GalaxyCLI.cli_executor(args)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,901 |
ansible-galaxy failed with AttributeError
|
### Summary
When installing roles from a requirements.yml file, for example -
```
ansible-galaxy role install -r requirements.yml -vvvv
```
With `requirements.yml` (I understand this file syntax is wrong)
```yaml
---
community.vmware
```
results in
```
ERROR! Unexpected Exception, this is probably a bug: 'str' object has no attribute 'keys'
the full traceback was:
Traceback (most recent call last):
File "/Volumes/data/src/ansible/lib/ansible/cli/__init__.py", line 659, in cli_executor
exit_code = cli.run()
^^^^^^^^^
File "/Volumes/data/src/ansible/bin/ansible-galaxy", line 749, in run
return context.CLIARGS['func']()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/data/src/ansible/bin/ansible-galaxy", line 120, in method_wrapper
return wrapped_method(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/data/src/ansible/bin/ansible-galaxy", line 1368, in execute_install
requirements = self._parse_requirements_file(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/data/src/ansible/bin/ansible-galaxy", line 840, in _parse_requirements_file
extra_keys = set(file_requirements.keys()).difference(set(['roles', 'collections']))
^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'str' object has no attribute 'keys'
```
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible --version
ansible [core 2.17.0.dev0] (i81713 310625996d) last updated 2023/10/04 11:27:33 (GMT -400)
config file = /Volumes/data/src/playbooks/ansible.cfg
configured module search path = ['/Users/akasurde/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Volumes/data/src/ansible/lib/ansible
ansible collection location = /Users/akasurde/.ansible/collections:/usr/share/ansible/collections
executable location = /Volumes/data/src/ansible/bin/ansible
python version = 3.11.3 (main, May 10 2023, 12:50:08) [Clang 14.0.3 (clang-1403.0.22.14.1)] (/Users/akasurde/.pyenv/versions/3.11.3/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Red Hat Enterprise Linux release 8.2 (Ootpa)
### Steps to Reproduce
Try installing a role/collection with the above requirements.yml file.
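For reference, the shapes the parser accepts are a v1 list of role dicts or a v2 dict keyed by `roles` and/or `collections`, e.g.:

```yaml
---
collections:
  - community.vmware
```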
### Expected Results
A clear error message about the invalid requirements file format, rather than an unhandled traceback.
### Actual Results
```console
ERROR! Unexpected Exception, this is probably a bug: 'str' object has no attribute 'keys'
the full traceback was:
Traceback (most recent call last):
File "/Volumes/data/src/ansible/lib/ansible/cli/__init__.py", line 659, in cli_executor
exit_code = cli.run()
^^^^^^^^^
File "/Volumes/data/src/ansible/bin/ansible-galaxy", line 749, in run
return context.CLIARGS['func']()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/data/src/ansible/bin/ansible-galaxy", line 120, in method_wrapper
return wrapped_method(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/data/src/ansible/bin/ansible-galaxy", line 1368, in execute_install
requirements = self._parse_requirements_file(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/data/src/ansible/bin/ansible-galaxy", line 840, in _parse_requirements_file
extra_keys = set(file_requirements.keys()).difference(set(['roles', 'collections']))
^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'str' object has no attribute 'keys'
```
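The traceback points at the dict-handling branch of `_parse_requirements_file`: a bare scalar such as `community.vmware` is parsed by YAML into a plain string, which passes the `None` check and only fails once `.keys()` is called. A minimal guard (a sketch only; the linked fix may differ in detail) would validate the parsed type up front:

```python
if file_requirements is None:
    raise AnsibleError("No requirements found in file '%s'" % to_native(requirements_file))

# reject scalars (str, int, ...) before the list/dict handling below
if not isinstance(file_requirements, (list, dict)):
    raise AnsibleError(
        "Expecting requirements yaml to be a list or dictionary but got %s"
        % type(file_requirements).__name__
    )
```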
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81901
|
https://github.com/ansible/ansible/pull/81917
|
976067c15fea8c416fc41d264a221535c6f38872
|
8a5ccc9d63ab528b579c14c4519c70c6838c7d6c
| 2023-10-04T19:38:12Z |
python
| 2023-10-05T19:03:01Z |
test/units/cli/test_galaxy.py
|
# -*- coding: utf-8 -*-
# (c) 2016, Adrian Likins <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import annotations
import contextlib
import ansible
from io import BytesIO
import json
import os
import pytest
import shutil
import stat
import tarfile
import tempfile
import yaml
import ansible.constants as C
from ansible import context
from ansible.cli.galaxy import GalaxyCLI
from ansible.galaxy import collection
from ansible.galaxy.api import GalaxyAPI
from ansible.errors import AnsibleError
from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text
from ansible.utils import context_objects as co
from ansible.utils.display import Display
from units.compat import unittest
from unittest.mock import patch, MagicMock
@pytest.fixture(autouse='function')
def reset_cli_args():
co.GlobalCLIArgs._Singleton__instance = None
yield
co.GlobalCLIArgs._Singleton__instance = None
class TestGalaxy(unittest.TestCase):
@classmethod
def setUpClass(cls):
'''creating prerequisites for installing a role; setUpClass occurs ONCE whereas setUp occurs with every method tested.'''
# class data for easy viewing: role_dir, role_tar, role_name, role_req, role_path
cls.temp_dir = tempfile.mkdtemp(prefix='ansible-test_galaxy-')
os.chdir(cls.temp_dir)
shutil.rmtree("./delete_me", ignore_errors=True)
# creating framework for a role
gc = GalaxyCLI(args=["ansible-galaxy", "init", "--offline", "delete_me"])
gc.run()
cls.role_dir = "./delete_me"
cls.role_name = "delete_me"
# making a temp dir for role installation
cls.role_path = os.path.join(tempfile.mkdtemp(), "roles")
os.makedirs(cls.role_path)
# creating a tar file name for class data
cls.role_tar = './delete_me.tar.gz'
cls.makeTar(cls.role_tar, cls.role_dir)
# creating a temp file with installation requirements
cls.role_req = './delete_me_requirements.yml'
with open(cls.role_req, "w") as fd:
fd.write("- 'src': '%s'\n 'name': '%s'\n 'path': '%s'" % (cls.role_tar, cls.role_name, cls.role_path))
@classmethod
def makeTar(cls, output_file, source_dir):
''' used for making a tarfile from a role directory '''
# adding directory into a tar file
with tarfile.open(output_file, "w:gz") as tar:
tar.add(source_dir, arcname=os.path.basename(source_dir))
@classmethod
def tearDownClass(cls):
'''After tests are finished removes things created in setUpClass'''
# deleting the temp role directory
shutil.rmtree(cls.role_dir, ignore_errors=True)
with contextlib.suppress(FileNotFoundError):
os.remove(cls.role_req)
with contextlib.suppress(FileNotFoundError):
os.remove(cls.role_tar)
shutil.rmtree(cls.role_path, ignore_errors=True)
os.chdir('/')
shutil.rmtree(cls.temp_dir, ignore_errors=True)
def setUp(self):
# Reset the stored command line args
co.GlobalCLIArgs._Singleton__instance = None
self.default_args = ['ansible-galaxy']
def tearDown(self):
# Reset the stored command line args
co.GlobalCLIArgs._Singleton__instance = None
def test_init(self):
galaxy_cli = GalaxyCLI(args=self.default_args)
self.assertTrue(isinstance(galaxy_cli, GalaxyCLI))
def test_display_min(self):
gc = GalaxyCLI(args=self.default_args)
role_info = {'name': 'some_role_name'}
display_result = gc._display_role_info(role_info)
self.assertTrue(display_result.find('some_role_name') > -1)
def test_display_galaxy_info(self):
gc = GalaxyCLI(args=self.default_args)
galaxy_info = {}
role_info = {'name': 'some_role_name',
'galaxy_info': galaxy_info}
display_result = gc._display_role_info(role_info)
self.assertNotEqual(display_result.find('\n\tgalaxy_info:'), -1, 'Expected galaxy_info to be indented once')
def test_run(self):
''' verifies that the GalaxyCLI object's api is created and that execute() is called. '''
gc = GalaxyCLI(args=["ansible-galaxy", "install", "--ignore-errors", "imaginary_role"])
gc.parse()
with patch.object(ansible.cli.CLI, "run", return_value=None) as mock_run:
gc.run()
# testing
self.assertIsInstance(gc.galaxy, ansible.galaxy.Galaxy)
self.assertEqual(mock_run.call_count, 1)
self.assertTrue(isinstance(gc.api, ansible.galaxy.api.GalaxyAPI))
def test_execute_remove(self):
# installing role
gc = GalaxyCLI(args=["ansible-galaxy", "install", "-p", self.role_path, "-r", self.role_req, '--force'])
gc.run()
# location where the role was installed
role_file = os.path.join(self.role_path, self.role_name)
# removing role
# Have to reset the arguments in the context object manually since we're doing the
# equivalent of running the command line program twice
co.GlobalCLIArgs._Singleton__instance = None
gc = GalaxyCLI(args=["ansible-galaxy", "remove", role_file, self.role_name])
gc.run()
# testing role was removed
removed_role = not os.path.exists(role_file)
self.assertTrue(removed_role)
def test_exit_without_ignore_without_flag(self):
''' tests that GalaxyCLI exits with the error specified if the --ignore-errors flag is not used '''
gc = GalaxyCLI(args=["ansible-galaxy", "install", "--server=None", "fake_role_name"])
with patch.object(ansible.utils.display.Display, "display", return_value=None) as mocked_display:
# testing that error expected is raised
self.assertRaises(AnsibleError, gc.run)
assert mocked_display.call_count == 2
assert mocked_display.mock_calls[0].args[0] == "Starting galaxy role install process"
assert "fake_role_name was NOT installed successfully" in mocked_display.mock_calls[1].args[0]
def test_exit_without_ignore_with_flag(self):
''' tests that GalaxyCLI exits without the error specified if the --ignore-errors flag is used '''
# testing with --ignore-errors flag
gc = GalaxyCLI(args=["ansible-galaxy", "install", "--server=None", "fake_role_name", "--ignore-errors"])
with patch.object(ansible.utils.display.Display, "display", return_value=None) as mocked_display:
gc.run()
assert mocked_display.call_count == 2
assert mocked_display.mock_calls[0].args[0] == "Starting galaxy role install process"
assert "fake_role_name was NOT installed successfully" in mocked_display.mock_calls[1].args[0]
def test_parse_no_action(self):
''' testing the options parser when no action is given '''
gc = GalaxyCLI(args=["ansible-galaxy", ""])
self.assertRaises(SystemExit, gc.parse)
def test_parse_invalid_action(self):
''' testing the options parser when an invalid action is given '''
gc = GalaxyCLI(args=["ansible-galaxy", "NOT_ACTION"])
self.assertRaises(SystemExit, gc.parse)
def test_parse_delete(self):
''' testing the options parser when the action 'delete' is given '''
gc = GalaxyCLI(args=["ansible-galaxy", "delete", "foo", "bar"])
gc.parse()
self.assertEqual(context.CLIARGS['verbosity'], 0)
def test_parse_import(self):
''' testing the options parser when the action 'import' is given '''
gc = GalaxyCLI(args=["ansible-galaxy", "import", "foo", "bar"])
gc.parse()
self.assertEqual(context.CLIARGS['wait'], True)
self.assertEqual(context.CLIARGS['reference'], None)
self.assertEqual(context.CLIARGS['check_status'], False)
self.assertEqual(context.CLIARGS['verbosity'], 0)
def test_parse_info(self):
''' testing the options parser when the action 'info' is given '''
gc = GalaxyCLI(args=["ansible-galaxy", "info", "foo", "bar"])
gc.parse()
self.assertEqual(context.CLIARGS['offline'], False)
def test_parse_init(self):
''' testing the options parser when the action 'init' is given '''
gc = GalaxyCLI(args=["ansible-galaxy", "init", "foo"])
gc.parse()
self.assertEqual(context.CLIARGS['offline'], False)
self.assertEqual(context.CLIARGS['force'], False)
def test_parse_install(self):
''' testing the options parser when the action 'install' is given '''
gc = GalaxyCLI(args=["ansible-galaxy", "install"])
gc.parse()
self.assertEqual(context.CLIARGS['ignore_errors'], False)
self.assertEqual(context.CLIARGS['no_deps'], False)
self.assertEqual(context.CLIARGS['requirements'], None)
self.assertEqual(context.CLIARGS['force'], False)
def test_parse_list(self):
''' testing the options parser when the action 'list' is given '''
gc = GalaxyCLI(args=["ansible-galaxy", "list"])
gc.parse()
self.assertEqual(context.CLIARGS['verbosity'], 0)
def test_parse_remove(self):
''' testing the options parser when the action 'remove' is given '''
gc = GalaxyCLI(args=["ansible-galaxy", "remove", "foo"])
gc.parse()
self.assertEqual(context.CLIARGS['verbosity'], 0)
def test_parse_search(self):
''' testing the options parser when the action 'search' is given '''
gc = GalaxyCLI(args=["ansible-galaxy", "search"])
gc.parse()
self.assertEqual(context.CLIARGS['platforms'], None)
self.assertEqual(context.CLIARGS['galaxy_tags'], None)
self.assertEqual(context.CLIARGS['author'], None)
def test_parse_setup(self):
''' testing the options parser when the action 'setup' is given '''
gc = GalaxyCLI(args=["ansible-galaxy", "setup", "source", "github_user", "github_repo", "secret"])
gc.parse()
self.assertEqual(context.CLIARGS['verbosity'], 0)
self.assertEqual(context.CLIARGS['remove_id'], None)
self.assertEqual(context.CLIARGS['setup_list'], False)
class ValidRoleTests(object):
expected_role_dirs = ('defaults', 'files', 'handlers', 'meta', 'tasks', 'templates', 'vars', 'tests')
@classmethod
def setUpRole(cls, role_name, galaxy_args=None, skeleton_path=None, use_explicit_type=False):
if galaxy_args is None:
galaxy_args = []
if skeleton_path is not None:
cls.role_skeleton_path = skeleton_path
galaxy_args += ['--role-skeleton', skeleton_path]
# Make temp directory for testing
cls.test_dir = tempfile.mkdtemp()
cls.role_dir = os.path.join(cls.test_dir, role_name)
cls.role_name = role_name
# create role using default skeleton
args = ['ansible-galaxy']
if use_explicit_type:
args += ['role']
args += ['init', '-c', '--offline'] + galaxy_args + ['--init-path', cls.test_dir, cls.role_name]
gc = GalaxyCLI(args=args)
gc.run()
cls.gc = gc
if skeleton_path is None:
cls.role_skeleton_path = gc.galaxy.default_role_skeleton_path
@classmethod
def tearDownRole(cls):
shutil.rmtree(cls.test_dir, ignore_errors=True)
def test_metadata(self):
with open(os.path.join(self.role_dir, 'meta', 'main.yml'), 'r') as mf:
metadata = yaml.safe_load(mf)
self.assertIn('galaxy_info', metadata, msg='unable to find galaxy_info in metadata')
self.assertIn('dependencies', metadata, msg='unable to find dependencies in metadata')
def test_readme(self):
readme_path = os.path.join(self.role_dir, 'README.md')
self.assertTrue(os.path.exists(readme_path), msg='Readme doesn\'t exist')
def test_main_ymls(self):
need_main_ymls = set(self.expected_role_dirs) - set(['meta', 'tests', 'files', 'templates'])
for d in need_main_ymls:
main_yml = os.path.join(self.role_dir, d, 'main.yml')
self.assertTrue(os.path.exists(main_yml))
expected_string = "---\n# {0} file for {1}".format(d, self.role_name)
with open(main_yml, 'r') as f:
self.assertEqual(expected_string, f.read().strip())
def test_role_dirs(self):
for d in self.expected_role_dirs:
self.assertTrue(os.path.isdir(os.path.join(self.role_dir, d)), msg="Expected role subdirectory {0} doesn't exist".format(d))
def test_readme_contents(self):
with open(os.path.join(self.role_dir, 'README.md'), 'r') as readme:
contents = readme.read()
with open(os.path.join(self.role_skeleton_path, 'README.md'), 'r') as f:
expected_contents = f.read()
self.assertEqual(expected_contents, contents, msg='README.md does not match expected')
def test_test_yml(self):
with open(os.path.join(self.role_dir, 'tests', 'test.yml'), 'r') as f:
test_playbook = yaml.safe_load(f)
print(test_playbook)
self.assertEqual(len(test_playbook), 1)
self.assertEqual(test_playbook[0]['hosts'], 'localhost')
self.assertEqual(test_playbook[0]['remote_user'], 'root')
self.assertListEqual(test_playbook[0]['roles'], [self.role_name], msg='The list of roles included in the test play doesn\'t match')
class TestGalaxyInitDefault(unittest.TestCase, ValidRoleTests):
@classmethod
def setUpClass(cls):
cls.setUpRole(role_name='delete_me')
@classmethod
def tearDownClass(cls):
cls.tearDownRole()
def test_metadata_contents(self):
with open(os.path.join(self.role_dir, 'meta', 'main.yml'), 'r') as mf:
metadata = yaml.safe_load(mf)
self.assertEqual(metadata.get('galaxy_info', dict()).get('author'), 'your name', msg='author was not set properly in metadata')
class TestGalaxyInitAPB(unittest.TestCase, ValidRoleTests):
@classmethod
def setUpClass(cls):
cls.setUpRole('delete_me_apb', galaxy_args=['--type=apb'])
@classmethod
def tearDownClass(cls):
cls.tearDownRole()
def test_metadata_apb_tag(self):
with open(os.path.join(self.role_dir, 'meta', 'main.yml'), 'r') as mf:
metadata = yaml.safe_load(mf)
self.assertIn('apb', metadata.get('galaxy_info', dict()).get('galaxy_tags', []), msg='apb tag not set in role metadata')
def test_metadata_contents(self):
with open(os.path.join(self.role_dir, 'meta', 'main.yml'), 'r') as mf:
metadata = yaml.safe_load(mf)
self.assertEqual(metadata.get('galaxy_info', dict()).get('author'), 'your name', msg='author was not set properly in metadata')
def test_apb_yml(self):
self.assertTrue(os.path.exists(os.path.join(self.role_dir, 'apb.yml')), msg='apb.yml was not created')
def test_test_yml(self):
with open(os.path.join(self.role_dir, 'tests', 'test.yml'), 'r') as f:
test_playbook = yaml.safe_load(f)
print(test_playbook)
self.assertEqual(len(test_playbook), 1)
self.assertEqual(test_playbook[0]['hosts'], 'localhost')
self.assertFalse(test_playbook[0]['gather_facts'])
self.assertEqual(test_playbook[0]['connection'], 'local')
self.assertIsNone(test_playbook[0]['tasks'], msg='We\'re expecting an unset list of tasks in test.yml')
class TestGalaxyInitContainer(unittest.TestCase, ValidRoleTests):
@classmethod
def setUpClass(cls):
cls.setUpRole('delete_me_container', galaxy_args=['--type=container'])
@classmethod
def tearDownClass(cls):
cls.tearDownRole()
def test_metadata_container_tag(self):
with open(os.path.join(self.role_dir, 'meta', 'main.yml'), 'r') as mf:
metadata = yaml.safe_load(mf)
self.assertIn('container', metadata.get('galaxy_info', dict()).get('galaxy_tags', []), msg='container tag not set in role metadata')
def test_metadata_contents(self):
with open(os.path.join(self.role_dir, 'meta', 'main.yml'), 'r') as mf:
metadata = yaml.safe_load(mf)
self.assertEqual(metadata.get('galaxy_info', dict()).get('author'), 'your name', msg='author was not set properly in metadata')
def test_meta_container_yml(self):
self.assertTrue(os.path.exists(os.path.join(self.role_dir, 'meta', 'container.yml')), msg='container.yml was not created')
def test_test_yml(self):
with open(os.path.join(self.role_dir, 'tests', 'test.yml'), 'r') as f:
test_playbook = yaml.safe_load(f)
print(test_playbook)
self.assertEqual(len(test_playbook), 1)
self.assertEqual(test_playbook[0]['hosts'], 'localhost')
self.assertFalse(test_playbook[0]['gather_facts'])
self.assertEqual(test_playbook[0]['connection'], 'local')
self.assertIsNone(test_playbook[0]['tasks'], msg='We\'re expecting an unset list of tasks in test.yml')
class TestGalaxyInitSkeleton(unittest.TestCase, ValidRoleTests):
@classmethod
def setUpClass(cls):
role_skeleton_path = os.path.join(os.path.split(__file__)[0], 'test_data', 'role_skeleton')
cls.setUpRole('delete_me_skeleton', skeleton_path=role_skeleton_path, use_explicit_type=True)
@classmethod
def tearDownClass(cls):
cls.tearDownRole()
def test_empty_files_dir(self):
files_dir = os.path.join(self.role_dir, 'files')
self.assertTrue(os.path.isdir(files_dir))
self.assertListEqual(os.listdir(files_dir), [], msg='we expect the files directory to be empty, is ignore working?')
def test_template_ignore_jinja(self):
test_conf_j2 = os.path.join(self.role_dir, 'templates', 'test.conf.j2')
self.assertTrue(os.path.exists(test_conf_j2), msg="The test.conf.j2 template doesn't seem to exist, is it being rendered as test.conf?")
with open(test_conf_j2, 'r') as f:
contents = f.read()
expected_contents = '[defaults]\ntest_key = {{ test_variable }}'
self.assertEqual(expected_contents, contents.strip(), msg="test.conf.j2 doesn't contain what it should, is it being rendered?")
def test_template_ignore_jinja_subfolder(self):
test_conf_j2 = os.path.join(self.role_dir, 'templates', 'subfolder', 'test.conf.j2')
self.assertTrue(os.path.exists(test_conf_j2), msg="The test.conf.j2 template doesn't seem to exist, is it being rendered as test.conf?")
with open(test_conf_j2, 'r') as f:
contents = f.read()
expected_contents = '[defaults]\ntest_key = {{ test_variable }}'
self.assertEqual(expected_contents, contents.strip(), msg="test.conf.j2 doesn't contain what it should, is it being rendered?")
def test_template_ignore_similar_folder(self):
self.assertTrue(os.path.exists(os.path.join(self.role_dir, 'templates_extra', 'templates.txt')))
def test_skeleton_option(self):
self.assertEqual(self.role_skeleton_path, context.CLIARGS['role_skeleton'], msg='Skeleton path was not parsed properly from the command line')
@pytest.mark.parametrize('cli_args, expected', [
(['ansible-galaxy', 'collection', 'init', 'abc._def'], 0),
(['ansible-galaxy', 'collection', 'init', 'abc._def', '-vvv'], 3),
(['ansible-galaxy', 'collection', 'init', 'abc._def', '-vv'], 2),
])
def test_verbosity_arguments(cli_args, expected, monkeypatch):
# Mock out the functions so we don't actually execute anything
for func_name in [f for f in dir(GalaxyCLI) if f.startswith("execute_")]:
monkeypatch.setattr(GalaxyCLI, func_name, MagicMock())
cli = GalaxyCLI(args=cli_args)
cli.run()
assert context.CLIARGS['verbosity'] == expected
@pytest.fixture()
def collection_skeleton(request, tmp_path_factory):
name, skeleton_path = request.param
galaxy_args = ['ansible-galaxy', 'collection', 'init', '-c']
if skeleton_path is not None:
galaxy_args += ['--collection-skeleton', skeleton_path]
test_dir = to_text(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections'))
galaxy_args += ['--init-path', test_dir, name]
GalaxyCLI(args=galaxy_args).run()
namespace_name, collection_name = name.split('.', 1)
collection_dir = os.path.join(test_dir, namespace_name, collection_name)
return collection_dir
@pytest.mark.parametrize('collection_skeleton', [
('ansible_test.my_collection', None),
], indirect=True)
def test_collection_default(collection_skeleton):
meta_path = os.path.join(collection_skeleton, 'galaxy.yml')
with open(meta_path, 'r') as galaxy_meta:
metadata = yaml.safe_load(galaxy_meta)
assert metadata['namespace'] == 'ansible_test'
assert metadata['name'] == 'my_collection'
assert metadata['authors'] == ['your name <[email protected]>']
assert metadata['readme'] == 'README.md'
assert metadata['version'] == '1.0.0'
assert metadata['description'] == 'your collection description'
assert metadata['license'] == ['GPL-2.0-or-later']
assert metadata['tags'] == []
assert metadata['dependencies'] == {}
assert metadata['documentation'] == 'http://docs.example.com'
assert metadata['repository'] == 'http://example.com/repository'
assert metadata['homepage'] == 'http://example.com'
assert metadata['issues'] == 'http://example.com/issue/tracker'
for d in ['docs', 'plugins', 'roles']:
assert os.path.isdir(os.path.join(collection_skeleton, d)), \
"Expected collection subdirectory {0} doesn't exist".format(d)
@pytest.mark.parametrize('collection_skeleton', [
('ansible_test.delete_me_skeleton', os.path.join(os.path.split(__file__)[0], 'test_data', 'collection_skeleton')),
], indirect=True)
def test_collection_skeleton(collection_skeleton):
meta_path = os.path.join(collection_skeleton, 'galaxy.yml')
with open(meta_path, 'r') as galaxy_meta:
metadata = yaml.safe_load(galaxy_meta)
assert metadata['namespace'] == 'ansible_test'
assert metadata['name'] == 'delete_me_skeleton'
assert metadata['authors'] == ['Ansible Cow <[email protected]>', 'Tu Cow <[email protected]>']
assert metadata['version'] == '0.1.0'
assert metadata['readme'] == 'README.md'
assert len(metadata) == 5
assert os.path.exists(os.path.join(collection_skeleton, 'README.md'))
# Test empty directories exist and are empty
for empty_dir in ['plugins/action', 'plugins/filter', 'plugins/inventory', 'plugins/lookup',
'plugins/module_utils', 'plugins/modules']:
assert os.listdir(os.path.join(collection_skeleton, empty_dir)) == []
# Test files that don't end with .j2 were not templated
doc_file = os.path.join(collection_skeleton, 'docs', 'My Collection.md')
with open(doc_file, 'r') as f:
doc_contents = f.read()
assert doc_contents.strip() == 'Welcome to my test collection doc for {{ namespace }}.'
# Test files that end with .j2 but are in the templates directory were not templated
for template_dir in ['playbooks/templates', 'playbooks/templates/subfolder',
'roles/common/templates', 'roles/common/templates/subfolder']:
test_conf_j2 = os.path.join(collection_skeleton, template_dir, 'test.conf.j2')
assert os.path.exists(test_conf_j2)
with open(test_conf_j2, 'r') as f:
contents = f.read()
expected_contents = '[defaults]\ntest_key = {{ test_variable }}'
assert expected_contents == contents.strip()
@pytest.fixture()
def collection_artifact(collection_skeleton, tmp_path_factory):
''' Creates a collection artifact tarball that is ready to be published and installed '''
output_dir = to_text(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Output'))
# Create a file with +x in the collection so we can test the permissions
execute_path = os.path.join(collection_skeleton, 'runme.sh')
with open(execute_path, mode='wb') as fd:
fd.write(b"echo hi")
# S_ISUID should not be present on extraction.
os.chmod(execute_path, os.stat(execute_path).st_mode | stat.S_ISUID | stat.S_IEXEC)
# Because we call GalaxyCLI in collection_skeleton we need to reset the singleton back to None so it uses the new
# args, we reset the original args once it is done.
orig_cli_args = co.GlobalCLIArgs._Singleton__instance
try:
co.GlobalCLIArgs._Singleton__instance = None
galaxy_args = ['ansible-galaxy', 'collection', 'build', collection_skeleton, '--output-path', output_dir]
gc = GalaxyCLI(args=galaxy_args)
gc.run()
yield output_dir
finally:
co.GlobalCLIArgs._Singleton__instance = orig_cli_args
def test_invalid_skeleton_path():
expected = "- the skeleton path '/fake/path' does not exist, cannot init collection"
gc = GalaxyCLI(args=['ansible-galaxy', 'collection', 'init', 'my.collection', '--collection-skeleton',
'/fake/path'])
with pytest.raises(AnsibleError, match=expected):
gc.run()
@pytest.mark.parametrize("name", [
"",
"invalid",
"hypen-ns.collection",
"ns.hyphen-collection",
"ns.collection.weird",
])
def test_invalid_collection_name_init(name):
expected = "Invalid collection name '%s', name must be in the format <namespace>.<collection>" % name
gc = GalaxyCLI(args=['ansible-galaxy', 'collection', 'init', name])
with pytest.raises(AnsibleError, match=expected):
gc.run()
@pytest.mark.parametrize("name, expected", [
("", ""),
("invalid", "invalid"),
("invalid:1.0.0", "invalid"),
("hypen-ns.collection", "hypen-ns.collection"),
("ns.hyphen-collection", "ns.hyphen-collection"),
("ns.collection.weird", "ns.collection.weird"),
])
def test_invalid_collection_name_install(name, expected, tmp_path_factory):
install_path = to_text(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections'))
# FIXME: we should add the collection name in the error message
# Used to be: expected = "Invalid collection name '%s', name must be in the format <namespace>.<collection>" % expected
expected = "Neither the collection requirement entry key 'name', nor 'source' point to a concrete resolvable collection artifact. "
expected += r"Also 'name' is not an FQCN\. A valid collection name must be in the format <namespace>\.<collection>\. "
expected += r"Please make sure that the namespace and the collection name contain characters from \[a\-zA\-Z0\-9_\] only\."
gc = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', name, '-p', os.path.join(install_path, 'install')])
with pytest.raises(AnsibleError, match=expected):
gc.run()
@pytest.mark.parametrize('collection_skeleton', [
('ansible_test.build_collection', None),
], indirect=True)
def test_collection_build(collection_artifact):
tar_path = os.path.join(collection_artifact, 'ansible_test-build_collection-1.0.0.tar.gz')
assert tarfile.is_tarfile(tar_path)
with tarfile.open(tar_path, mode='r') as tar:
tar_members = tar.getmembers()
valid_files = ['MANIFEST.json', 'FILES.json', 'roles', 'docs', 'plugins', 'plugins/README.md', 'README.md',
'runme.sh', 'meta', 'meta/runtime.yml']
assert len(tar_members) == len(valid_files)
# Verify the uid and gid is 0 and the correct perms are set
for member in tar_members:
assert member.name in valid_files
assert member.gid == 0
assert member.gname == ''
assert member.uid == 0
assert member.uname == ''
if member.isdir() or member.name == 'runme.sh':
assert member.mode == 0o0755
else:
assert member.mode == 0o0644
manifest_file = tar.extractfile(tar_members[0])
try:
manifest = json.loads(to_text(manifest_file.read()))
finally:
manifest_file.close()
coll_info = manifest['collection_info']
file_manifest = manifest['file_manifest_file']
assert manifest['format'] == 1
assert len(manifest.keys()) == 3
assert coll_info['namespace'] == 'ansible_test'
assert coll_info['name'] == 'build_collection'
assert coll_info['version'] == '1.0.0'
assert coll_info['authors'] == ['your name <[email protected]>']
assert coll_info['readme'] == 'README.md'
assert coll_info['tags'] == []
assert coll_info['description'] == 'your collection description'
assert coll_info['license'] == ['GPL-2.0-or-later']
assert coll_info['license_file'] is None
assert coll_info['dependencies'] == {}
assert coll_info['repository'] == 'http://example.com/repository'
assert coll_info['documentation'] == 'http://docs.example.com'
assert coll_info['homepage'] == 'http://example.com'
assert coll_info['issues'] == 'http://example.com/issue/tracker'
assert len(coll_info.keys()) == 14
assert file_manifest['name'] == 'FILES.json'
assert file_manifest['ftype'] == 'file'
assert file_manifest['chksum_type'] == 'sha256'
assert file_manifest['chksum_sha256'] is not None # Order of keys makes it hard to verify the checksum
assert file_manifest['format'] == 1
assert len(file_manifest.keys()) == 5
files_file = tar.extractfile(tar_members[1])
try:
files = json.loads(to_text(files_file.read()))
finally:
files_file.close()
assert len(files['files']) == 9
assert files['format'] == 1
assert len(files.keys()) == 2
valid_files_entries = ['.', 'roles', 'docs', 'plugins', 'plugins/README.md', 'README.md', 'runme.sh', 'meta', 'meta/runtime.yml']
for file_entry in files['files']:
assert file_entry['name'] in valid_files_entries
assert file_entry['format'] == 1
if file_entry['name'] in ['plugins/README.md', 'runme.sh', 'meta/runtime.yml']:
assert file_entry['ftype'] == 'file'
assert file_entry['chksum_type'] == 'sha256'
# Can't test the actual checksum as the html link changes based on the version or the file contents
# don't matter
assert file_entry['chksum_sha256'] is not None
elif file_entry['name'] == 'README.md':
assert file_entry['ftype'] == 'file'
assert file_entry['chksum_type'] == 'sha256'
assert file_entry['chksum_sha256'] == '6d8b5f9b5d53d346a8cd7638a0ec26e75e8d9773d952162779a49d25da6ef4f5'
else:
assert file_entry['ftype'] == 'dir'
assert file_entry['chksum_type'] is None
assert file_entry['chksum_sha256'] is None
assert len(file_entry.keys()) == 5
@pytest.fixture()
def collection_install(reset_cli_args, tmp_path_factory, monkeypatch):
mock_install = MagicMock()
monkeypatch.setattr(ansible.cli.galaxy, 'install_collections', mock_install)
mock_warning = MagicMock()
monkeypatch.setattr(ansible.utils.display.Display, 'warning', mock_warning)
output_dir = to_text((tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Output')))
yield mock_install, mock_warning, output_dir
def test_collection_install_with_names(collection_install):
mock_install, mock_warning, output_dir = collection_install
galaxy_args = ['ansible-galaxy', 'collection', 'install', 'namespace.collection', 'namespace2.collection:1.0.1',
'--collections-path', output_dir]
GalaxyCLI(args=galaxy_args).run()
collection_path = os.path.join(output_dir, 'ansible_collections')
assert os.path.isdir(collection_path)
assert mock_warning.call_count == 1
assert "The specified collections path '%s' is not part of the configured Ansible collections path" % output_dir \
in mock_warning.call_args[0][0]
assert mock_install.call_count == 1
requirements = [('%s.%s' % (r.namespace, r.name), r.ver, r.src, r.type,) for r in mock_install.call_args[0][0]]
assert requirements == [('namespace.collection', '*', None, 'galaxy'),
('namespace2.collection', '1.0.1', None, 'galaxy')]
assert mock_install.call_args[0][1] == collection_path
assert len(mock_install.call_args[0][2]) == 1
assert mock_install.call_args[0][2][0].api_server == 'https://galaxy.ansible.com'
assert mock_install.call_args[0][2][0].validate_certs is True
assert mock_install.call_args[0][3] is False # ignore_errors
assert mock_install.call_args[0][4] is False # no_deps
assert mock_install.call_args[0][5] is False # force
assert mock_install.call_args[0][6] is False # force_deps
def test_collection_install_with_requirements_file(collection_install):
mock_install, mock_warning, output_dir = collection_install
requirements_file = os.path.join(output_dir, 'requirements.yml')
with open(requirements_file, 'wb') as req_obj:
req_obj.write(b'''---
collections:
- namespace.coll
- name: namespace2.coll
version: '>2.0.1'
''')
galaxy_args = ['ansible-galaxy', 'collection', 'install', '--requirements-file', requirements_file,
'--collections-path', output_dir]
GalaxyCLI(args=galaxy_args).run()
collection_path = os.path.join(output_dir, 'ansible_collections')
assert os.path.isdir(collection_path)
assert mock_warning.call_count == 1
assert "The specified collections path '%s' is not part of the configured Ansible collections path" % output_dir \
in mock_warning.call_args[0][0]
assert mock_install.call_count == 1
requirements = [('%s.%s' % (r.namespace, r.name), r.ver, r.src, r.type,) for r in mock_install.call_args[0][0]]
assert requirements == [('namespace.coll', '*', None, 'galaxy'),
('namespace2.coll', '>2.0.1', None, 'galaxy')]
assert mock_install.call_args[0][1] == collection_path
assert mock_install.call_args[0][2][0].api_server == 'https://galaxy.ansible.com'
assert mock_install.call_args[0][2][0].validate_certs is True
assert mock_install.call_args[0][3] is False # ignore_errors
assert mock_install.call_args[0][4] is False # no_deps
assert mock_install.call_args[0][5] is False # force
assert mock_install.call_args[0][6] is False # force_deps
def test_collection_install_with_relative_path(collection_install, monkeypatch):
mock_install = collection_install[0]
mock_req = MagicMock()
mock_req.return_value = {'collections': [('namespace.coll', '*', None, None)], 'roles': []}
monkeypatch.setattr(ansible.cli.galaxy.GalaxyCLI, '_parse_requirements_file', mock_req)
monkeypatch.setattr(os, 'makedirs', MagicMock())
requirements_file = './requirements.myl'
collections_path = './ansible_collections'
galaxy_args = ['ansible-galaxy', 'collection', 'install', '--requirements-file', requirements_file,
'--collections-path', collections_path]
GalaxyCLI(args=galaxy_args).run()
assert mock_install.call_count == 1
assert mock_install.call_args[0][0] == [('namespace.coll', '*', None, None)]
assert mock_install.call_args[0][1] == os.path.abspath(collections_path)
assert len(mock_install.call_args[0][2]) == 1
assert mock_install.call_args[0][2][0].api_server == 'https://galaxy.ansible.com'
assert mock_install.call_args[0][2][0].validate_certs is True
assert mock_install.call_args[0][3] is False # ignore_errors
assert mock_install.call_args[0][4] is False # no_deps
assert mock_install.call_args[0][5] is False # force
assert mock_install.call_args[0][6] is False # force_deps
assert mock_req.call_count == 1
assert mock_req.call_args[0][0] == os.path.abspath(requirements_file)
def test_collection_install_with_unexpanded_path(collection_install, monkeypatch):
mock_install = collection_install[0]
mock_req = MagicMock()
mock_req.return_value = {'collections': [('namespace.coll', '*', None, None)], 'roles': []}
monkeypatch.setattr(ansible.cli.galaxy.GalaxyCLI, '_parse_requirements_file', mock_req)
monkeypatch.setattr(os, 'makedirs', MagicMock())
requirements_file = '~/requirements.myl'
collections_path = '~/ansible_collections'
galaxy_args = ['ansible-galaxy', 'collection', 'install', '--requirements-file', requirements_file,
'--collections-path', collections_path]
GalaxyCLI(args=galaxy_args).run()
assert mock_install.call_count == 1
assert mock_install.call_args[0][0] == [('namespace.coll', '*', None, None)]
assert mock_install.call_args[0][1] == os.path.expanduser(os.path.expandvars(collections_path))
assert len(mock_install.call_args[0][2]) == 1
assert mock_install.call_args[0][2][0].api_server == 'https://galaxy.ansible.com'
assert mock_install.call_args[0][2][0].validate_certs is True
assert mock_install.call_args[0][3] is False # ignore_errors
assert mock_install.call_args[0][4] is False # no_deps
assert mock_install.call_args[0][5] is False # force
assert mock_install.call_args[0][6] is False # force_deps
assert mock_req.call_count == 1
assert mock_req.call_args[0][0] == os.path.expanduser(os.path.expandvars(requirements_file))
def test_collection_install_in_collection_dir(collection_install, monkeypatch):
mock_install, mock_warning, output_dir = collection_install
collections_path = C.COLLECTIONS_PATHS[0]
galaxy_args = ['ansible-galaxy', 'collection', 'install', 'namespace.collection', 'namespace2.collection:1.0.1',
'--collections-path', collections_path]
GalaxyCLI(args=galaxy_args).run()
assert mock_warning.call_count == 0
assert mock_install.call_count == 1
requirements = [('%s.%s' % (r.namespace, r.name), r.ver, r.src, r.type,) for r in mock_install.call_args[0][0]]
assert requirements == [('namespace.collection', '*', None, 'galaxy'),
('namespace2.collection', '1.0.1', None, 'galaxy')]
assert mock_install.call_args[0][1] == os.path.join(collections_path, 'ansible_collections')
assert len(mock_install.call_args[0][2]) == 1
assert mock_install.call_args[0][2][0].api_server == 'https://galaxy.ansible.com'
assert mock_install.call_args[0][2][0].validate_certs is True
assert mock_install.call_args[0][3] is False # ignore_errors
assert mock_install.call_args[0][4] is False # no_deps
assert mock_install.call_args[0][5] is False # force
assert mock_install.call_args[0][6] is False # force_deps
def test_collection_install_with_url(monkeypatch, collection_install):
mock_install, dummy, output_dir = collection_install
mock_open = MagicMock(return_value=BytesIO())
monkeypatch.setattr(collection.concrete_artifact_manager, 'open_url', mock_open)
mock_metadata = MagicMock(return_value={'namespace': 'foo', 'name': 'bar', 'version': 'v1.0.0'})
monkeypatch.setattr(collection.concrete_artifact_manager, '_get_meta_from_tar', mock_metadata)
galaxy_args = ['ansible-galaxy', 'collection', 'install', 'https://foo/bar/foo-bar-v1.0.0.tar.gz',
'--collections-path', output_dir]
GalaxyCLI(args=galaxy_args).run()
collection_path = os.path.join(output_dir, 'ansible_collections')
assert os.path.isdir(collection_path)
assert mock_install.call_count == 1
requirements = [('%s.%s' % (r.namespace, r.name), r.ver, r.src, r.type,) for r in mock_install.call_args[0][0]]
assert requirements == [('foo.bar', 'v1.0.0', 'https://foo/bar/foo-bar-v1.0.0.tar.gz', 'url')]
assert mock_install.call_args[0][1] == collection_path
assert len(mock_install.call_args[0][2]) == 1
assert mock_install.call_args[0][2][0].api_server == 'https://galaxy.ansible.com'
assert mock_install.call_args[0][2][0].validate_certs is True
assert mock_install.call_args[0][3] is False # ignore_errors
assert mock_install.call_args[0][4] is False # no_deps
assert mock_install.call_args[0][5] is False # force
assert mock_install.call_args[0][6] is False # force_deps
def test_collection_install_name_and_requirements_fail(collection_install):
test_path = collection_install[2]
expected = 'The positional collection_name arg and --requirements-file are mutually exclusive.'
with pytest.raises(AnsibleError, match=expected):
GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection', '--collections-path',
test_path, '--requirements-file', test_path]).run()
def test_collection_install_no_name_and_requirements_fail(collection_install):
test_path = collection_install[2]
expected = 'You must specify a collection name or a requirements file.'
with pytest.raises(AnsibleError, match=expected):
GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', '--collections-path', test_path]).run()
def test_collection_install_path_with_ansible_collections(collection_install):
mock_install, mock_warning, output_dir = collection_install
collection_path = os.path.join(output_dir, 'ansible_collections')
galaxy_args = ['ansible-galaxy', 'collection', 'install', 'namespace.collection', 'namespace2.collection:1.0.1',
'--collections-path', collection_path]
GalaxyCLI(args=galaxy_args).run()
assert os.path.isdir(collection_path)
assert mock_warning.call_count == 1
assert "The specified collections path '%s' is not part of the configured Ansible collections path" \
% collection_path in mock_warning.call_args[0][0]
assert mock_install.call_count == 1
requirements = [('%s.%s' % (r.namespace, r.name), r.ver, r.src, r.type,) for r in mock_install.call_args[0][0]]
assert requirements == [('namespace.collection', '*', None, 'galaxy'),
('namespace2.collection', '1.0.1', None, 'galaxy')]
assert mock_install.call_args[0][1] == collection_path
assert len(mock_install.call_args[0][2]) == 1
assert mock_install.call_args[0][2][0].api_server == 'https://galaxy.ansible.com'
assert mock_install.call_args[0][2][0].validate_certs is True
assert mock_install.call_args[0][3] is False # ignore_errors
assert mock_install.call_args[0][4] is False # no_deps
assert mock_install.call_args[0][5] is False # force
assert mock_install.call_args[0][6] is False # force_deps
def test_collection_install_ignore_certs(collection_install):
mock_install, mock_warning, output_dir = collection_install
galaxy_args = ['ansible-galaxy', 'collection', 'install', 'namespace.collection', '--collections-path', output_dir,
'--ignore-certs']
GalaxyCLI(args=galaxy_args).run()
assert mock_install.call_args[0][3] is False
def test_collection_install_force(collection_install):
mock_install, mock_warning, output_dir = collection_install
galaxy_args = ['ansible-galaxy', 'collection', 'install', 'namespace.collection', '--collections-path', output_dir,
'--force']
GalaxyCLI(args=galaxy_args).run()
# mock_install args: collections, output_path, apis, ignore_errors, no_deps, force, force_deps
assert mock_install.call_args[0][5] is True
def test_collection_install_force_deps(collection_install):
mock_install, mock_warning, output_dir = collection_install
galaxy_args = ['ansible-galaxy', 'collection', 'install', 'namespace.collection', '--collections-path', output_dir,
'--force-with-deps']
GalaxyCLI(args=galaxy_args).run()
# mock_install args: collections, output_path, apis, ignore_errors, no_deps, force, force_deps
assert mock_install.call_args[0][6] is True
def test_collection_install_no_deps(collection_install):
mock_install, mock_warning, output_dir = collection_install
galaxy_args = ['ansible-galaxy', 'collection', 'install', 'namespace.collection', '--collections-path', output_dir,
'--no-deps']
GalaxyCLI(args=galaxy_args).run()
# mock_install args: collections, output_path, apis, ignore_errors, no_deps, force, force_deps
assert mock_install.call_args[0][4] is True
def test_collection_install_ignore(collection_install):
mock_install, mock_warning, output_dir = collection_install
galaxy_args = ['ansible-galaxy', 'collection', 'install', 'namespace.collection', '--collections-path', output_dir,
'--ignore-errors']
GalaxyCLI(args=galaxy_args).run()
# mock_install args: collections, output_path, apis, ignore_errors, no_deps, force, force_deps
assert mock_install.call_args[0][3] is True
def test_collection_install_custom_server(collection_install):
mock_install, mock_warning, output_dir = collection_install
galaxy_args = ['ansible-galaxy', 'collection', 'install', 'namespace.collection', '--collections-path', output_dir,
'--server', 'https://galaxy-dev.ansible.com']
GalaxyCLI(args=galaxy_args).run()
assert len(mock_install.call_args[0][2]) == 1
assert mock_install.call_args[0][2][0].api_server == 'https://galaxy-dev.ansible.com'
assert mock_install.call_args[0][2][0].validate_certs is True
@pytest.fixture()
def requirements_file(request, tmp_path_factory):
content = request.param
test_dir = to_text(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Requirements'))
requirements_file = os.path.join(test_dir, 'requirements.yml')
if content:
with open(requirements_file, 'wb') as req_obj:
req_obj.write(to_bytes(content))
yield requirements_file
@pytest.fixture()
def requirements_cli(monkeypatch):
monkeypatch.setattr(GalaxyCLI, 'execute_install', MagicMock())
cli = GalaxyCLI(args=['ansible-galaxy', 'install'])
cli.run()
return cli
@pytest.mark.parametrize('requirements_file', [None], indirect=True)
def test_parse_requirements_file_that_doesnt_exist(requirements_cli, requirements_file):
expected = "The requirements file '%s' does not exist." % to_native(requirements_file)
with pytest.raises(AnsibleError, match=expected):
requirements_cli._parse_requirements_file(requirements_file)
@pytest.mark.parametrize('requirements_file', ['not a valid yml file: hi: world'], indirect=True)
def test_parse_requirements_file_that_isnt_yaml(requirements_cli, requirements_file):
expected = "Failed to parse the requirements yml at '%s' with the following error" % to_native(requirements_file)
with pytest.raises(AnsibleError, match=expected):
requirements_cli._parse_requirements_file(requirements_file)
@pytest.mark.parametrize('requirements_file', [('''
# Older role based requirements.yml
- galaxy.role
- anotherrole
''')], indirect=True)
def test_parse_requirements_in_older_format_illegal(requirements_cli, requirements_file):
expected = "Expecting requirements file to be a dict with the key 'collections' that contains a list of " \
"collections to install"
with pytest.raises(AnsibleError, match=expected):
requirements_cli._parse_requirements_file(requirements_file, allow_old_format=False)
@pytest.mark.parametrize('requirements_file', ['''
collections:
- version: 1.0.0
'''], indirect=True)
def test_parse_requirements_without_mandatory_name_key(requirements_cli, requirements_file):
# Used to be "Collections requirement entry should contain the key name."
# Should we check that either source or name is provided before using the dep resolver?
expected = "Neither the collection requirement entry key 'name', nor 'source' point to a concrete resolvable collection artifact. "
expected += r"Also 'name' is not an FQCN\. A valid collection name must be in the format <namespace>\.<collection>\. "
expected += r"Please make sure that the namespace and the collection name contain characters from \[a\-zA\-Z0\-9_\] only\."
with pytest.raises(AnsibleError, match=expected):
requirements_cli._parse_requirements_file(requirements_file)
@pytest.mark.parametrize('requirements_file', [('''
collections:
- namespace.collection1
- namespace.collection2
'''), ('''
collections:
- name: namespace.collection1
- name: namespace.collection2
''')], indirect=True)
def test_parse_requirements(requirements_cli, requirements_file):
expected = {
'roles': [],
'collections': [('namespace.collection1', '*', None, 'galaxy'), ('namespace.collection2', '*', None, 'galaxy')]
}
actual = requirements_cli._parse_requirements_file(requirements_file)
actual['collections'] = [('%s.%s' % (r.namespace, r.name), r.ver, r.src, r.type,) for r in actual.get('collections', [])]
assert actual == expected
@pytest.mark.parametrize('requirements_file', ['''
collections:
- name: namespace.collection1
version: ">=1.0.0,<=2.0.0"
source: https://galaxy-dev.ansible.com
- namespace.collection2'''], indirect=True)
def test_parse_requirements_with_extra_info(requirements_cli, requirements_file):
actual = requirements_cli._parse_requirements_file(requirements_file)
actual['collections'] = [('%s.%s' % (r.namespace, r.name), r.ver, r.src, r.type,) for r in actual.get('collections', [])]
assert len(actual['roles']) == 0
assert len(actual['collections']) == 2
assert actual['collections'][0][0] == 'namespace.collection1'
assert actual['collections'][0][1] == '>=1.0.0,<=2.0.0'
assert actual['collections'][0][2].api_server == 'https://galaxy-dev.ansible.com'
assert actual['collections'][1] == ('namespace.collection2', '*', None, 'galaxy')
@pytest.mark.parametrize('requirements_file', ['''
roles:
- username.role_name
- src: username2.role_name2
- src: ssh://github.com/user/repo
scm: git
collections:
- namespace.collection2
'''], indirect=True)
def test_parse_requirements_with_roles_and_collections(requirements_cli, requirements_file):
actual = requirements_cli._parse_requirements_file(requirements_file)
actual['collections'] = [('%s.%s' % (r.namespace, r.name), r.ver, r.src, r.type,) for r in actual.get('collections', [])]
assert len(actual['roles']) == 3
assert actual['roles'][0].name == 'username.role_name'
assert actual['roles'][1].name == 'username2.role_name2'
assert actual['roles'][2].name == 'repo'
assert actual['roles'][2].src == 'ssh://github.com/user/repo'
assert len(actual['collections']) == 1
assert actual['collections'][0] == ('namespace.collection2', '*', None, 'galaxy')
@pytest.mark.parametrize('requirements_file', ['''
collections:
- name: namespace.collection
- name: namespace2.collection2
source: https://galaxy-dev.ansible.com/
- name: namespace3.collection3
source: server
'''], indirect=True)
def test_parse_requirements_with_collection_source(requirements_cli, requirements_file):
galaxy_api = GalaxyAPI(requirements_cli.api, 'server', 'https://config-server')
requirements_cli.api_servers.append(galaxy_api)
actual = requirements_cli._parse_requirements_file(requirements_file)
actual['collections'] = [('%s.%s' % (r.namespace, r.name), r.ver, r.src, r.type,) for r in actual.get('collections', [])]
assert actual['roles'] == []
assert len(actual['collections']) == 3
assert actual['collections'][0] == ('namespace.collection', '*', None, 'galaxy')
assert actual['collections'][1][0] == 'namespace2.collection2'
assert actual['collections'][1][1] == '*'
assert actual['collections'][1][2].api_server == 'https://galaxy-dev.ansible.com/'
assert actual['collections'][2][0] == 'namespace3.collection3'
assert actual['collections'][2][1] == '*'
assert actual['collections'][2][2].api_server == 'https://config-server'
@pytest.mark.parametrize('requirements_file', ['''
- username.included_role
- src: https://github.com/user/repo
'''], indirect=True)
def test_parse_requirements_roles_with_include(requirements_cli, requirements_file):
reqs = [
'ansible.role',
{'include': requirements_file},
]
parent_requirements = os.path.join(os.path.dirname(requirements_file), 'parent.yaml')
with open(to_bytes(parent_requirements), 'wb') as req_fd:
req_fd.write(to_bytes(yaml.safe_dump(reqs)))
actual = requirements_cli._parse_requirements_file(parent_requirements)
assert len(actual['roles']) == 3
assert actual['collections'] == []
assert actual['roles'][0].name == 'ansible.role'
assert actual['roles'][1].name == 'username.included_role'
assert actual['roles'][2].name == 'repo'
assert actual['roles'][2].src == 'https://github.com/user/repo'
@pytest.mark.parametrize('requirements_file', ['''
- username.role
- include: missing.yml
'''], indirect=True)
def test_parse_requirements_roles_with_include_missing(requirements_cli, requirements_file):
expected = "Failed to find include requirements file 'missing.yml' in '%s'" % to_native(requirements_file)
with pytest.raises(AnsibleError, match=expected):
requirements_cli._parse_requirements_file(requirements_file)
@pytest.mark.parametrize('requirements_file', ['''
collections:
- namespace.name
roles:
- namespace.name
'''], indirect=True)
def test_install_implicit_role_with_collections(requirements_file, monkeypatch):
mock_collection_install = MagicMock()
monkeypatch.setattr(GalaxyCLI, '_execute_install_collection', mock_collection_install)
mock_role_install = MagicMock()
monkeypatch.setattr(GalaxyCLI, '_execute_install_role', mock_role_install)
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
cli = GalaxyCLI(args=['ansible-galaxy', 'install', '-r', requirements_file])
cli.run()
assert mock_collection_install.call_count == 1
requirements = [('%s.%s' % (r.namespace, r.name), r.ver, r.src, r.type,) for r in mock_collection_install.call_args[0][0]]
assert requirements == [('namespace.name', '*', None, 'galaxy')]
assert mock_collection_install.call_args[0][1] == cli._get_default_collection_path()
assert mock_role_install.call_count == 1
assert len(mock_role_install.call_args[0][0]) == 1
assert str(mock_role_install.call_args[0][0][0]) == 'namespace.name'
assert not any(list('contains collections which will be ignored' in mock_call[1][0] for mock_call in mock_display.mock_calls))
@pytest.mark.parametrize('requirements_file', ['''
collections:
- namespace.name
roles:
- namespace.name
'''], indirect=True)
def test_install_explicit_role_with_collections(requirements_file, monkeypatch):
mock_collection_install = MagicMock()
monkeypatch.setattr(GalaxyCLI, '_execute_install_collection', mock_collection_install)
mock_role_install = MagicMock()
monkeypatch.setattr(GalaxyCLI, '_execute_install_role', mock_role_install)
mock_display = MagicMock()
monkeypatch.setattr(Display, 'vvv', mock_display)
cli = GalaxyCLI(args=['ansible-galaxy', 'role', 'install', '-r', requirements_file])
cli.run()
assert mock_collection_install.call_count == 0
assert mock_role_install.call_count == 1
assert len(mock_role_install.call_args[0][0]) == 1
assert str(mock_role_install.call_args[0][0][0]) == 'namespace.name'
assert any(list('contains collections which will be ignored' in mock_call[1][0] for mock_call in mock_display.mock_calls))
@pytest.mark.parametrize('requirements_file', ['''
collections:
- namespace.name
roles:
- namespace.name
'''], indirect=True)
def test_install_role_with_collections_and_path(requirements_file, monkeypatch):
mock_collection_install = MagicMock()
monkeypatch.setattr(GalaxyCLI, '_execute_install_collection', mock_collection_install)
mock_role_install = MagicMock()
monkeypatch.setattr(GalaxyCLI, '_execute_install_role', mock_role_install)
mock_display = MagicMock()
monkeypatch.setattr(Display, 'warning', mock_display)
cli = GalaxyCLI(args=['ansible-galaxy', 'install', '-p', 'path', '-r', requirements_file])
cli.run()
assert mock_collection_install.call_count == 0
assert mock_role_install.call_count == 1
assert len(mock_role_install.call_args[0][0]) == 1
assert str(mock_role_install.call_args[0][0][0]) == 'namespace.name'
assert any(list('contains collections which will be ignored' in mock_call[1][0] for mock_call in mock_display.mock_calls))
@pytest.mark.parametrize('requirements_file', ['''
collections:
- namespace.name
roles:
- namespace.name
'''], indirect=True)
def test_install_collection_with_roles(requirements_file, monkeypatch):
mock_collection_install = MagicMock()
monkeypatch.setattr(GalaxyCLI, '_execute_install_collection', mock_collection_install)
mock_role_install = MagicMock()
monkeypatch.setattr(GalaxyCLI, '_execute_install_role', mock_role_install)
mock_display = MagicMock()
monkeypatch.setattr(Display, 'vvv', mock_display)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', '-r', requirements_file])
cli.run()
assert mock_collection_install.call_count == 1
requirements = [('%s.%s' % (r.namespace, r.name), r.ver, r.src, r.type,) for r in mock_collection_install.call_args[0][0]]
assert requirements == [('namespace.name', '*', None, 'galaxy')]
assert mock_role_install.call_count == 0
assert any(list('contains roles which will be ignored' in mock_call[1][0] for mock_call in mock_display.mock_calls))
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,977 |
Failed test when the path in the cgroups information contains ":"
|
### Summary
I get a `ValueError: too many values to unpack (expected 3)` on the line `cid, subsystem, path = value.split(':')` when running ansible-test
at https://github.com/ansible/ansible/blob/5812cabaf53a7c972c73a4e45faa57032e5c1186/test/lib/ansible_test/_internal/cgroup.py#L47
The data comes from https://github.com/ansible/ansible/blob/5812cabaf53a7c972c73a4e45faa57032e5c1186/test/lib/ansible_test/_internal/docker_util.py#L303
Would it be possible to change the line to something like:
```python
cid, subsystem, path = value.split(':', maxsplit=2)
```
(edited after a better analysis of the problem)
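For illustration, a minimal standalone sketch of the failure mode (the cgroup line below is a made-up example of a path that itself contains a colon, as can happen on CRI-O):
```python
line = '12:cpu,cpuacct:/kubepods/pod1234/crio-conmon:extra'

# The unbounded split yields four fields here, so three-way unpacking fails
try:
    cid, subsystem, path = line.split(':')
except ValueError as exc:
    print(exc)  # too many values to unpack (expected 3)

# Capping the split keeps everything after the second colon in `path`
cid, subsystem, path = line.split(':', maxsplit=2)
print((cid, subsystem, path))
# ('12', 'cpu,cpuacct', '/kubepods/pod1234/crio-conmon:extra')
```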
### Issue Type
Bug Report
### Component Name
cgroup.py
### Ansible Version
```console
ansible-8.5.0
ansible-core-2.15.5
```
### Configuration
```console
problem and solution already in Summary
```
### OS / Environment
cluster K8S/CRI-O with Screwdriver
### Steps to Reproduce
problem and solution already in Summary
### Expected Results
problem and solution already in Summary
### Actual Results
```console
problem and solution already in Summary
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81977
|
https://github.com/ansible/ansible/pull/82040
|
09d943445c49c119e90787a5d28703c0d70a9271
|
e933d9d8a6155478ce99518d111220e680201ca2
| 2023-10-15T14:08:23Z |
python
| 2023-10-19T22:30:32Z |
changelogs/fragments/ansible-test-cgroup-split.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,977 |
Failed test when the path in the cgroups information contains ":"
|
### Summary
I get a `ValueError: too many values to unpack (expected 3)` on the line `cid, subsystem, path = value.split(':')` when running ansible-test
at https://github.com/ansible/ansible/blob/5812cabaf53a7c972c73a4e45faa57032e5c1186/test/lib/ansible_test/_internal/cgroup.py#L47
The data comes from https://github.com/ansible/ansible/blob/5812cabaf53a7c972c73a4e45faa57032e5c1186/test/lib/ansible_test/_internal/docker_util.py#L303
Would it be possible to change the line to something like:
```python
cid, subsystem, path = value.split(':', maxsplit=2)
```
(edited after a better analysis of the problem)
### Issue Type
Bug Report
### Component Name
cgroup.py
### Ansible Version
```console
ansible-8.5.0
ansible-core-2.15.5
```
### Configuration
```console
problem and solution already in Summary
```
### OS / Environment
cluster K8S/CRI-O with Screwdriver
### Steps to Reproduce
problem and solution already in Summary
### Expected Results
problem and solution already in Summary
### Actual Results
```console
problem and solution already in Summary
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81977
|
https://github.com/ansible/ansible/pull/82040
|
09d943445c49c119e90787a5d28703c0d70a9271
|
e933d9d8a6155478ce99518d111220e680201ca2
| 2023-10-15T14:08:23Z |
python
| 2023-10-19T22:30:32Z |
test/lib/ansible_test/_internal/cgroup.py
|
"""Linux control group constants, classes and utilities."""
from __future__ import annotations
import codecs
import dataclasses
import pathlib
import re
class CGroupPath:
"""Linux cgroup path constants."""
ROOT = '/sys/fs/cgroup'
SYSTEMD = '/sys/fs/cgroup/systemd'
SYSTEMD_RELEASE_AGENT = '/sys/fs/cgroup/systemd/release_agent'
class MountType:
"""Linux filesystem mount type constants."""
TMPFS = 'tmpfs'
CGROUP_V1 = 'cgroup'
CGROUP_V2 = 'cgroup2'
@dataclasses.dataclass(frozen=True)
class CGroupEntry:
"""A single cgroup entry parsed from '/proc/{pid}/cgroup' in the proc filesystem."""
id: int
subsystem: str
path: pathlib.PurePosixPath
@property
def root_path(self) -> pathlib.PurePosixPath:
"""The root path for this cgroup subsystem."""
return pathlib.PurePosixPath(CGroupPath.ROOT, self.subsystem)
@property
def full_path(self) -> pathlib.PurePosixPath:
"""The full path for this cgroup subsystem."""
return pathlib.PurePosixPath(self.root_path, str(self.path).lstrip('/'))
@classmethod
def parse(cls, value: str) -> CGroupEntry:
"""Parse the given cgroup line from the proc filesystem and return a cgroup entry."""
cid, subsystem, path = value.split(':', maxsplit=2)
return cls(
id=int(cid),
subsystem=subsystem.removeprefix('name='),
path=pathlib.PurePosixPath(path),
)
@classmethod
def loads(cls, value: str) -> tuple[CGroupEntry, ...]:
"""Parse the given output from the proc filesystem and return a tuple of cgroup entries."""
return tuple(cls.parse(line) for line in value.splitlines())
@dataclasses.dataclass(frozen=True)
class MountEntry:
"""A single mount info entry parsed from '/proc/{pid}/mountinfo' in the proc filesystem."""
mount_id: int
parent_id: int
device_major: int
device_minor: int
root: pathlib.PurePosixPath
path: pathlib.PurePosixPath
options: tuple[str, ...]
fields: tuple[str, ...]
type: str
source: pathlib.PurePosixPath
super_options: tuple[str, ...]
@classmethod
def parse(cls, value: str) -> MountEntry:
"""Parse the given mount info line from the proc filesystem and return a mount entry."""
# See: https://man7.org/linux/man-pages/man5/proc.5.html
# See: https://github.com/torvalds/linux/blob/aea23e7c464bfdec04b52cf61edb62030e9e0d0a/fs/proc_namespace.c#L135
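# Example line from proc(5): '36 35 98:0 /mnt1 /mnt2 rw,noatime master:1 - ext3 /dev/root rw,errors=continue'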
mount_id, parent_id, device_major_minor, root, path, options, *remainder = value.split(' ')
fields = remainder[:-4]
separator, mtype, source, super_options = remainder[-4:]
assert separator == '-'
device_major, device_minor = device_major_minor.split(':')
return cls(
mount_id=int(mount_id),
parent_id=int(parent_id),
device_major=int(device_major),
device_minor=int(device_minor),
root=_decode_path(root),
path=_decode_path(path),
options=tuple(options.split(',')),
fields=tuple(fields),
type=mtype,
source=_decode_path(source),
super_options=tuple(super_options.split(',')),
)
@classmethod
def loads(cls, value: str) -> tuple[MountEntry, ...]:
"""Parse the given output from the proc filesystem and return a tuple of mount info entries."""
return tuple(cls.parse(line) for line in value.splitlines())
def _decode_path(value: str) -> pathlib.PurePosixPath:
"""Decode and return a path which may contain octal escape sequences."""
# See: https://github.com/torvalds/linux/blob/aea23e7c464bfdec04b52cf61edb62030e9e0d0a/fs/proc_namespace.c#L150
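# Example: a path containing a space is reported as '/mnt/foo\040bar' and decodes to '/mnt/foo bar'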
path = re.sub(r'(\\[0-7]{3})', lambda m: codecs.decode(m.group(0).encode('ascii'), 'unicode_escape'), value)
return pathlib.PurePosixPath(path)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,897 |
Improved Jinja plugin caching breaks loading multiple custom filter plugins with same name
|
### Summary
Due to the recent issues with Ansible Galaxy collections in the default AWX execution environment (see https://github.com/ansible/awx/issues/14495#issuecomment-1746383397), a new AWX EE has been shipped which contains Ansible Core `v2.15.5rc1`, and we upgraded to it as well.
This unfortunately ended up breaking various playbooks on our side, which made use of custom Jinja filters stored within individual role directories. As an example, here is the error message for a dummy role which uses an existing filter named `hello`:
```
fatal: [localhost]: FAILED! => {
"msg": "template error while templating string: Could not load \"hello\": 'hello'. String: {{ \"Ansible\" | hello(\"!\") }}. Could not load \"hello\": 'hello'"
}
```
After a bit of digging and testing various configurations, I noticed that the issue was introduced between `v2.15.4` and `v2.15.5rc1`, specifically by this PR: https://github.com/ansible/ansible/pull/79781
The issue only appears under the following conditions:
- At least two roles exist, each with their own `filter_plugins` directory
- At least two roles use the same name for the Python module which implements the custom filter(s), e.g. `custom.py`
- At least one role whose filter plugin module has that same name is executed BEFORE the role whose custom filter is used
Running the second role then fails with an error stating that the filter could not be loaded. As a workaround, the issue can be prevented by giving each filter plugin module a unique filename (unique across **all** roles), e.g. `custom1.py` and `custom2.py`
Last but not least, I also reproduced this issue when running the latest `develop` branch and verified that reverting the [merge commit for #79781](https://github.com/ansible/ansible/commit/dd79c49a4de3a6dd5bd9d31503bd7846475e8e57) fixes the issue, so it seems like this updated cache routine ended up breaking this functionality.
I created a [reproducer repository on GitHub](https://github.com/ppmathis/ansible-plugin-issue) for further reference.
### Issue Type
Bug Report
### Component Name
loader
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.5rc1]
config file = None
configured module search path = ['/runner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /runner/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.17 (main, Aug 9 2023, 00:00:00) [GCC 11.4.1 20230605 (Red Hat 11.4.1-2)] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Version information and configuration dump based on `quay.io/ansible/awx-ee:latest` with digest `921344c370b8844de83f693773853fab2f754ae738f6dee4ee5e101d8ee760eb` (ships `v2.15.5rc1` as of today), but issue was also reproduced with current `develop` branch from Ansible Core.
Other details such as the OS version do not really matter; it's an issue within the plugin loader of Ansible Core and can be easily reproduced anywhere, including blank Python container images.
### Steps to Reproduce
I created a reproducer repository at https://github.com/ppmathis/ansible-plugin-issue which has a minimal example for triggering this issue with `v2.15.5rc1`. Alternatively, you can reproduce this structure yourself:
1. Create a role `first-role` with a custom filter plugin module named `custom.py` and write any custom filter. Add a task file which uses this filter somehow. In my reproducer repository, I called the filter `goodbye`.
2. Create a second role `second-role` with a custom filter plugin module which is also named `custom.py` and write another filter. Add a task file which uses this filter somehow. In my reproducer repository, I called the filter `hello`; a minimal sketch of such a module is shown after this list.
3. Create a new playbook which includes both roles.
4. Run this playbook using `ansible-playbook` with no specific flags or options.
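For reference, here is a minimal sketch of what such a `custom.py` could look like (the filter name and template are taken from the error output below; the actual module in the reproducer repository may differ):
```python
# roles/second-role/filter_plugins/custom.py

def hello(value, suffix=''):
    """Build the greeting used by the failing template."""
    return 'Hello {0}{1}'.format(value, suffix)


class FilterModule(object):
    """Entry point Ansible looks for in every file under filter_plugins/."""

    def filters(self):
        # Map Jinja2 filter names to the callables implementing them.
        return {'hello': hello}
```
A task can then use the filter as `{{ "Ansible" | hello("!") }}`, which is exactly the template shown in the failing task output.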
### Expected Results
Both roles should be able to use the custom filters without any issue, even when the respective Python modules have the same filename.
### Actual Results
```console
ansible-playbook [core 2.15.5rc1]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /app/venv/lib/python3.12/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /app/venv/bin/ansible-playbook
python version = 3.12.0 (main, Oct 3 2023, 01:48:15) [GCC 12.2.0] (/app/venv/bin/python)
jinja version = 3.1.2
libyaml = True
No config file found; using defaults
setting up inventory plugins
Loading collection ansible.builtin from
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
Loading callback plugin default of type stdout, v2.0 from /app/venv/lib/python3.12/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: site.yml ***************************************************************************************************************************************************************************************************************************************************
Positional arguments: site.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/etc/ansible/hosts',)
forks: 5
1 plays in site.yml
PLAY [localhost] *****************************************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ***********************************************************************************************************************************************************************************************************************************************
task path: /app/site.yml:2
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726 `" && echo ansible-tmp-1696435231.9647672-10-91811106198726="` echo /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726 `" ) && sleep 0'
Using module file /app/venv/lib/python3.12/site-packages/ansible/modules/setup.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-1gjwhoirf/tmprgv1l_0w TO /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726/ /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/app/venv/bin/python /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
TASK [first-role : ansible.builtin.debug] ****************************************************************************************************************************************************************************************************************************
task path: /app/roles/first-role/tasks/main.yml:2
ok: [localhost] => {
"msg": "Goodbye Ansible!"
}
TASK [second-role : ansible.builtin.debug] ***************************************************************************************************************************************************************************************************************************
task path: /app/roles/second-role/tasks/main.yml:2
fatal: [localhost]: FAILED! => {
"msg": "template error while templating string: Could not load \"hello\": 'hello'. String: {{ \"Ansible\" | hello(\"!\") }}. Could not load \"hello\": 'hello'"
}
PLAY RECAP ***********************************************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81897
|
https://github.com/ansible/ansible/pull/82002
|
e933d9d8a6155478ce99518d111220e680201ca2
|
b4566c18b3b0640d62c52e5ab43a4b7d64a9ddfc
| 2023-10-04T16:35:38Z |
python
| 2023-10-20T23:00:41Z |
changelogs/fragments/j2_load_fix.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,897 |
Improved Jinja plugin caching breaks loading multiple custom filter plugins with same name
|
### Summary
Due to the recent issues between the default AWX execution environment and Ansible Galaxy collections (see https://github.com/ansible/awx/issues/14495#issuecomment-1746383397), a new AWX EE has been shipped which contains Ansible Core `v2.15.5rc1`, and we upgraded to it.
This unfortunately ended up breaking various playbooks on our side that make use of custom Jinja filters stored within individual role directories. As an example, here is the error message for a dummy role that uses an existing filter named `hello`:
```
fatal: [localhost]: FAILED! => {
"msg": "template error while templating string: Could not load \"hello\": 'hello'. String: {{ \"Ansible\" | hello(\"!\") }}. Could not load \"hello\": 'hello'"
}
```
After a bit of digging and testing various configurations, I noticed that the issue was introduced between `v2.15.4` and `v2.15.5rc1`, specifically by this PR: https://github.com/ansible/ansible/pull/79781
The issue only appears under the following conditions:
- At least two roles exist, each with their own `filter_plugins` directory
- At least two roles use the same name for the Python module which implements the custom filter(s), e.g. `custom.py`
- At least one role whose filter plugin module has that same filename is executed BEFORE the role which uses the custom filter
Running the second role will then fail with an error message stating that the filter could not be loaded. As a workaround, the issue can be prevented by giving each filter plugin module a unique filename (unique across **all** roles), e.g. `custom1.py` and `custom2.py`.
Last but not least, I also reproduced this issue when running the latest `develop` branch and verified that reverting the [merge commit for #79781](https://github.com/ansible/ansible/commit/dd79c49a4de3a6dd5bd9d31503bd7846475e8e57) fixes the issue, so it seems like this updated cache routine ended up breaking this functionality.
I created a [reproducer repository on GitHub](https://github.com/ppmathis/ansible-plugin-issue) for further reference.
### Issue Type
Bug Report
### Component Name
loader
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.5rc1]
config file = None
configured module search path = ['/runner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /runner/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.17 (main, Aug 9 2023, 00:00:00) [GCC 11.4.1 20230605 (Red Hat 11.4.1-2)] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Version information and configuration dump based on `quay.io/ansible/awx-ee:latest` with digest `921344c370b8844de83f693773853fab2f754ae738f6dee4ee5e101d8ee760eb` (ships `v2.15.5rc1` as of today), but issue was also reproduced with current `develop` branch from Ansible Core.
Other details such as the OS version do not really matter; it's an issue within the plugin loader of Ansible Core and can be easily reproduced anywhere, including blank Python container images.
### Steps to Reproduce
I created a reproducer repository at https://github.com/ppmathis/ansible-plugin-issue which has a minimal example for triggering this issue with `v2.15.5rc1`. Alternatively, you can reproduce this structure yourself:
1. Create a role `first-role` with a custom filter plugin module named `custom.py` and write any custom filter. Add a task file which uses this filter somehow. In my reproducer repository, I called the filter `goodbye`.
2. Create a second role `second-role` with a custom filter plugin module which is also named `custom.py` and write another filter. Add a task file which uses this filter somehow. In my reproducer repository, I called the filter `hello`.
3. Create a new playbook which includes both roles.
4. Run this playbook using `ansible-playbook` with no specific flags or options.
### Expected Results
Both roles should be able to use the custom filters without any issue, even when the respective Python modules have the same filename.
### Actual Results
```console
ansible-playbook [core 2.15.5rc1]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /app/venv/lib/python3.12/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /app/venv/bin/ansible-playbook
python version = 3.12.0 (main, Oct 3 2023, 01:48:15) [GCC 12.2.0] (/app/venv/bin/python)
jinja version = 3.1.2
libyaml = True
No config file found; using defaults
setting up inventory plugins
Loading collection ansible.builtin from
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
Loading callback plugin default of type stdout, v2.0 from /app/venv/lib/python3.12/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: site.yml ***************************************************************************************************************************************************************************************************************************************************
Positional arguments: site.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/etc/ansible/hosts',)
forks: 5
1 plays in site.yml
PLAY [localhost] *****************************************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ***********************************************************************************************************************************************************************************************************************************************
task path: /app/site.yml:2
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726 `" && echo ansible-tmp-1696435231.9647672-10-91811106198726="` echo /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726 `" ) && sleep 0'
Using module file /app/venv/lib/python3.12/site-packages/ansible/modules/setup.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-1gjwhoirf/tmprgv1l_0w TO /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726/ /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/app/venv/bin/python /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
TASK [first-role : ansible.builtin.debug] ****************************************************************************************************************************************************************************************************************************
task path: /app/roles/first-role/tasks/main.yml:2
ok: [localhost] => {
"msg": "Goodbye Ansible!"
}
TASK [second-role : ansible.builtin.debug] ***************************************************************************************************************************************************************************************************************************
task path: /app/roles/second-role/tasks/main.yml:2
fatal: [localhost]: FAILED! => {
"msg": "template error while templating string: Could not load \"hello\": 'hello'. String: {{ \"Ansible\" | hello(\"!\") }}. Could not load \"hello\": 'hello'"
}
PLAY RECAP ***********************************************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81897
|
https://github.com/ansible/ansible/pull/82002
|
e933d9d8a6155478ce99518d111220e680201ca2
|
b4566c18b3b0640d62c52e5ab43a4b7d64a9ddfc
| 2023-10-04T16:35:38Z |
python
| 2023-10-20T23:00:41Z |
lib/ansible/plugins/loader.py
|
# (c) 2012, Daniel Hokka Zakrisson <[email protected]>
# (c) 2012-2014, Michael DeHaan <[email protected]> and others
# (c) 2017, Toshio Kuratomi <[email protected]>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import annotations
import glob
import os
import os.path
import pkgutil
import sys
import warnings
from collections import defaultdict, namedtuple
from importlib import import_module
from traceback import format_exc
import ansible.module_utils.compat.typing as t
from .filter import AnsibleJinja2Filter
from .test import AnsibleJinja2Test
from ansible import __version__ as ansible_version
from ansible import constants as C
from ansible.errors import AnsibleError, AnsiblePluginCircularRedirect, AnsiblePluginRemovedError, AnsibleCollectionUnsupportedVersionError
from ansible.module_utils.common.text.converters import to_bytes, to_text, to_native
from ansible.module_utils.six import string_types
from ansible.parsing.utils.yaml import from_yaml
from ansible.parsing.yaml.loader import AnsibleLoader
from ansible.plugins import get_plugin_class, MODULE_CACHE, PATH_CACHE, PLUGIN_PATH_CACHE
from ansible.utils.collection_loader import AnsibleCollectionConfig, AnsibleCollectionRef
from ansible.utils.collection_loader._collection_finder import _AnsibleCollectionFinder, _get_collection_metadata
from ansible.utils.display import Display
from ansible.utils.plugin_docs import add_fragments
# TODO: take the packaging dep, or vendor SpecifierSet?
try:
from packaging.specifiers import SpecifierSet
from packaging.version import Version
except ImportError:
SpecifierSet = None # type: ignore[misc]
Version = None # type: ignore[misc]
import importlib.util
_PLUGIN_FILTERS = defaultdict(frozenset) # type: t.DefaultDict[str, frozenset]
display = Display()
get_with_context_result = namedtuple('get_with_context_result', ['object', 'plugin_load_context'])
def get_all_plugin_loaders():
return [(name, obj) for (name, obj) in globals().items() if isinstance(obj, PluginLoader)]
def add_all_plugin_dirs(path):
''' add any existing plugin dirs in the path provided '''
b_path = os.path.expanduser(to_bytes(path, errors='surrogate_or_strict'))
if os.path.isdir(b_path):
for name, obj in get_all_plugin_loaders():
if obj.subdir:
plugin_path = os.path.join(b_path, to_bytes(obj.subdir))
if os.path.isdir(plugin_path):
obj.add_directory(to_text(plugin_path))
else:
display.warning("Ignoring invalid path provided to plugin path: '%s' is not a directory" % to_text(path))
def get_shell_plugin(shell_type=None, executable=None):
if not shell_type:
# default to sh
shell_type = 'sh'
# mostly for backwards compat
if executable:
if isinstance(executable, string_types):
shell_filename = os.path.basename(executable)
try:
shell = shell_loader.get(shell_filename)
except Exception:
shell = None
if shell is None:
for shell in shell_loader.all():
if shell_filename in shell.COMPATIBLE_SHELLS:
shell_type = shell.SHELL_FAMILY
break
else:
raise AnsibleError("Either a shell type or a shell executable must be provided ")
shell = shell_loader.get(shell_type)
if not shell:
raise AnsibleError("Could not find the shell plugin required (%s)." % shell_type)
if executable:
setattr(shell, 'executable', executable)
return shell
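# Usage sketch: get_shell_plugin(shell_type='sh') returns the 'sh' shell plugin,
# while get_shell_plugin(executable='/bin/bash') matches on the executable's
# basename (falling back to a COMPATIBLE_SHELLS scan) and records the executable
# on the returned plugin instance.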
def add_dirs_to_loader(which_loader, paths):
loader = getattr(sys.modules[__name__], '%s_loader' % which_loader)
for path in paths:
loader.add_directory(path, with_subdir=True)
class PluginPathContext(object):
def __init__(self, path, internal):
self.path = path
self.internal = internal
class PluginLoadContext(object):
def __init__(self):
self.original_name = None
self.redirect_list = []
self.error_list = []
self.import_error_list = []
self.load_attempts = []
self.pending_redirect = None
self.exit_reason = None
self.plugin_resolved_path = None
self.plugin_resolved_name = None
self.plugin_resolved_collection = None # empty string for resolved plugins from user-supplied paths
self.deprecated = False
self.removal_date = None
self.removal_version = None
self.deprecation_warnings = []
self.resolved = False
self._resolved_fqcn = None
self.action_plugin = None
@property
def resolved_fqcn(self):
if not self.resolved:
return
if not self._resolved_fqcn:
final_plugin = self.redirect_list[-1]
if AnsibleCollectionRef.is_valid_fqcr(final_plugin) and final_plugin.startswith('ansible.legacy.'):
final_plugin = final_plugin.split('ansible.legacy.')[-1]
if self.plugin_resolved_collection and not AnsibleCollectionRef.is_valid_fqcr(final_plugin):
final_plugin = self.plugin_resolved_collection + '.' + final_plugin
self._resolved_fqcn = final_plugin
return self._resolved_fqcn
def record_deprecation(self, name, deprecation, collection_name):
if not deprecation:
return self
# The `or ''` instead of using `.get(..., '')` makes sure that even if the user explicitly
# sets `warning_text` to `~` (None) or `false`, we still get an empty string.
warning_text = deprecation.get('warning_text', None) or ''
removal_date = deprecation.get('removal_date', None)
removal_version = deprecation.get('removal_version', None)
# If both removal_date and removal_version are specified, use removal_date
if removal_date is not None:
removal_version = None
warning_text = '{0} has been deprecated.{1}{2}'.format(name, ' ' if warning_text else '', warning_text)
display.deprecated(warning_text, date=removal_date, version=removal_version, collection_name=collection_name)
self.deprecated = True
if removal_date:
self.removal_date = removal_date
if removal_version:
self.removal_version = removal_version
self.deprecation_warnings.append(warning_text)
return self
def resolve(self, resolved_name, resolved_path, resolved_collection, exit_reason, action_plugin):
self.pending_redirect = None
self.plugin_resolved_name = resolved_name
self.plugin_resolved_path = resolved_path
self.plugin_resolved_collection = resolved_collection
self.exit_reason = exit_reason
self.resolved = True
self.action_plugin = action_plugin
return self
def redirect(self, redirect_name):
self.pending_redirect = redirect_name
self.exit_reason = 'pending redirect resolution from {0} to {1}'.format(self.original_name, redirect_name)
self.resolved = False
return self
def nope(self, exit_reason):
self.pending_redirect = None
self.exit_reason = exit_reason
self.resolved = False
return self
class PluginLoader:
'''
PluginLoader loads plugins from the configured plugin directories.
It searches for plugins by iterating through the combined list of play basedirs, configured
paths, and the python path. The first match is used.
'''
def __init__(self, class_name, package, config, subdir, aliases=None, required_base_class=None):
aliases = {} if aliases is None else aliases
self.class_name = class_name
self.base_class = required_base_class
self.package = package
self.subdir = subdir
# FIXME: remove alias dict in favor of alias by symlink?
self.aliases = aliases
if config and not isinstance(config, list):
config = [config]
elif not config:
config = []
self.config = config
if class_name not in MODULE_CACHE:
MODULE_CACHE[class_name] = {}
if class_name not in PATH_CACHE:
PATH_CACHE[class_name] = None
if class_name not in PLUGIN_PATH_CACHE:
PLUGIN_PATH_CACHE[class_name] = defaultdict(dict)
# hold dirs added at runtime outside of config
self._extra_dirs = []
# caches
self._module_cache = MODULE_CACHE[class_name]
self._paths = PATH_CACHE[class_name]
self._plugin_path_cache = PLUGIN_PATH_CACHE[class_name]
try:
self._plugin_instance_cache = {} if self.type == 'vars' else None
except ValueError:
self._plugin_instance_cache = None
self._searched_paths = set()
@property
def type(self):
return AnsibleCollectionRef.legacy_plugin_dir_to_plugin_type(self.subdir)
def __repr__(self):
return 'PluginLoader(type={0})'.format(self.type)
def _clear_caches(self):
if C.OLD_PLUGIN_CACHE_CLEARING:
self._paths = None
else:
# reset global caches
MODULE_CACHE[self.class_name] = {}
PATH_CACHE[self.class_name] = None
PLUGIN_PATH_CACHE[self.class_name] = defaultdict(dict)
# reset internal caches
self._module_cache = MODULE_CACHE[self.class_name]
self._paths = PATH_CACHE[self.class_name]
self._plugin_path_cache = PLUGIN_PATH_CACHE[self.class_name]
self._plugin_instance_cache = {} if self.type == 'vars' else None
self._searched_paths = set()
def __setstate__(self, data):
'''
Deserializer.
'''
class_name = data.get('class_name')
package = data.get('package')
config = data.get('config')
subdir = data.get('subdir')
aliases = data.get('aliases')
base_class = data.get('base_class')
PATH_CACHE[class_name] = data.get('PATH_CACHE')
PLUGIN_PATH_CACHE[class_name] = data.get('PLUGIN_PATH_CACHE')
self.__init__(class_name, package, config, subdir, aliases, base_class)
self._extra_dirs = data.get('_extra_dirs', [])
self._searched_paths = data.get('_searched_paths', set())
def __getstate__(self):
'''
Serializer.
'''
return dict(
class_name=self.class_name,
base_class=self.base_class,
package=self.package,
config=self.config,
subdir=self.subdir,
aliases=self.aliases,
_extra_dirs=self._extra_dirs,
_searched_paths=self._searched_paths,
PATH_CACHE=PATH_CACHE[self.class_name],
PLUGIN_PATH_CACHE=PLUGIN_PATH_CACHE[self.class_name],
)
def format_paths(self, paths):
''' Returns a string suitable for printing of the search path '''
# Uses a list to get the order right
ret = []
for i in paths:
if i not in ret:
ret.append(i)
return os.pathsep.join(ret)
def print_paths(self):
return self.format_paths(self._get_paths(subdirs=False))
def _all_directories(self, dir):
results = []
results.append(dir)
for root, subdirs, files in os.walk(dir, followlinks=True):
if '__init__.py' in files:
for x in subdirs:
results.append(os.path.join(root, x))
return results
def _get_package_paths(self, subdirs=True):
''' Gets the path of a Python package '''
if not self.package:
return []
if not hasattr(self, 'package_path'):
m = __import__(self.package)
parts = self.package.split('.')[1:]
for parent_mod in parts:
m = getattr(m, parent_mod)
self.package_path = to_text(os.path.dirname(m.__file__), errors='surrogate_or_strict')
if subdirs:
return self._all_directories(self.package_path)
return [self.package_path]
def _get_paths_with_context(self, subdirs=True):
''' Return a list of PluginPathContext objects to search for plugins in '''
# FIXME: This is potentially buggy if subdirs is sometimes True and sometimes False.
# In current usage, everything calls this with subdirs=True except for module_utils_loader and ansible-doc
# which always calls it with subdirs=False. So there currently isn't a problem with this caching.
if self._paths is not None:
return self._paths
ret = [PluginPathContext(p, False) for p in self._extra_dirs]
# look in any configured plugin paths, allow one level deep for subcategories
if self.config is not None:
for path in self.config:
path = os.path.abspath(os.path.expanduser(path))
if subdirs:
contents = glob.glob("%s/*" % path) + glob.glob("%s/*/*" % path)
for c in contents:
c = to_text(c, errors='surrogate_or_strict')
if os.path.isdir(c) and c not in ret:
ret.append(PluginPathContext(c, False))
path = to_text(path, errors='surrogate_or_strict')
if path not in ret:
ret.append(PluginPathContext(path, False))
# look for any plugins installed in the package subtree
# Note package path always gets added last so that every other type of
# path is searched before it.
ret.extend([PluginPathContext(p, True) for p in self._get_package_paths(subdirs=subdirs)])
# HACK: because powershell modules are in the same directory
# hierarchy as other modules we have to process them last. This is
# because powershell only works on windows but the other modules work
# anywhere (possibly including windows if the correct language
# interpreter is installed). the non-powershell modules can have any
        # file extension and thus powershell modules are picked up in that search.
# The non-hack way to fix this is to have powershell modules be
# a different PluginLoader/ModuleLoader. But that requires changing
        # other things too (known things to change would be PATH_CACHE,
        # PLUGIN_PATH_CACHE, and MODULE_CACHE). Since those three dicts key
# on the class_name and neither regular modules nor powershell modules
# would have class_names, they would not work as written.
#
# The expected sort order is paths in the order in 'ret' with paths ending in '/windows' at the end,
# also in the original order they were found in 'ret'.
# The .sort() method is guaranteed to be stable, so original order is preserved.
ret.sort(key=lambda p: p.path.endswith('/windows'))
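        # e.g. paths ['/a/modules', '/b/modules/windows', '/c/modules'] end up as
        # ['/a/modules', '/c/modules', '/b/modules/windows'] (illustrative values).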
# cache and return the result
self._paths = ret
return ret
def _get_paths(self, subdirs=True):
''' Return a list of paths to search for plugins in '''
paths_with_context = self._get_paths_with_context(subdirs=subdirs)
return [path_with_context.path for path_with_context in paths_with_context]
def _load_config_defs(self, name, module, path):
''' Reads plugin docs to find configuration setting definitions, to push to config manager for later use '''
# plugins w/o class name don't support config
if self.class_name:
type_name = get_plugin_class(self.class_name)
# if type name != 'module_doc_fragment':
if type_name in C.CONFIGURABLE_PLUGINS and not C.config.has_configuration_definition(type_name, name):
dstring = AnsibleLoader(getattr(module, 'DOCUMENTATION', ''), file_name=path).get_single_data()
# TODO: allow configurable plugins to use sidecar
# if not dstring:
# filename, cn = find_plugin_docfile( name, type_name, self, [os.path.dirname(path)], C.YAML_DOC_EXTENSIONS)
# # TODO: dstring = AnsibleLoader(, file_name=path).get_single_data()
if dstring:
add_fragments(dstring, path, fragment_loader=fragment_loader, is_module=(type_name == 'module'))
if 'options' in dstring and isinstance(dstring['options'], dict):
C.config.initialize_plugin_configuration_definitions(type_name, name, dstring['options'])
display.debug('Loaded config def from plugin (%s/%s)' % (type_name, name))
def add_directory(self, directory, with_subdir=False):
''' Adds an additional directory to the search path '''
directory = os.path.realpath(directory)
if directory is not None:
if with_subdir:
directory = os.path.join(directory, self.subdir)
if directory not in self._extra_dirs:
# append the directory and invalidate the path cache
self._extra_dirs.append(directory)
self._clear_caches()
display.debug('Added %s to loader search path' % (directory))
def _query_collection_routing_meta(self, acr, plugin_type, extension=None):
collection_pkg = import_module(acr.n_python_collection_package_name)
if not collection_pkg:
return None
# FIXME: shouldn't need this...
try:
# force any type-specific metadata postprocessing to occur
import_module(acr.n_python_collection_package_name + '.plugins.{0}'.format(plugin_type))
except ImportError:
pass
# this will be created by the collection PEP302 loader
collection_meta = getattr(collection_pkg, '_collection_meta', None)
if not collection_meta:
return None
# TODO: add subdirs support
# check for extension-specific entry first (eg 'setup.ps1')
# TODO: str/bytes on extension/name munging
if acr.subdirs:
subdir_qualified_resource = '.'.join([acr.subdirs, acr.resource])
else:
subdir_qualified_resource = acr.resource
entry = collection_meta.get('plugin_routing', {}).get(plugin_type, {}).get(subdir_qualified_resource + extension, None)
if not entry:
# try for extension-agnostic entry
entry = collection_meta.get('plugin_routing', {}).get(plugin_type, {}).get(subdir_qualified_resource, None)
return entry
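    # The routing metadata queried above comes from the collection's meta/runtime.yml,
    # which looks roughly like this (illustrative names):
    #
    #   plugin_routing:
    #     modules:
    #       old_module:
    #         redirect: ns.coll.new_module
    #         deprecation:
    #           removal_version: 3.0.0
    #           warning_text: Use ns.coll.new_module instead.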
def _find_fq_plugin(self, fq_name, extension, plugin_load_context, ignore_deprecated=False):
"""Search builtin paths to find a plugin. No external paths are searched,
meaning plugins inside roles inside collections will be ignored.
"""
plugin_load_context.resolved = False
plugin_type = AnsibleCollectionRef.legacy_plugin_dir_to_plugin_type(self.subdir)
acr = AnsibleCollectionRef.from_fqcr(fq_name, plugin_type)
# check collection metadata to see if any special handling is required for this plugin
routing_metadata = self._query_collection_routing_meta(acr, plugin_type, extension=extension)
action_plugin = None
# TODO: factor this into a wrapper method
if routing_metadata:
deprecation = routing_metadata.get('deprecation', None)
# this will no-op if there's no deprecation metadata for this plugin
if not ignore_deprecated:
plugin_load_context.record_deprecation(fq_name, deprecation, acr.collection)
tombstone = routing_metadata.get('tombstone', None)
# FIXME: clean up text gen
if tombstone:
removal_date = tombstone.get('removal_date')
removal_version = tombstone.get('removal_version')
warning_text = tombstone.get('warning_text') or ''
warning_text = '{0} has been removed.{1}{2}'.format(fq_name, ' ' if warning_text else '', warning_text)
removed_msg = display.get_deprecation_message(msg=warning_text, version=removal_version,
date=removal_date, removed=True,
collection_name=acr.collection)
plugin_load_context.removal_date = removal_date
plugin_load_context.removal_version = removal_version
plugin_load_context.resolved = True
plugin_load_context.exit_reason = removed_msg
raise AnsiblePluginRemovedError(removed_msg, plugin_load_context=plugin_load_context)
redirect = routing_metadata.get('redirect', None)
if redirect:
# Prevent mystery redirects that would be determined by the collections keyword
if not AnsibleCollectionRef.is_valid_fqcr(redirect):
raise AnsibleError(
f"Collection {acr.collection} contains invalid redirect for {fq_name}: {redirect}. "
"Redirects must use fully qualified collection names."
)
# FIXME: remove once this is covered in debug or whatever
display.vv("redirecting (type: {0}) {1} to {2}".format(plugin_type, fq_name, redirect))
# The name doing the redirection is added at the beginning of _resolve_plugin_step,
# but if the unqualified name is used in conjunction with the collections keyword, only
# the unqualified name is in the redirect list.
if fq_name not in plugin_load_context.redirect_list:
plugin_load_context.redirect_list.append(fq_name)
return plugin_load_context.redirect(redirect)
# TODO: non-FQCN case, do we support `.` prefix for current collection, assume it with no dots, require it for subdirs in current, or ?
if self.type == 'modules':
action_plugin = routing_metadata.get('action_plugin')
n_resource = to_native(acr.resource, errors='strict')
# we want this before the extension is added
full_name = '{0}.{1}'.format(acr.n_python_package_name, n_resource)
if extension:
n_resource += extension
pkg = sys.modules.get(acr.n_python_package_name)
if not pkg:
# FIXME: there must be cheaper/safer way to do this
try:
pkg = import_module(acr.n_python_package_name)
except ImportError:
return plugin_load_context.nope('Python package {0} not found'.format(acr.n_python_package_name))
pkg_path = os.path.dirname(pkg.__file__)
n_resource_path = os.path.join(pkg_path, n_resource)
# FIXME: and is file or file link or ...
if os.path.exists(n_resource_path):
return plugin_load_context.resolve(
full_name, to_text(n_resource_path), acr.collection, 'found exact match for {0} in {1}'.format(full_name, acr.collection), action_plugin)
if extension:
# the request was extension-specific, don't try for an extensionless match
return plugin_load_context.nope('no match for {0} in {1}'.format(to_text(n_resource), acr.collection))
# look for any matching extension in the package location (sans filter)
found_files = [f
for f in glob.iglob(os.path.join(pkg_path, n_resource) + '.*')
if os.path.isfile(f) and not f.endswith(C.MODULE_IGNORE_EXTS)]
if not found_files:
return plugin_load_context.nope('failed fuzzy extension match for {0} in {1}'.format(full_name, acr.collection))
found_files = sorted(found_files) # sort to ensure deterministic results, with the shortest match first
if len(found_files) > 1:
display.debug('Found several possible candidates for the plugin but using first: %s' % ','.join(found_files))
return plugin_load_context.resolve(
full_name, to_text(found_files[0]), acr.collection,
'found fuzzy extension match for {0} in {1}'.format(full_name, acr.collection), action_plugin)
def find_plugin(self, name, mod_type='', ignore_deprecated=False, check_aliases=False, collection_list=None):
''' Find a plugin named name '''
result = self.find_plugin_with_context(name, mod_type, ignore_deprecated, check_aliases, collection_list)
if result.resolved and result.plugin_resolved_path:
return result.plugin_resolved_path
return None
def find_plugin_with_context(self, name, mod_type='', ignore_deprecated=False, check_aliases=False, collection_list=None):
''' Find a plugin named name, returning contextual info about the load, recursively resolving redirection '''
plugin_load_context = PluginLoadContext()
plugin_load_context.original_name = name
while True:
result = self._resolve_plugin_step(name, mod_type, ignore_deprecated, check_aliases, collection_list, plugin_load_context=plugin_load_context)
if result.pending_redirect:
if result.pending_redirect in result.redirect_list:
raise AnsiblePluginCircularRedirect('plugin redirect loop resolving {0} (path: {1})'.format(result.original_name, result.redirect_list))
name = result.pending_redirect
result.pending_redirect = None
plugin_load_context = result
else:
break
# TODO: smuggle these to the controller when we're in a worker, reduce noise from normal things like missing plugin packages during collection search
if plugin_load_context.error_list:
display.warning("errors were encountered during the plugin load for {0}:\n{1}".format(name, plugin_load_context.error_list))
# TODO: display/return import_error_list? Only useful for forensics...
# FIXME: store structured deprecation data in PluginLoadContext and use display.deprecate
# if plugin_load_context.deprecated and C.config.get_config_value('DEPRECATION_WARNINGS'):
# for dw in plugin_load_context.deprecation_warnings:
# # TODO: need to smuggle these to the controller if we're in a worker context
# display.warning('[DEPRECATION WARNING] ' + dw)
return plugin_load_context
# FIXME: name bikeshed
def _resolve_plugin_step(self, name, mod_type='', ignore_deprecated=False,
check_aliases=False, collection_list=None, plugin_load_context=PluginLoadContext()):
if not plugin_load_context:
raise ValueError('A PluginLoadContext is required')
plugin_load_context.redirect_list.append(name)
plugin_load_context.resolved = False
if name in _PLUGIN_FILTERS[self.package]:
plugin_load_context.exit_reason = '{0} matched a defined plugin filter'.format(name)
return plugin_load_context
if mod_type:
suffix = mod_type
elif self.class_name:
# Ansible plugins that run in the controller process (most plugins)
suffix = '.py'
else:
# Only Ansible Modules. Ansible modules can be any executable so
# they can have any suffix
suffix = ''
# FIXME: need this right now so we can still load shipped PS module_utils- come up with a more robust solution
if (AnsibleCollectionRef.is_valid_fqcr(name) or collection_list) and not name.startswith('Ansible'):
if '.' in name or not collection_list:
candidates = [name]
else:
candidates = ['{0}.{1}'.format(c, name) for c in collection_list]
for candidate_name in candidates:
try:
plugin_load_context.load_attempts.append(candidate_name)
# HACK: refactor this properly
if candidate_name.startswith('ansible.legacy'):
# 'ansible.legacy' refers to the plugin finding behavior used before collections existed.
# They need to search 'library' and the various '*_plugins' directories in order to find the file.
plugin_load_context = self._find_plugin_legacy(name.removeprefix('ansible.legacy.'),
plugin_load_context, ignore_deprecated, check_aliases, suffix)
else:
# 'ansible.builtin' should be handled here. This means only internal, or builtin, paths are searched.
plugin_load_context = self._find_fq_plugin(candidate_name, suffix, plugin_load_context=plugin_load_context,
ignore_deprecated=ignore_deprecated)
# Pending redirects are added to the redirect_list at the beginning of _resolve_plugin_step.
# Once redirects are resolved, ensure the final FQCN is added here.
# e.g. 'ns.coll.module' is included rather than only 'module' if a collections list is provided:
# - module:
# collections: ['ns.coll']
if plugin_load_context.resolved and candidate_name not in plugin_load_context.redirect_list:
plugin_load_context.redirect_list.append(candidate_name)
if plugin_load_context.resolved or plugin_load_context.pending_redirect: # if we got an answer or need to chase down a redirect, return
return plugin_load_context
except (AnsiblePluginRemovedError, AnsiblePluginCircularRedirect, AnsibleCollectionUnsupportedVersionError):
# these are generally fatal, let them fly
raise
except ImportError as ie:
plugin_load_context.import_error_list.append(ie)
except Exception as ex:
# FIXME: keep actual errors, not just assembled messages
plugin_load_context.error_list.append(to_native(ex))
if plugin_load_context.error_list:
display.debug(msg='plugin lookup for {0} failed; errors: {1}'.format(name, '; '.join(plugin_load_context.error_list)))
plugin_load_context.exit_reason = 'no matches found for {0}'.format(name)
return plugin_load_context
# if we got here, there's no collection list and it's not an FQ name, so do legacy lookup
return self._find_plugin_legacy(name, plugin_load_context, ignore_deprecated, check_aliases, suffix)
def _find_plugin_legacy(self, name, plugin_load_context, ignore_deprecated=False, check_aliases=False, suffix=None):
"""Search library and various *_plugins paths in order to find the file.
This was behavior prior to the existence of collections.
"""
plugin_load_context.resolved = False
if check_aliases:
name = self.aliases.get(name, name)
# The particular cache to look for modules within. This matches the
# requested mod_type
pull_cache = self._plugin_path_cache[suffix]
try:
path_with_context = pull_cache[name]
plugin_load_context.plugin_resolved_path = path_with_context.path
plugin_load_context.plugin_resolved_name = name
plugin_load_context.plugin_resolved_collection = 'ansible.builtin' if path_with_context.internal else ''
plugin_load_context._resolved_fqcn = ('ansible.builtin.' + name if path_with_context.internal else name)
plugin_load_context.resolved = True
return plugin_load_context
except KeyError:
# Cache miss. Now let's find the plugin
pass
# TODO: Instead of using the self._paths cache (PATH_CACHE) and
# self._searched_paths we could use an iterator. Before enabling that
# we need to make sure we don't want to add additional directories
# (add_directory()) once we start using the iterator.
# We can use _get_paths_with_context() since add_directory() forces a cache refresh.
for path_with_context in (p for p in self._get_paths_with_context() if p.path not in self._searched_paths and os.path.isdir(to_bytes(p.path))):
path = path_with_context.path
b_path = to_bytes(path)
display.debug('trying %s' % path)
plugin_load_context.load_attempts.append(path)
internal = path_with_context.internal
try:
full_paths = (os.path.join(b_path, f) for f in os.listdir(b_path))
            except OSError as e:
                display.warning("Error accessing plugin paths: %s" % to_text(e))
                # skip this directory; otherwise full_paths would be undefined
                # (or stale from a previous iteration) in the loop below
                continue
for full_path in (to_native(f) for f in full_paths if os.path.isfile(f) and not f.endswith(b'__init__.py')):
full_name = os.path.basename(full_path)
# HACK: We have no way of executing python byte compiled files as ansible modules so specifically exclude them
# FIXME: I believe this is only correct for modules and module_utils.
                # For all other plugins we want .pyc and .pyo to be valid
if any(full_path.endswith(x) for x in C.MODULE_IGNORE_EXTS):
continue
splitname = os.path.splitext(full_name)
base_name = splitname[0]
try:
extension = splitname[1]
except IndexError:
extension = ''
# everything downstream expects unicode
full_path = to_text(full_path, errors='surrogate_or_strict')
# Module found, now enter it into the caches that match this file
if base_name not in self._plugin_path_cache['']:
self._plugin_path_cache[''][base_name] = PluginPathContext(full_path, internal)
if full_name not in self._plugin_path_cache['']:
self._plugin_path_cache[''][full_name] = PluginPathContext(full_path, internal)
if base_name not in self._plugin_path_cache[extension]:
self._plugin_path_cache[extension][base_name] = PluginPathContext(full_path, internal)
if full_name not in self._plugin_path_cache[extension]:
self._plugin_path_cache[extension][full_name] = PluginPathContext(full_path, internal)
self._searched_paths.add(path)
try:
path_with_context = pull_cache[name]
plugin_load_context.plugin_resolved_path = path_with_context.path
plugin_load_context.plugin_resolved_name = name
plugin_load_context.plugin_resolved_collection = 'ansible.builtin' if path_with_context.internal else ''
plugin_load_context._resolved_fqcn = 'ansible.builtin.' + name if path_with_context.internal else name
plugin_load_context.resolved = True
return plugin_load_context
except KeyError:
# Didn't find the plugin in this directory. Load modules from the next one
pass
# if nothing is found, try finding alias/deprecated
if not name.startswith('_'):
alias_name = '_' + name
# We've already cached all the paths at this point
if alias_name in pull_cache:
path_with_context = pull_cache[alias_name]
if not ignore_deprecated and not os.path.islink(path_with_context.path):
# FIXME: this is not always the case, some are just aliases
display.deprecated('%s is kept for backwards compatibility but usage is discouraged. ' # pylint: disable=ansible-deprecated-no-version
'The module documentation details page may explain more about this rationale.' % name.lstrip('_'))
plugin_load_context.plugin_resolved_path = path_with_context.path
plugin_load_context.plugin_resolved_name = alias_name
plugin_load_context.plugin_resolved_collection = 'ansible.builtin' if path_with_context.internal else ''
plugin_load_context._resolved_fqcn = 'ansible.builtin.' + alias_name if path_with_context.internal else alias_name
plugin_load_context.resolved = True
return plugin_load_context
# last ditch, if it's something that can be redirected, look for a builtin redirect before giving up
candidate_fqcr = 'ansible.builtin.{0}'.format(name)
if '.' not in name and AnsibleCollectionRef.is_valid_fqcr(candidate_fqcr):
return self._find_fq_plugin(fq_name=candidate_fqcr, extension=suffix, plugin_load_context=plugin_load_context, ignore_deprecated=ignore_deprecated)
return plugin_load_context.nope('{0} is not eligible for last-chance resolution'.format(name))
def has_plugin(self, name, collection_list=None):
''' Checks if a plugin named name exists '''
try:
return self.find_plugin(name, collection_list=collection_list) is not None
except Exception as ex:
if isinstance(ex, AnsibleError):
raise
# log and continue, likely an innocuous type/package loading failure in collections import
display.debug('has_plugin error: {0}'.format(to_text(ex)))
__contains__ = has_plugin
def _load_module_source(self, name, path):
# avoid collisions across plugins
if name.startswith('ansible_collections.'):
full_name = name
else:
full_name = '.'.join([self.package, name])
if full_name in sys.modules:
# Avoids double loading, See https://github.com/ansible/ansible/issues/13110
return sys.modules[full_name]
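        # NOTE: the cache key is '<package>.<name>', so two different files that share
        # a module name (e.g. two roles each shipping a filter_plugins/custom.py) map
        # to the same key here; callers that must avoid that shadowing (see all() for
        # filter/test plugins) pass in a uniquified name instead.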
with warnings.catch_warnings():
# FIXME: this still has issues if the module was previously imported but not "cached",
# we should bypass this entire codepath for things that are directly importable
warnings.simplefilter("ignore", RuntimeWarning)
spec = importlib.util.spec_from_file_location(to_native(full_name), to_native(path))
module = importlib.util.module_from_spec(spec)
# mimic import machinery; make the module-being-loaded available in sys.modules during import
# and remove if there's a failure...
sys.modules[full_name] = module
try:
spec.loader.exec_module(module)
except Exception:
del sys.modules[full_name]
raise
return module
def _update_object(self, obj, name, path, redirected_names=None, resolved=None):
# set extra info on the module, in case we want it later
setattr(obj, '_original_path', path)
setattr(obj, '_load_name', name)
setattr(obj, '_redirected_names', redirected_names or [])
names = []
if resolved:
names.append(resolved)
if redirected_names:
# reverse list so best name comes first
names.extend(redirected_names[::-1])
if not names:
raise AnsibleError(f"Missing FQCN for plugin source {name}")
setattr(obj, 'ansible_aliases', names)
setattr(obj, 'ansible_name', names[0])
def get(self, name, *args, **kwargs):
return self.get_with_context(name, *args, **kwargs).object
def get_with_context(self, name, *args, **kwargs):
''' instantiates a plugin of the given name using arguments '''
found_in_cache = True
class_only = kwargs.pop('class_only', False)
collection_list = kwargs.pop('collection_list', None)
if name in self.aliases:
name = self.aliases[name]
if self._plugin_instance_cache and (cached_load_result := self._plugin_instance_cache.get(name)):
# Resolving the FQCN is slow, even if we've passed in the resolved FQCN.
# Short-circuit here if we've previously resolved this name.
# This will need to be restricted if non-vars plugins start using the cache, since
# some non-fqcn plugin need to be resolved again with the collections list.
return get_with_context_result(*cached_load_result)
plugin_load_context = self.find_plugin_with_context(name, collection_list=collection_list)
if not plugin_load_context.resolved or not plugin_load_context.plugin_resolved_path:
# FIXME: this is probably an error (eg removed plugin)
return get_with_context_result(None, plugin_load_context)
fq_name = plugin_load_context.resolved_fqcn
if '.' not in fq_name and plugin_load_context.plugin_resolved_collection:
fq_name = '.'.join((plugin_load_context.plugin_resolved_collection, fq_name))
resolved_type_name = plugin_load_context.plugin_resolved_name
path = plugin_load_context.plugin_resolved_path
if self._plugin_instance_cache and (cached_load_result := self._plugin_instance_cache.get(fq_name)):
# This is unused by vars plugins, but it's here in case the instance cache expands to other plugin types.
# We get here if we've seen this plugin before, but it wasn't called with the resolved FQCN.
return get_with_context_result(*cached_load_result)
redirected_names = plugin_load_context.redirect_list or []
if path not in self._module_cache:
self._module_cache[path] = self._load_module_source(resolved_type_name, path)
found_in_cache = False
self._load_config_defs(resolved_type_name, self._module_cache[path], path)
obj = getattr(self._module_cache[path], self.class_name)
if self.base_class:
# The import path is hardcoded and should be the right place,
# so we are not expecting an ImportError.
module = __import__(self.package, fromlist=[self.base_class])
# Check whether this obj has the required base class.
try:
plugin_class = getattr(module, self.base_class)
except AttributeError:
return get_with_context_result(None, plugin_load_context)
if not issubclass(obj, plugin_class):
return get_with_context_result(None, plugin_load_context)
# FIXME: update this to use the load context
self._display_plugin_load(self.class_name, resolved_type_name, self._searched_paths, path, found_in_cache=found_in_cache, class_only=class_only)
if not class_only:
try:
# A plugin may need to use its _load_name in __init__ (for example, to set
# or get options from config), so update the object before using the constructor
instance = object.__new__(obj)
self._update_object(instance, resolved_type_name, path, redirected_names, fq_name)
obj.__init__(instance, *args, **kwargs) # pylint: disable=unnecessary-dunder-call
obj = instance
except TypeError as e:
if "abstract" in e.args[0]:
# Abstract Base Class or incomplete plugin, don't load
display.v('Returning not found on "%s" as it has unimplemented abstract methods; %s' % (resolved_type_name, to_native(e)))
return get_with_context_result(None, plugin_load_context)
raise
self._update_object(obj, resolved_type_name, path, redirected_names, fq_name)
if self._plugin_instance_cache is not None and getattr(obj, 'is_stateless', False):
# store under both the originally requested name and the resolved FQ name
self._plugin_instance_cache[name] = self._plugin_instance_cache[fq_name] = (obj, plugin_load_context)
return get_with_context_result(obj, plugin_load_context)
def _display_plugin_load(self, class_name, name, searched_paths, path, found_in_cache=None, class_only=None):
''' formats data to display debug info for plugin loading, also avoids processing unless really needed '''
if C.DEFAULT_DEBUG:
msg = 'Loading %s \'%s\' from %s' % (class_name, os.path.basename(name), path)
if len(searched_paths) > 1:
msg = '%s (searched paths: %s)' % (msg, self.format_paths(searched_paths))
if found_in_cache or class_only:
msg = '%s (found_in_cache=%s, class_only=%s)' % (msg, found_in_cache, class_only)
display.debug(msg)
def all(self, *args, **kwargs):
'''
Iterate through all plugins of this type, in configured paths (no collections)
A plugin loader is initialized with a specific type. This function is an iterator returning
all of the plugins of that type to the caller.
:kwarg path_only: If this is set to True, then we return the paths to where the plugins reside
instead of an instance of the plugin. This conflicts with class_only and both should
not be set.
:kwarg class_only: If this is set to True then we return the python class which implements
a plugin rather than an instance of the plugin. This conflicts with path_only and both
should not be set.
:kwarg _dedupe: By default, we only return one plugin per plugin name. Deduplication happens
in the same way as the :meth:`get` and :meth:`find_plugin` methods resolve which plugin
should take precedence. If this is set to False, then we return all of the plugins
found, including those with duplicate names. In the case of duplicates, the order in
which they are returned is the one that would take precedence first, followed by the
others in decreasing precedence order. This should only be used by subclasses which
want to manage their own deduplication of the plugins.
:*args: Any extra arguments are passed to each plugin when it is instantiated.
:**kwargs: Any extra keyword arguments are passed to each plugin when it is instantiated.
'''
# TODO: Change the signature of this method to:
# def all(return_type='instance', args=None, kwargs=None):
# if args is None: args = []
# if kwargs is None: kwargs = {}
# return_type can be instance, class, or path.
# These changes will mean that plugin parameters won't conflict with our params and
# will also make it impossible to request both a path and a class at the same time.
#
# Move _dedupe to be a class attribute, CUSTOM_DEDUPE, with subclasses for filters and
# tests setting it to True
dedupe = kwargs.pop('_dedupe', True)
path_only = kwargs.pop('path_only', False)
class_only = kwargs.pop('class_only', False)
# Having both path_only and class_only is a coding bug
if path_only and class_only:
raise AnsibleError('Do not set both path_only and class_only when calling PluginLoader.all()')
all_matches = []
found_in_cache = True
legacy_excluding_builtin = set()
for path_with_context in self._get_paths_with_context():
matches = glob.glob(to_native(os.path.join(path_with_context.path, "*.py")))
if not path_with_context.internal:
legacy_excluding_builtin.update(matches)
# we sort within each path, but keep path precedence from config
all_matches.extend(sorted(matches, key=os.path.basename))
loaded_modules = set()
for path in all_matches:
name = os.path.splitext(path)[0]
basename = os.path.basename(name)
if basename in _PLUGIN_FILTERS[self.package]:
display.debug("'%s' skipped due to a defined plugin filter" % basename)
continue
if basename == '__init__' or (basename == 'base' and self.package == 'ansible.plugins.cache'):
                # cache has a legacy 'base.py' file, which is a wrapper for __init__.py
display.debug("'%s' skipped due to reserved name" % basename)
continue
if dedupe and basename in loaded_modules:
display.debug("'%s' skipped as duplicate" % basename)
continue
loaded_modules.add(basename)
if path_only:
yield path
continue
if path in legacy_excluding_builtin:
fqcn = basename
else:
fqcn = f"ansible.builtin.{basename}"
if self._plugin_instance_cache is not None and fqcn in self._plugin_instance_cache:
# Here just in case, but we don't call all() multiple times for vars plugins, so this should not be used.
                yield self._plugin_instance_cache[fqcn][0]
continue
if path not in self._module_cache:
if self.type in ('filter', 'test'):
# filter and test plugin files can contain multiple plugins
# they must have a unique python module name to prevent them from shadowing each other
full_name = '{0}_{1}'.format(abs(hash(path)), basename)
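                    # e.g. two role-local 'custom.py' files load as distinct python modules
                    # such as '123456_custom' and '789012_custom' (illustrative hash values),
                    # so same-named filter/test files no longer shadow each other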
else:
full_name = basename
try:
module = self._load_module_source(full_name, path)
except Exception as e:
display.warning("Skipping plugin (%s), cannot load: %s" % (path, to_text(e)))
continue
self._module_cache[path] = module
found_in_cache = False
else:
module = self._module_cache[path]
self._load_config_defs(basename, module, path)
try:
obj = getattr(module, self.class_name)
except AttributeError as e:
display.warning("Skipping plugin (%s) as it seems to be invalid: %s" % (path, to_text(e)))
continue
if self.base_class:
# The import path is hardcoded and should be the right place,
# so we are not expecting an ImportError.
module = __import__(self.package, fromlist=[self.base_class])
# Check whether this obj has the required base class.
try:
plugin_class = getattr(module, self.base_class)
except AttributeError:
continue
if not issubclass(obj, plugin_class):
continue
self._display_plugin_load(self.class_name, basename, self._searched_paths, path, found_in_cache=found_in_cache, class_only=class_only)
if not class_only:
try:
obj = obj(*args, **kwargs)
except TypeError as e:
display.warning("Skipping plugin (%s) as it seems to be incomplete: %s" % (path, to_text(e)))
self._update_object(obj, basename, path, resolved=fqcn)
if self._plugin_instance_cache is not None and fqcn not in self._plugin_instance_cache:
# Use get_with_context to cache the plugin the first time we see it.
self.get_with_context(fqcn)[0]
yield obj
class Jinja2Loader(PluginLoader):
"""
PluginLoader optimized for Jinja2 plugins
The filter and test plugins are Jinja2 plugins encapsulated inside of our plugin format.
We need to do a few things differently in the base class because of file == plugin
assumptions and dedupe logic.
"""
def __init__(self, class_name, package, config, subdir, plugin_wrapper_type, aliases=None, required_base_class=None):
super(Jinja2Loader, self).__init__(class_name, package, config, subdir, aliases=aliases, required_base_class=required_base_class)
self._plugin_wrapper_type = plugin_wrapper_type
self._cached_non_collection_wrappers = {}
def _clear_caches(self):
super(Jinja2Loader, self)._clear_caches()
self._cached_non_collection_wrappers = {}
def find_plugin(self, name, mod_type='', ignore_deprecated=False, check_aliases=False, collection_list=None):
raise NotImplementedError('find_plugin is not supported on Jinja2Loader')
@property
def method_map_name(self):
return get_plugin_class(self.class_name) + 's'
def get_contained_plugins(self, collection, plugin_path, name):
plugins = []
full_name = '.'.join(['ansible_collections', collection, 'plugins', self.type, name])
try:
# use 'parent' loader class to find files, but cannot return this as it can contain multiple plugins per file
if plugin_path not in self._module_cache:
self._module_cache[plugin_path] = self._load_module_source(full_name, plugin_path)
module = self._module_cache[plugin_path]
obj = getattr(module, self.class_name)
except Exception as e:
raise KeyError('Failed to load %s for %s: %s' % (plugin_path, collection, to_native(e)))
plugin_impl = obj()
if plugin_impl is None:
raise KeyError('Could not find %s.%s' % (collection, name))
try:
method_map = getattr(plugin_impl, self.method_map_name)
plugin_map = method_map().items()
except Exception as e:
display.warning("Ignoring %s plugins in '%s' as it seems to be invalid: %r" % (self.type, to_text(plugin_path), e))
return plugins
for func_name, func in plugin_map:
fq_name = '.'.join((collection, func_name))
full = '.'.join((full_name, func_name))
plugin = self._plugin_wrapper_type(func)
if plugin in plugins:
continue
self._update_object(plugin, full, plugin_path, resolved=fq_name)
plugins.append(plugin)
return plugins
# FUTURE: now that the resulting plugins are closer, refactor base class method with some extra
# hooks so we can avoid all the duplicated plugin metadata logic, and also cache the collection results properly here
def get_with_context(self, name, *args, **kwargs):
# pop N/A kwargs to avoid passthrough to parent methods
kwargs.pop('class_only', False)
kwargs.pop('collection_list', None)
context = PluginLoadContext()
# avoid collection path for legacy
name = name.removeprefix('ansible.legacy.')
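        # e.g. a lookup of 'ansible.legacy.flatten' proceeds as 'flatten' so the
        # legacy/builtin wrapper cache below can satisfy it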
self._ensure_non_collection_wrappers(*args, **kwargs)
# check for stuff loaded via legacy/builtin paths first
if known_plugin := self._cached_non_collection_wrappers.get(name):
context.resolved = True
context.plugin_resolved_name = name
context.plugin_resolved_path = known_plugin._original_path
context.plugin_resolved_collection = 'ansible.builtin' if known_plugin.ansible_name.startswith('ansible.builtin.') else ''
context._resolved_fqcn = known_plugin.ansible_name
return get_with_context_result(known_plugin, context)
plugin = None
key, leaf_key = get_fqcr_and_name(name)
seen = set()
# follow the meta!
while True:
if key in seen:
raise AnsibleError('recursive collection redirect found for %r' % name, 0)
seen.add(key)
acr = AnsibleCollectionRef.try_parse_fqcr(key, self.type)
if not acr:
raise KeyError('invalid plugin name: {0}'.format(key))
try:
ts = _get_collection_metadata(acr.collection)
except ValueError as e:
# no collection
raise KeyError('Invalid plugin FQCN ({0}): {1}'.format(key, to_native(e)))
# TODO: implement cycle detection (unified across collection redir as well)
routing_entry = ts.get('plugin_routing', {}).get(self.type, {}).get(leaf_key, {})
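            # routing entries originate from the collection's meta/runtime.yml; a sketch
            # of the shape this code expects (hypothetical plugin names):
            #   plugin_routing:
            #     filter:
            #       old_filter:
            #         redirect: ns.col.new_filter
            #         deprecation:
            #           removal_version: 3.0.0
            #           warning_text: use ns.col.new_filter instead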
# check deprecations
deprecation_entry = routing_entry.get('deprecation')
if deprecation_entry:
warning_text = deprecation_entry.get('warning_text') or ''
removal_date = deprecation_entry.get('removal_date')
removal_version = deprecation_entry.get('removal_version')
warning_text = f'{self.type.title()} "{key}" has been deprecated.{" " if warning_text else ""}{warning_text}'
display.deprecated(warning_text, version=removal_version, date=removal_date, collection_name=acr.collection)
# check removal
tombstone_entry = routing_entry.get('tombstone')
if tombstone_entry:
warning_text = tombstone_entry.get('warning_text') or ''
removal_date = tombstone_entry.get('removal_date')
removal_version = tombstone_entry.get('removal_version')
warning_text = f'{self.type.title()} "{key}" has been removed.{" " if warning_text else ""}{warning_text}'
exc_msg = display.get_deprecation_message(warning_text, version=removal_version, date=removal_date,
collection_name=acr.collection, removed=True)
raise AnsiblePluginRemovedError(exc_msg)
# check redirects
redirect = routing_entry.get('redirect', None)
if redirect:
if not AnsibleCollectionRef.is_valid_fqcr(redirect):
raise AnsibleError(
f"Collection {acr.collection} contains invalid redirect for {acr.collection}.{acr.resource}: {redirect}. "
"Redirects must use fully qualified collection names."
)
next_key, leaf_key = get_fqcr_and_name(redirect, collection=acr.collection)
display.vvv('redirecting (type: {0}) {1}.{2} to {3}'.format(self.type, acr.collection, acr.resource, next_key))
key = next_key
else:
break
try:
pkg = import_module(acr.n_python_package_name)
except ImportError as e:
raise KeyError(to_native(e))
parent_prefix = acr.collection
if acr.subdirs:
parent_prefix = '{0}.{1}'.format(parent_prefix, acr.subdirs)
try:
for dummy, module_name, ispkg in pkgutil.iter_modules(pkg.__path__, prefix=parent_prefix + '.'):
if ispkg:
continue
try:
# use 'parent' loader class to find files, but cannot return this as it can contain
# multiple plugins per file
plugin_impl = super(Jinja2Loader, self).get_with_context(module_name, *args, **kwargs)
method_map = getattr(plugin_impl.object, self.method_map_name)
plugin_map = method_map().items()
except Exception as e:
display.warning(f"Skipping {self.type} plugins in {module_name}'; an error occurred while loading: {e}")
continue
for func_name, func in plugin_map:
fq_name = '.'.join((parent_prefix, func_name))
src_name = f"ansible_collections.{acr.collection}.plugins.{self.type}.{acr.subdirs}.{func_name}"
# TODO: load anyways into CACHE so we only match each at end of loop
                    # the files themselves should already be cached by the base class's caching of (python) modules
if key in (func_name, fq_name):
plugin = self._plugin_wrapper_type(func)
if plugin:
context = plugin_impl.plugin_load_context
self._update_object(plugin, src_name, plugin_impl.object._original_path, resolved=fq_name)
# FIXME: once we start caching these results, we'll be missing functions that would have loaded later
                        break  # go to next file as it can override if dupe (don't break both loops)
except AnsiblePluginRemovedError as apre:
raise AnsibleError(to_native(apre), 0, orig_exc=apre)
except (AnsibleError, KeyError):
raise
except Exception as ex:
display.warning('An unexpected error occurred during Jinja2 plugin loading: {0}'.format(to_native(ex)))
display.vvv('Unexpected error during Jinja2 plugin loading: {0}'.format(format_exc()))
raise AnsibleError(to_native(ex), 0, orig_exc=ex)
return get_with_context_result(plugin, context)
def all(self, *args, **kwargs):
kwargs.pop('_dedupe', None)
path_only = kwargs.pop('path_only', False)
class_only = kwargs.pop('class_only', False) # basically ignored for test/filters since they are functions
# Having both path_only and class_only is a coding bug
if path_only and class_only:
raise AnsibleError('Do not set both path_only and class_only when calling PluginLoader.all()')
self._ensure_non_collection_wrappers(*args, **kwargs)
if path_only:
yield from (w._original_path for w in self._cached_non_collection_wrappers.values())
else:
yield from (w for w in self._cached_non_collection_wrappers.values())
def _ensure_non_collection_wrappers(self, *args, **kwargs):
if self._cached_non_collection_wrappers:
return
# get plugins from files in configured paths (multiple in each)
for p_map in super(Jinja2Loader, self).all(*args, **kwargs):
is_builtin = p_map.ansible_name.startswith('ansible.builtin.')
# p_map is really object from file with class that holds multiple plugins
plugins_list = getattr(p_map, self.method_map_name)
try:
plugins = plugins_list()
except Exception as e:
display.vvvv("Skipping %s plugins in '%s' as it seems to be invalid: %r" % (self.type, to_text(p_map._original_path), e))
continue
for plugin_name in plugins.keys():
if '.' in plugin_name:
display.debug(f'{plugin_name} skipped in {p_map._original_path}; Jinja plugin short names may not contain "."')
continue
if plugin_name in _PLUGIN_FILTERS[self.package]:
display.debug("%s skipped due to a defined plugin filter" % plugin_name)
continue
# the plugin class returned by the loader may host multiple Jinja plugins, but we wrap each plugin in
# its own surrogate wrapper instance here to ease the bookkeeping...
wrapper = self._plugin_wrapper_type(plugins[plugin_name])
fqcn = plugin_name
collection = '.'.join(p_map.ansible_name.split('.')[:2]) if p_map.ansible_name.count('.') >= 2 else ''
if not plugin_name.startswith(collection):
fqcn = f"{collection}.{plugin_name}"
self._update_object(wrapper, plugin_name, p_map._original_path, resolved=fqcn)
target_names = {plugin_name, fqcn}
if is_builtin:
target_names.add(f'ansible.builtin.{plugin_name}')
for target_name in target_names:
if existing_plugin := self._cached_non_collection_wrappers.get(target_name):
                    display.debug(f'Jinja plugin {target_name} from {p_map._original_path} skipped; '
                                  f'shadowed by plugin from {existing_plugin._original_path}')
continue
self._cached_non_collection_wrappers[target_name] = wrapper
def get_fqcr_and_name(resource, collection='ansible.builtin'):
if '.' not in resource:
name = resource
fqcr = collection + '.' + resource
else:
name = resource.split('.')[-1]
fqcr = resource
return fqcr, name
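# illustrative results (not from the original source):
#   get_fqcr_and_name('flatten')                      -> ('ansible.builtin.flatten', 'flatten')
#   get_fqcr_and_name('community.general.json_query') -> ('community.general.json_query', 'json_query')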
def _load_plugin_filter():
filters = _PLUGIN_FILTERS
user_set = False
if C.PLUGIN_FILTERS_CFG is None:
filter_cfg = '/etc/ansible/plugin_filters.yml'
else:
filter_cfg = C.PLUGIN_FILTERS_CFG
user_set = True
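    # the filter file this function consumes is a small YAML document; a minimal
    # sketch of the only format recognized below (version '1.0'):
    #   filter_version: '1.0'
    #   module_rejectlist:
    #     - somemodule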
if os.path.exists(filter_cfg):
with open(filter_cfg, 'rb') as f:
try:
filter_data = from_yaml(f.read())
except Exception as e:
display.warning(u'The plugin filter file, {0} was not parsable.'
u' Skipping: {1}'.format(filter_cfg, to_text(e)))
return filters
try:
version = filter_data['filter_version']
except KeyError:
display.warning(u'The plugin filter file, {0} was invalid.'
u' Skipping.'.format(filter_cfg))
return filters
# Try to convert for people specifying version as a float instead of string
version = to_text(version)
version = version.strip()
# Modules and action plugins share the same reject list since the difference between the
# two isn't visible to the users
if version == u'1.0':
if 'module_blacklist' in filter_data:
display.deprecated("'module_blacklist' is being removed in favor of 'module_rejectlist'", version='2.18')
if 'module_rejectlist' not in filter_data:
filter_data['module_rejectlist'] = filter_data['module_blacklist']
del filter_data['module_blacklist']
try:
filters['ansible.modules'] = frozenset(filter_data['module_rejectlist'])
except TypeError:
display.warning(u'Unable to parse the plugin filter file {0} as'
u' module_rejectlist is not a list.'
u' Skipping.'.format(filter_cfg))
return filters
filters['ansible.plugins.action'] = filters['ansible.modules']
else:
display.warning(u'The plugin filter file, {0} was a version not recognized by this'
u' version of Ansible. Skipping.'.format(filter_cfg))
else:
if user_set:
display.warning(u'The plugin filter file, {0} does not exist.'
u' Skipping.'.format(filter_cfg))
    # special-case the stat module, as Ansible can run very few things if stat is rejected
if 'stat' in filters['ansible.modules']:
raise AnsibleError('The stat module was specified in the module reject list file, {0}, but'
' Ansible will not function without the stat module. Please remove stat'
' from the reject list.'.format(to_native(filter_cfg)))
return filters
# since we don't want the actual collection loader to understand metadata, we'll do it in an event handler
def _on_collection_load_handler(collection_name, collection_path):
display.vvvv(to_text('Loading collection {0} from {1}'.format(collection_name, collection_path)))
collection_meta = _get_collection_metadata(collection_name)
try:
if not _does_collection_support_ansible_version(collection_meta.get('requires_ansible', ''), ansible_version):
mismatch_behavior = C.config.get_config_value('COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH')
message = 'Collection {0} does not support Ansible version {1}'.format(collection_name, ansible_version)
if mismatch_behavior == 'warning':
display.warning(message)
elif mismatch_behavior == 'error':
raise AnsibleCollectionUnsupportedVersionError(message)
except AnsibleError:
raise
except Exception as ex:
display.warning('Error parsing collection metadata requires_ansible value from collection {0}: {1}'.format(collection_name, ex))
def _does_collection_support_ansible_version(requirement_string, ansible_version):
if not requirement_string:
return True
if not SpecifierSet:
display.warning('packaging Python module unavailable; unable to validate collection Ansible version requirements')
return True
ss = SpecifierSet(requirement_string)
# ignore prerelease/postrelease/beta/dev flags for simplicity
base_ansible_version = Version(ansible_version).base_version
return ss.contains(base_ansible_version)
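# illustrative checks (not from the original source):
#   _does_collection_support_ansible_version('>=2.14.0,<2.17', '2.15.5') -> True
#   _does_collection_support_ansible_version('<2.10', '2.15.5')          -> False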
def _configure_collection_loader(prefix_collections_path=None):
if AnsibleCollectionConfig.collection_finder:
# this must be a Python warning so that it can be filtered out by the import sanity test
warnings.warn('AnsibleCollectionFinder has already been configured')
return
if prefix_collections_path is None:
prefix_collections_path = []
paths = list(prefix_collections_path) + C.COLLECTIONS_PATHS
finder = _AnsibleCollectionFinder(paths, C.COLLECTIONS_SCAN_SYS_PATH)
finder._install()
# this should succeed now
AnsibleCollectionConfig.on_collection_load += _on_collection_load_handler
def init_plugin_loader(prefix_collections_path=None):
"""Initialize the plugin filters and the collection loaders
This method must be called to configure and insert the collection python loaders
into ``sys.meta_path`` and ``sys.path_hooks``.
This method is only called in ``CLI.run`` after CLI args have been parsed, so that
    instantiation of the collection finder can utilize parsed CLI args, and to avoid
    causing side effects at import time.
"""
_load_plugin_filter()
_configure_collection_loader(prefix_collections_path)
# TODO: Evaluate making these class instantiations lazy, but keep them in the global scope
# doc fragments first
fragment_loader = PluginLoader(
'ModuleDocFragment',
'ansible.plugins.doc_fragments',
C.DOC_FRAGMENT_PLUGIN_PATH,
'doc_fragments',
)
action_loader = PluginLoader(
'ActionModule',
'ansible.plugins.action',
C.DEFAULT_ACTION_PLUGIN_PATH,
'action_plugins',
required_base_class='ActionBase',
)
cache_loader = PluginLoader(
'CacheModule',
'ansible.plugins.cache',
C.DEFAULT_CACHE_PLUGIN_PATH,
'cache_plugins',
)
callback_loader = PluginLoader(
'CallbackModule',
'ansible.plugins.callback',
C.DEFAULT_CALLBACK_PLUGIN_PATH,
'callback_plugins',
)
connection_loader = PluginLoader(
'Connection',
'ansible.plugins.connection',
C.DEFAULT_CONNECTION_PLUGIN_PATH,
'connection_plugins',
aliases={'paramiko': 'paramiko_ssh'},
required_base_class='ConnectionBase',
)
shell_loader = PluginLoader(
'ShellModule',
'ansible.plugins.shell',
'shell_plugins',
'shell_plugins',
)
module_loader = PluginLoader(
'',
'ansible.modules',
C.DEFAULT_MODULE_PATH,
'library',
)
module_utils_loader = PluginLoader(
'',
'ansible.module_utils',
C.DEFAULT_MODULE_UTILS_PATH,
'module_utils',
)
# NB: dedicated loader is currently necessary because PS module_utils expects "with subdir" lookup where
# regular module_utils doesn't. This can be revisited once we have more granular loaders.
ps_module_utils_loader = PluginLoader(
'',
'ansible.module_utils',
C.DEFAULT_MODULE_UTILS_PATH,
'module_utils',
)
lookup_loader = PluginLoader(
'LookupModule',
'ansible.plugins.lookup',
C.DEFAULT_LOOKUP_PLUGIN_PATH,
'lookup_plugins',
required_base_class='LookupBase',
)
filter_loader = Jinja2Loader(
'FilterModule',
'ansible.plugins.filter',
C.DEFAULT_FILTER_PLUGIN_PATH,
'filter_plugins',
AnsibleJinja2Filter
)
test_loader = Jinja2Loader(
'TestModule',
'ansible.plugins.test',
C.DEFAULT_TEST_PLUGIN_PATH,
'test_plugins',
AnsibleJinja2Test
)
strategy_loader = PluginLoader(
'StrategyModule',
'ansible.plugins.strategy',
C.DEFAULT_STRATEGY_PLUGIN_PATH,
'strategy_plugins',
required_base_class='StrategyBase',
)
terminal_loader = PluginLoader(
'TerminalModule',
'ansible.plugins.terminal',
C.DEFAULT_TERMINAL_PLUGIN_PATH,
'terminal_plugins',
required_base_class='TerminalBase'
)
vars_loader = PluginLoader(
'VarsModule',
'ansible.plugins.vars',
C.DEFAULT_VARS_PLUGIN_PATH,
'vars_plugins',
)
cliconf_loader = PluginLoader(
'Cliconf',
'ansible.plugins.cliconf',
C.DEFAULT_CLICONF_PLUGIN_PATH,
'cliconf_plugins',
required_base_class='CliconfBase'
)
netconf_loader = PluginLoader(
'Netconf',
'ansible.plugins.netconf',
C.DEFAULT_NETCONF_PLUGIN_PATH,
'netconf_plugins',
required_base_class='NetconfBase'
)
inventory_loader = PluginLoader(
'InventoryModule',
'ansible.plugins.inventory',
C.DEFAULT_INVENTORY_PLUGIN_PATH,
'inventory_plugins'
)
httpapi_loader = PluginLoader(
'HttpApi',
'ansible.plugins.httpapi',
C.DEFAULT_HTTPAPI_PLUGIN_PATH,
'httpapi_plugins',
required_base_class='HttpApiBase',
)
become_loader = PluginLoader(
'BecomeModule',
'ansible.plugins.become',
C.BECOME_PLUGIN_PATH,
'become_plugins'
)
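# Illustrative lookups against the loaders above (a hedged sketch; real callers
# pass additional context, this only shows the name-based resolution):
#   shell = shell_loader.get('sh')
#   become = become_loader.get('sudo')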
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,897 |
Improved Jinja plugin caching breaks loading multiple custom filter plugins with same name
|
### Summary
Due to the recent issues with Ansible Galaxy collections in the default AWX execution environment (see https://github.com/ansible/awx/issues/14495#issuecomment-1746383397), a new AWX EE was shipped which contains Ansible Core `v2.15.5rc1`, and we upgraded to it.
This unfortunately broke various playbooks on our side that use custom Jinja filters stored within individual role directories. As an example, here is the error message for a dummy role which uses an existing filter named `hello`:
```
fatal: [localhost]: FAILED! => {
"msg": "template error while templating string: Could not load \"hello\": 'hello'. String: {{ \"Ansible\" | hello(\"!\") }}. Could not load \"hello\": 'hello'"
}
```
After a bit of digging and testing various constellations, I noticed that the issue has been introduced between `v2.15.4` and `v2.15.5rc1`, specifically by this PR: https://github.com/ansible/ansible/pull/79781
The issue only appears under the following conditions:
- At least two roles exist, each with their own `filter_plugins` directory
- At least two roles use the same name for the Python module which implements the custom filter(s), e.g. `custom.py`
- At least one role providing a same-named filter plugin module is executed BEFORE the role whose custom filter is actually used
This then results in an error when the second role runs, stating that the filter could not be loaded. As a workaround, the issue can be prevented by giving each filter plugin module a filename that is unique across **all** roles, e.g. `custom1.py` and `custom2.py`.
Last but not least, I also reproduced this issue when running the latest `develop` branch and verified that reverting the [merge commit for #79781](https://github.com/ansible/ansible/commit/dd79c49a4de3a6dd5bd9d31503bd7846475e8e57) fixes the issue, so it seems like this updated cache routine ended up breaking this functionality.
I created a [reproducer repository on GitHub](https://github.com/ppmathis/ansible-plugin-issue) for further reference.
### Issue Type
Bug Report
### Component Name
loader
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.5rc1]
config file = None
configured module search path = ['/runner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /runner/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.17 (main, Aug 9 2023, 00:00:00) [GCC 11.4.1 20230605 (Red Hat 11.4.1-2)] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Version information and configuration dump based on `quay.io/ansible/awx-ee:latest` with digest `921344c370b8844de83f693773853fab2f754ae738f6dee4ee5e101d8ee760eb` (ships `v2.15.5rc1` as of today), but the issue was also reproduced with the current `develop` branch of Ansible Core.
Other details such as the OS do not really matter; it's an issue within the plugin loader of Ansible Core and can easily be reproduced anywhere, including blank Python container images.
### Steps to Reproduce
I created a reproducer repository at https://github.com/ppmathis/ansible-plugin-issue which has a minimal example for triggering this issue with `v2.15.5rc1`. Alternatively, you can reproduce this structure yourself:
1. Create a role `first-role` with a custom filter plugin module named `custom.py` and write any custom filter. Add a task file which uses this filter somehow. In my reproducer repository, I called the filter `goodbye`.
2. Create a second role `second-role` with a custom filter plugin module which is also named `custom.py` and write another filter. Add a task file which uses this filter somehow. In my reproducer repository, I called the filter `hello`.
3. Create a new playbook which includes both roles.
4. Run this playbook using `ansible-playbook` with no specific flags or options.
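To make the collision concrete, here is a minimal sketch of the two same-named plugin modules (illustrative bodies; the filter names `goodbye` and `hello` match the reproducer repository, everything else is assumed):
```python
# roles/first-role/filter_plugins/custom.py
class FilterModule(object):
    def filters(self):
        # used by first-role, e.g. {{ "Ansible" | goodbye("!") }}
        return {'goodbye': lambda name, suffix='': 'Goodbye %s%s' % (name, suffix)}

# roles/second-role/filter_plugins/custom.py  (same filename, different role)
class FilterModule(object):
    def filters(self):
        # used by second-role, e.g. {{ "Ansible" | hello("!") }}
        return {'hello': lambda name, suffix='': 'Hello %s%s' % (name, suffix)}
```
With `v2.15.5rc1`, whichever `custom.py` happens to be loaded first wins the module cache, so the second role's `hello` filter can no longer be found.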
### Expected Results
Both roles should be able to use the custom filters without any issue, even when the respective Python modules have the same filename.
### Actual Results
```console
ansible-playbook [core 2.15.5rc1]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /app/venv/lib/python3.12/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /app/venv/bin/ansible-playbook
python version = 3.12.0 (main, Oct 3 2023, 01:48:15) [GCC 12.2.0] (/app/venv/bin/python)
jinja version = 3.1.2
libyaml = True
No config file found; using defaults
setting up inventory plugins
Loading collection ansible.builtin from
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
Loading callback plugin default of type stdout, v2.0 from /app/venv/lib/python3.12/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: site.yml ***************************************************************************************************************************************************************************************************************************************************
Positional arguments: site.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/etc/ansible/hosts',)
forks: 5
1 plays in site.yml
PLAY [localhost] *****************************************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ***********************************************************************************************************************************************************************************************************************************************
task path: /app/site.yml:2
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726 `" && echo ansible-tmp-1696435231.9647672-10-91811106198726="` echo /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726 `" ) && sleep 0'
Using module file /app/venv/lib/python3.12/site-packages/ansible/modules/setup.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-1gjwhoirf/tmprgv1l_0w TO /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726/ /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/app/venv/bin/python /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
TASK [first-role : ansible.builtin.debug] ****************************************************************************************************************************************************************************************************************************
task path: /app/roles/first-role/tasks/main.yml:2
ok: [localhost] => {
"msg": "Goodbye Ansible!"
}
TASK [second-role : ansible.builtin.debug] ***************************************************************************************************************************************************************************************************************************
task path: /app/roles/second-role/tasks/main.yml:2
fatal: [localhost]: FAILED! => {
"msg": "template error while templating string: Could not load \"hello\": 'hello'. String: {{ \"Ansible\" | hello(\"!\") }}. Could not load \"hello\": 'hello'"
}
PLAY RECAP ***********************************************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81897
|
https://github.com/ansible/ansible/pull/82002
|
e933d9d8a6155478ce99518d111220e680201ca2
|
b4566c18b3b0640d62c52e5ab43a4b7d64a9ddfc
| 2023-10-04T16:35:38Z |
python
| 2023-10-20T23:00:41Z |
test/integration/targets/plugin_loader/file_collision/play.yml
| |
task path: /app/roles/first-role/tasks/main.yml:2
ok: [localhost] => {
"msg": "Goodbye Ansible!"
}
TASK [second-role : ansible.builtin.debug] ***************************************************************************************************************************************************************************************************************************
task path: /app/roles/second-role/tasks/main.yml:2
fatal: [localhost]: FAILED! => {
"msg": "template error while templating string: Could not load \"hello\": 'hello'. String: {{ \"Ansible\" | hello(\"!\") }}. Could not load \"hello\": 'hello'"
}
PLAY RECAP ***********************************************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81897
|
https://github.com/ansible/ansible/pull/82002
|
e933d9d8a6155478ce99518d111220e680201ca2
|
b4566c18b3b0640d62c52e5ab43a4b7d64a9ddfc
| 2023-10-04T16:35:38Z |
python
| 2023-10-20T23:00:41Z |
test/integration/targets/plugin_loader/file_collision/roles/r1/filter_plugins/filter3.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,897 |
Improved Jinja plugin caching breaks loading multiple custom filter plugins with same name
|
### Summary
Due to the recent issues with Ansible Galaxy collections in the default AWX execution environment (see https://github.com/ansible/awx/issues/14495#issuecomment-1746383397), a new AWX EE has been shipped which contains Ansible Core `v2.15.5rc1`, to which we also upgraded.
This unfortunately ended up breaking various playbooks on our side, which made use of custom Jinja filters stored within individual role directories. As an example, here is the error message for a dummy role which uses an existing filter named `hello`:
```
fatal: [localhost]: FAILED! => {
"msg": "template error while templating string: Could not load \"hello\": 'hello'. String: {{ \"Ansible\" | hello(\"!\") }}. Could not load \"hello\": 'hello'"
}
```
After a bit of digging and testing various constellations, I noticed that the issue has been introduced between `v2.15.4` and `v2.15.5rc1`, specifically by this PR: https://github.com/ansible/ansible/pull/79781
The issue only appears under the following conditions:
- At least two roles exist, each with their own `filter_plugins` directory
- At least two roles use the same name for the Python module which implements the custom filter(s), e.g. `custom.py`
- At least one role with an identically named custom filter plugin module is executed BEFORE the role which uses the failing custom filter
This will then result in an error message when running the second role, stating that the filter could not be loaded. As a workaround, the issue can be prevented by giving each filter plugin module a unique filename (unique across **all** roles), e.g. `custom1.py` and `custom2.py`.
Last but not least, I also reproduced this issue when running the latest `devel` branch and verified that reverting the [merge commit for #79781](https://github.com/ansible/ansible/commit/dd79c49a4de3a6dd5bd9d31503bd7846475e8e57) fixes the issue, so it seems like this updated cache routine ended up breaking this functionality.
I created a [reproducer repository on GitHub](https://github.com/ppmathis/ansible-plugin-issue) for further reference.
### Issue Type
Bug Report
### Component Name
loader
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.5rc1]
config file = None
configured module search path = ['/runner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /runner/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.17 (main, Aug 9 2023, 00:00:00) [GCC 11.4.1 20230605 (Red Hat 11.4.1-2)] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Version information and configuration dump based on `quay.io/ansible/awx-ee:latest` with digest `921344c370b8844de83f693773853fab2f754ae738f6dee4ee5e101d8ee760eb` (ships `v2.15.5rc1` as of today), but issue was also reproduced with current `develop` branch from Ansible Core.
Other details such as the OS version do not really matter; it's an issue within the plugin loader of Ansible Core and can easily be reproduced anywhere, including in blank Python container images.
### Steps to Reproduce
I created a reproducer repository at https://github.com/ppmathis/ansible-plugin-issue which has a minimal example for triggering this issue with `v2.15.5rc1`. Alternatively, you can reproduce this structure yourself:
1. Create a role `first-role` with a custom filter plugin module named `custom.py` and write any custom filter. Add a task file which uses this filter somehow. In my reproducer repository, I called the filter `goodbye`.
2. Create a second role `second-role` with a custom filter plugin module which is also named `custom.py`, containing another filter. Add a task file which uses this filter somehow. In my reproducer repository, I called the filter `hello` (see the sketch after this list).
3. Create a new playbook which includes both roles.
4. Run this playbook using `ansible-playbook` with no specific flags or options.
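The second role's module is structurally identical; only the basename collision matters. A sketch (filter name from the reproducer, implementation illustrative):
```python
# roles/second-role/filter_plugins/custom.py -- deliberately the same basename
# as first-role's plugin module.


def hello(name, suffix='!'):
    """Build a hello greeting, e.g. 'Hello Ansible!'."""
    return 'Hello %s%s' % (name, suffix)


class FilterModule(object):
    def filters(self):
        # 'hello' is the filter that fails to load once the loader cache has
        # already resolved a module named 'custom' from first-role.
        return {'hello': hello}
```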
### Expected Results
Both roles should be able to use the custom filters without any issue, even when the respective Python modules have the same filename.
### Actual Results
```console
ansible-playbook [core 2.15.5rc1]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /app/venv/lib/python3.12/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /app/venv/bin/ansible-playbook
python version = 3.12.0 (main, Oct 3 2023, 01:48:15) [GCC 12.2.0] (/app/venv/bin/python)
jinja version = 3.1.2
libyaml = True
No config file found; using defaults
setting up inventory plugins
Loading collection ansible.builtin from
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
Loading callback plugin default of type stdout, v2.0 from /app/venv/lib/python3.12/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: site.yml ***************************************************************************************************************************************************************************************************************************************************
Positional arguments: site.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/etc/ansible/hosts',)
forks: 5
1 plays in site.yml
PLAY [localhost] *****************************************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ***********************************************************************************************************************************************************************************************************************************************
task path: /app/site.yml:2
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726 `" && echo ansible-tmp-1696435231.9647672-10-91811106198726="` echo /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726 `" ) && sleep 0'
Using module file /app/venv/lib/python3.12/site-packages/ansible/modules/setup.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-1gjwhoirf/tmprgv1l_0w TO /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726/ /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/app/venv/bin/python /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
TASK [first-role : ansible.builtin.debug] ****************************************************************************************************************************************************************************************************************************
task path: /app/roles/first-role/tasks/main.yml:2
ok: [localhost] => {
"msg": "Goodbye Ansible!"
}
TASK [second-role : ansible.builtin.debug] ***************************************************************************************************************************************************************************************************************************
task path: /app/roles/second-role/tasks/main.yml:2
fatal: [localhost]: FAILED! => {
"msg": "template error while templating string: Could not load \"hello\": 'hello'. String: {{ \"Ansible\" | hello(\"!\") }}. Could not load \"hello\": 'hello'"
}
PLAY RECAP ***********************************************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81897
|
https://github.com/ansible/ansible/pull/82002
|
e933d9d8a6155478ce99518d111220e680201ca2
|
b4566c18b3b0640d62c52e5ab43a4b7d64a9ddfc
| 2023-10-04T16:35:38Z |
python
| 2023-10-20T23:00:41Z |
test/integration/targets/plugin_loader/file_collision/roles/r2/filter_plugins/custom.py
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,897 |
Improved Jinja plugin caching breaks loading multiple custom filter plugins with same name
|
### Summary
Due to the recent issues with Ansible Galaxy collections in the default AWX execution environment (see https://github.com/ansible/awx/issues/14495#issuecomment-1746383397), a new AWX EE has been shipped which contains Ansible Core `v2.15.5rc1`, to which we also upgraded.
This unfortunately ended up breaking various playbooks on our side, which made use of custom Jinja filters stored within individual role directories. As an example, here is the error message for a dummy role which uses an existing filter named `hello`:
```
fatal: [localhost]: FAILED! => {
"msg": "template error while templating string: Could not load \"hello\": 'hello'. String: {{ \"Ansible\" | hello(\"!\") }}. Could not load \"hello\": 'hello'"
}
```
After a bit of digging and testing various constellations, I noticed that the issue has been introduced between `v2.15.4` and `v2.15.5rc1`, specifically by this PR: https://github.com/ansible/ansible/pull/79781
The issue only appears under the following conditions:
- At least two roles exist, each with their own `filter_plugins` directory
- At least two roles use the same name for the Python module which implements the custom filter(s), e.g. `custom.py`
- At least one role with an identically named custom filter plugin module is executed BEFORE the role which uses the failing custom filter
This will then result in an error message when running the second role, stating that the filter could not be loaded. As a workaround, the issue can be prevented by giving each filter plugin module a unique filename (unique across **all** roles), e.g. `custom1.py` and `custom2.py`.
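To make the workaround concrete, the modules keep their contents and only their basenames change (names follow the suggestion above; the code is a sketch, with both files shown inline):
```python
# --- roles/first-role/filter_plugins/custom1.py (renamed from custom.py) ---
class FilterModule(object):
    def filters(self):
        return {'goodbye': lambda name, suffix='!': 'Goodbye %s%s' % (name, suffix)}


# --- roles/second-role/filter_plugins/custom2.py (renamed from custom.py) ---
class FilterModule(object):  # a separate file in practice; shown inline here
    def filters(self):
        return {'hello': lambda name, suffix='!': 'Hello %s%s' % (name, suffix)}
```
With globally unique basenames, the loader cache no longer resolves the wrong module file and both filters load.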
Last but not least, I also reproduced this issue when running the latest `devel` branch and verified that reverting the [merge commit for #79781](https://github.com/ansible/ansible/commit/dd79c49a4de3a6dd5bd9d31503bd7846475e8e57) fixes the issue, so it seems like this updated cache routine ended up breaking this functionality.
I created a [reproducer repository on GitHub](https://github.com/ppmathis/ansible-plugin-issue) for further reference.
### Issue Type
Bug Report
### Component Name
loader
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.5rc1]
config file = None
configured module search path = ['/runner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /runner/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.17 (main, Aug 9 2023, 00:00:00) [GCC 11.4.1 20230605 (Red Hat 11.4.1-2)] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Version information and configuration dump based on `quay.io/ansible/awx-ee:latest` with digest `921344c370b8844de83f693773853fab2f754ae738f6dee4ee5e101d8ee760eb` (ships `v2.15.5rc1` as of today), but issue was also reproduced with current `develop` branch from Ansible Core.
Other details such as the OS version do not really matter; it's an issue within the plugin loader of Ansible Core and can easily be reproduced anywhere, including in blank Python container images.
### Steps to Reproduce
I created a reproducer repository at https://github.com/ppmathis/ansible-plugin-issue which has a minimal example for triggering this issue with `v2.15.5rc1`. Alternatively, you can reproduce this structure yourself:
1. Create a role `first-role` with a custom filter plugin module named `custom.py` and write any custom filter. Add a task file which uses this filter somehow. In my reproducer repository, I called the filter `goodbye`.
2. Create a second role `second-role` with a custom filter plugin module which is also named `custom.py` and write another filter. Add a task file which uses this filter somehow. In my reproducer repository, I called the filter `hello`.
3. Create a new playbook which includes both roles.
4. Run this playbook using `ansible-playbook` with no specific flags or options.
### Expected Results
Both roles should be able to use the custom filters without any issue, even when the respective Python modules have the same filename.
### Actual Results
```console
ansible-playbook [core 2.15.5rc1]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /app/venv/lib/python3.12/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /app/venv/bin/ansible-playbook
python version = 3.12.0 (main, Oct 3 2023, 01:48:15) [GCC 12.2.0] (/app/venv/bin/python)
jinja version = 3.1.2
libyaml = True
No config file found; using defaults
setting up inventory plugins
Loading collection ansible.builtin from
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
Loading callback plugin default of type stdout, v2.0 from /app/venv/lib/python3.12/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: site.yml ***************************************************************************************************************************************************************************************************************************************************
Positional arguments: site.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/etc/ansible/hosts',)
forks: 5
1 plays in site.yml
PLAY [localhost] *****************************************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ***********************************************************************************************************************************************************************************************************************************************
task path: /app/site.yml:2
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726 `" && echo ansible-tmp-1696435231.9647672-10-91811106198726="` echo /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726 `" ) && sleep 0'
Using module file /app/venv/lib/python3.12/site-packages/ansible/modules/setup.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-1gjwhoirf/tmprgv1l_0w TO /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726/ /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/app/venv/bin/python /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
TASK [first-role : ansible.builtin.debug] ****************************************************************************************************************************************************************************************************************************
task path: /app/roles/first-role/tasks/main.yml:2
ok: [localhost] => {
"msg": "Goodbye Ansible!"
}
TASK [second-role : ansible.builtin.debug] ***************************************************************************************************************************************************************************************************************************
task path: /app/roles/second-role/tasks/main.yml:2
fatal: [localhost]: FAILED! => {
"msg": "template error while templating string: Could not load \"hello\": 'hello'. String: {{ \"Ansible\" | hello(\"!\") }}. Could not load \"hello\": 'hello'"
}
PLAY RECAP ***********************************************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81897
|
https://github.com/ansible/ansible/pull/82002
|
e933d9d8a6155478ce99518d111220e680201ca2
|
b4566c18b3b0640d62c52e5ab43a4b7d64a9ddfc
| 2023-10-04T16:35:38Z |
python
| 2023-10-20T23:00:41Z |
test/integration/targets/plugin_loader/file_collision/roles/r2/filter_plugins/filter2.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,897 |
Improved Jinja plugin caching breaks loading multiple custom filter plugins with same name
|
### Summary
Due to the recent issues with Ansible Galaxy collections in the default AWX execution environment (see https://github.com/ansible/awx/issues/14495#issuecomment-1746383397), a new AWX EE has been shipped which contains Ansible Core `v2.15.5rc1`, to which we also upgraded.
This unfortunately ended up breaking various playbooks on our side, which made use of custom Jinja filters stored within individual role directories. As an example, here is the error message for a dummy role which uses an existing filter named `hello`:
```
fatal: [localhost]: FAILED! => {
"msg": "template error while templating string: Could not load \"hello\": 'hello'. String: {{ \"Ansible\" | hello(\"!\") }}. Could not load \"hello\": 'hello'"
}
```
After a bit of digging and testing various constellations, I noticed that the issue has been introduced between `v2.15.4` and `v2.15.5rc1`, specifically by this PR: https://github.com/ansible/ansible/pull/79781
The issue only appears under the following conditions:
- At least two roles exist, each with their own `filter_plugins` directory
- At least two roles use the same name for the Python module which implements the custom filter(s), e.g. `custom.py`
- At least one role with an identically named custom filter plugin module is executed BEFORE the role which uses the failing custom filter
This will then result in an error message when running the second role, stating that the filter could not be loaded. As a workaround, the issue can be prevented by giving each filter plugin module a unique filename (unique across **all** roles), e.g. `custom1.py` and `custom2.py`.
Last but not least, I also reproduced this issue when running the latest `devel` branch and verified that reverting the [merge commit for #79781](https://github.com/ansible/ansible/commit/dd79c49a4de3a6dd5bd9d31503bd7846475e8e57) fixes the issue, so it seems like this updated cache routine ended up breaking this functionality.
I created a [reproducer repository on GitHub](https://github.com/ppmathis/ansible-plugin-issue) for further reference.
### Issue Type
Bug Report
### Component Name
loader
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.5rc1]
config file = None
configured module search path = ['/runner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /runner/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.17 (main, Aug 9 2023, 00:00:00) [GCC 11.4.1 20230605 (Red Hat 11.4.1-2)] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Version information and configuration dump based on `quay.io/ansible/awx-ee:latest` with digest `921344c370b8844de83f693773853fab2f754ae738f6dee4ee5e101d8ee760eb` (ships `v2.15.5rc1` as of today), but issue was also reproduced with current `develop` branch from Ansible Core.
Other details such as the OS version do not really matter; it's an issue within the plugin loader of Ansible Core and can easily be reproduced anywhere, including in blank Python container images.
### Steps to Reproduce
I created a reproducer repository at https://github.com/ppmathis/ansible-plugin-issue which has a minimal example for triggering this issue with `v2.15.5rc1`. Alternatively, you can reproduce this structure yourself:
1. Create a role `first-role` with a custom filter plugin module named `custom.py` and write any custom filter. Add a task file which uses this filter somehow. In my reproducer repository, I called the filter `goodbye`.
2. Create a second role `second-role` with a custom filter plugin module which is also named `custom.py` and write another filter. Add a task file which uses this filter somehow. In my reproducer repository, I called the filter `hello`.
3. Create a new playbook which includes both roles.
4. Run this playbook using `ansible-playbook` with no specific flags or options.
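Notably, plain Jinja2 handles two filters registered side by side without trouble, which is consistent with the failure living in Ansible's plugin loader rather than in Jinja itself. A quick standalone check (plain `jinja2`, outside Ansible; illustrative only):
```python
from jinja2 import Environment

env = Environment()
env.filters['goodbye'] = lambda name, suffix='!': 'Goodbye %s%s' % (name, suffix)
env.filters['hello'] = lambda name, suffix='!': 'Hello %s%s' % (name, suffix)

# Both filters render fine in one environment.
print(env.from_string('{{ "Ansible" | goodbye("!") }}').render())  # Goodbye Ansible!
print(env.from_string('{{ "Ansible" | hello("!") }}').render())    # Hello Ansible!
```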
### Expected Results
Both roles should be able to use the custom filters without any issue, even when the respective Python modules have the same filename.
### Actual Results
```console
ansible-playbook [core 2.15.5rc1]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /app/venv/lib/python3.12/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /app/venv/bin/ansible-playbook
python version = 3.12.0 (main, Oct 3 2023, 01:48:15) [GCC 12.2.0] (/app/venv/bin/python)
jinja version = 3.1.2
libyaml = True
No config file found; using defaults
setting up inventory plugins
Loading collection ansible.builtin from
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
Loading callback plugin default of type stdout, v2.0 from /app/venv/lib/python3.12/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: site.yml ***************************************************************************************************************************************************************************************************************************************************
Positional arguments: site.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/etc/ansible/hosts',)
forks: 5
1 plays in site.yml
PLAY [localhost] *****************************************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ***********************************************************************************************************************************************************************************************************************************************
task path: /app/site.yml:2
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726 `" && echo ansible-tmp-1696435231.9647672-10-91811106198726="` echo /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726 `" ) && sleep 0'
Using module file /app/venv/lib/python3.12/site-packages/ansible/modules/setup.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-1gjwhoirf/tmprgv1l_0w TO /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726/ /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/app/venv/bin/python /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1696435231.9647672-10-91811106198726/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
TASK [first-role : ansible.builtin.debug] ****************************************************************************************************************************************************************************************************************************
task path: /app/roles/first-role/tasks/main.yml:2
ok: [localhost] => {
"msg": "Goodbye Ansible!"
}
TASK [second-role : ansible.builtin.debug] ***************************************************************************************************************************************************************************************************************************
task path: /app/roles/second-role/tasks/main.yml:2
fatal: [localhost]: FAILED! => {
"msg": "template error while templating string: Could not load \"hello\": 'hello'. String: {{ \"Ansible\" | hello(\"!\") }}. Could not load \"hello\": 'hello'"
}
PLAY RECAP ***********************************************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81897
|
https://github.com/ansible/ansible/pull/82002
|
e933d9d8a6155478ce99518d111220e680201ca2
|
b4566c18b3b0640d62c52e5ab43a4b7d64a9ddfc
| 2023-10-04T16:35:38Z |
python
| 2023-10-20T23:00:41Z |
test/integration/targets/plugin_loader/runme.sh
|
#!/usr/bin/env bash
set -ux
cleanup() {
unlink normal/library/_symlink.py
}
pushd normal/library
ln -s _underscore.py _symlink.py
popd
trap 'cleanup' EXIT
# check normal execution
for myplay in normal/*.yml
do
ansible-playbook "${myplay}" -i ../../inventory -vvv "$@"
if test $? != 0 ; then
echo "### Failed to run ${myplay} normally"
exit 1
fi
done
# check overrides
for myplay in override/*.yml
do
ansible-playbook "${myplay}" -i ../../inventory -vvv "$@"
if test $? != 0 ; then
echo "### Failed to run ${myplay} override"
exit 1
fi
done
# test config loading
ansible-playbook use_coll_name.yml -i ../../inventory -e 'ansible_connection=ansible.builtin.ssh' "$@"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,701 |
ansible-pull does not utilize the --become-password-file
|
### Summary
When I try to use the --become-password-file flag in the command
```
> ansible-pull -i "$(hostname)," -U {repo_link} local.yml
```
I get the following error:
```
> module_stderr: |-
sudo: a password is required
module_stdout: ''
msg: |-
MODULE FAILURE
See stdout/stderr for the exact error
rc: 1
```
when attempting to use the playbook:
```
- hosts: all
tasks:
- shell:
cmd: uptime
register: uptime_res
become: true
- debug:
var: uptime_res
```
I believe this is a bug because [pull.py](https://github.com/ansible/ansible/blob/ad79c1e0d032eb5dda216055ffc393043de4b380/lib/ansible/cli/pull.py#L278) does not seem to utilize the [ask_passwords](https://github.com/ansible/ansible/blob/650befed37eadcaea735673638d5475fa957ca7e/lib/ansible/cli/__init__.py#L328) function, nor does it build the --become-password-file flag into the [cmd](https://github.com/ansible/ansible/blob/ad79c1e0d032eb5dda216055ffc393043de4b380/lib/ansible/cli/pull.py#L278) variable.
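For comparison, pull.py already forwards vault password files to the spawned ansible-playbook process, so a fix could follow the same pattern for the become password file. A minimal sketch, assuming the parsed option is exposed as `become_password_file` (that dest name is my assumption, not quoted from the source):
```python
# Sketch: append password-file flags to the child ansible-playbook command,
# mirroring pull.py's existing --vault-password-file handling.
import shlex


def append_secret_file_flags(cmd, cliargs):
    """Return cmd with vault/become password-file flags appended."""
    for vault_password_file in cliargs.get('vault_password_files') or []:
        cmd += ' --vault-password-file=%s' % shlex.quote(vault_password_file)
    if cliargs.get('become_password_file'):  # assumed option name
        cmd += ' --become-password-file=%s' % shlex.quote(cliargs['become_password_file'])
    return cmd


print(append_secret_file_flags('ansible-playbook local.yml',
                               {'become_password_file': '/run/secrets/become'}))
```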
### Issue Type
Bug Report
### Component Name
pull.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.6]
config file = /Users/{user}/ansible.cfg
configured module search path = ['/Users/{user}/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/ansible
ansible collection location = /Users/{user}/.ansible/collections:/usr/share/ansible/collections
executable location = /Library/Frameworks/Python.framework/Versions/3.8/bin/ansible
python version = 3.8.0 (v3.8.0:fa919fdf25, Oct 14 2019, 10:23:27) [Clang 6.0 (clang-600.0.57)]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
DEFAULT_FORKS(/Users/{user}/ansible.cfg) = 100
DEFAULT_LOAD_CALLBACK_PLUGINS(/Users/{user}/ansible.cfg) = True
DEFAULT_STDOUT_CALLBACK(/Users/{user}/ansible.cfg) = yaml
HOST_KEY_CHECKING(/Users/{user}/ansible.cfg) = False
BECOME:
======
CACHE:
=====
CALLBACK:
========
```
### OS / Environment
macOS Monterey 12.5.1
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- hosts: all
tasks:
- shell:
cmd: uptime
register: uptime_res
become: true
- debug:
var: uptime_res
```
run the command: ```ansible-pull -i "$(hostname)," -U {repo_link} local.yml```
### Expected Results
I expect a successful playbook with an output of the current uptime.
### Actual Results
```console
TASK [shell] *******************************************************************
fatal: [{HOSTNAME}]: FAILED! => changed=false
module_stderr: |-
sudo: a password is required
module_stdout: ''
msg: |-
MODULE FAILURE
See stdout/stderr for the exact error
rc: 1
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78701
|
https://github.com/ansible/ansible/pull/82009
|
9ee603d1c520cfa4fb95a9fe5cfddcae6141c0ac
|
99e0d25857ad65764909c6ec701a04930ea5c21f
| 2022-09-02T21:03:48Z |
python
| 2023-10-24T13:20:25Z |
changelogs/fragments/pull_file_secrets.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,701 |
ansible-pull does not utilize the --become-password-file
|
### Summary
When I try to use the --become-password-file flag in the command
```
> ansible-pull -i "$(hostname)," -U {repo_link} local.yml
```
I get the following error:
```
> module_stderr: |-
sudo: a password is required
module_stdout: ''
msg: |-
MODULE FAILURE
See stdout/stderr for the exact error
rc: 1
```
when attempting to use the playbook:
```
- hosts: all
tasks:
- shell:
cmd: uptime
register: uptime_res
become: true
- debug:
var: uptime_res
```
I believe this is a bug because [pull.py](https://github.com/ansible/ansible/blob/ad79c1e0d032eb5dda216055ffc393043de4b380/lib/ansible/cli/pull.py#L278) does not seem to utilize the [ask_passwords](https://github.com/ansible/ansible/blob/650befed37eadcaea735673638d5475fa957ca7e/lib/ansible/cli/__init__.py#L328) function, nor does it build the --become-password-file flag into the [cmd](https://github.com/ansible/ansible/blob/ad79c1e0d032eb5dda216055ffc393043de4b380/lib/ansible/cli/pull.py#L278) variable.
### Issue Type
Bug Report
### Component Name
pull.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.6]
config file = /Users/{user}/ansible.cfg
configured module search path = ['/Users/{user}/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/ansible
ansible collection location = /Users/{user}/.ansible/collections:/usr/share/ansible/collections
executable location = /Library/Frameworks/Python.framework/Versions/3.8/bin/ansible
python version = 3.8.0 (v3.8.0:fa919fdf25, Oct 14 2019, 10:23:27) [Clang 6.0 (clang-600.0.57)]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
DEFAULT_FORKS(/Users/{user}/ansible.cfg) = 100
DEFAULT_LOAD_CALLBACK_PLUGINS(/Users/{user}/ansible.cfg) = True
DEFAULT_STDOUT_CALLBACK(/Users/{user}/ansible.cfg) = yaml
HOST_KEY_CHECKING(/Users/{user}/ansible.cfg) = False
BECOME:
======
CACHE:
=====
CALLBACK:
========
```
### OS / Environment
macOS Monterey 12.5.1
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- hosts: all
tasks:
- shell:
cmd: uptime
register: uptime_res
become: true
- debug:
var: uptime_res
```
run the command: ```ansible-pull -i "$(hostname)," -U {repo_link} local.yml```
### Expected Results
I expect a successful playbook with an output of the current uptime.
### Actual Results
```console
TASK [shell] *******************************************************************
fatal: [{HOSTNAME}]: FAILED! => changed=false
module_stderr: |-
sudo: a password is required
module_stdout: ''
msg: |-
MODULE FAILURE
See stdout/stderr for the exact error
rc: 1
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78701
|
https://github.com/ansible/ansible/pull/82009
|
9ee603d1c520cfa4fb95a9fe5cfddcae6141c0ac
|
99e0d25857ad65764909c6ec701a04930ea5c21f
| 2022-09-02T21:03:48Z |
python
| 2023-10-24T13:20:25Z |
lib/ansible/cli/pull.py
|
#!/usr/bin/env python
# Copyright: (c) 2012, Michael DeHaan <[email protected]>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# PYTHON_ARGCOMPLETE_OK
from __future__ import annotations
# ansible.cli needs to be imported first, to ensure the source bin/* scripts run that code first
from ansible.cli import CLI
import datetime
import os
import platform
import random
import shlex
import shutil
import socket
import sys
import time
from ansible import constants as C
from ansible import context
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleOptionsError
from ansible.module_utils.common.text.converters import to_native, to_text
from ansible.plugins.loader import module_loader
from ansible.utils.cmd_functions import run_cmd
from ansible.utils.display import Display
display = Display()
class PullCLI(CLI):
''' Used to pull a remote copy of ansible on each managed node,
each set to run via cron and update playbook source via a source repository.
This inverts the default *push* architecture of ansible into a *pull* architecture,
which has near-limitless scaling potential.
None of the CLI tools are designed to run concurrently with themselves,
you should use an external scheduler and/or locking to ensure there are no clashing operations.
The setup playbook can be tuned to change the cron frequency, logging locations, and parameters to ansible-pull.
This is useful both for extreme scale-out as well as periodic remediation.
Usage of the 'fetch' module to retrieve logs from ansible-pull runs would be an
excellent way to gather and analyze remote logs from ansible-pull.
'''
name = 'ansible-pull'
DEFAULT_REPO_TYPE = 'git'
DEFAULT_PLAYBOOK = 'local.yml'
REPO_CHOICES = ('git', 'subversion', 'hg', 'bzr')
PLAYBOOK_ERRORS = {
1: 'File does not exist',
2: 'File is not readable',
}
ARGUMENTS = {'playbook.yml': 'The name of one of the YAML format files to run as an Ansible playbook. '
'This can be a relative path within the checkout. By default, Ansible will '
"look for a playbook based on the host's fully-qualified domain name, "
'on the host hostname and finally a playbook named *local.yml*.', }
SKIP_INVENTORY_DEFAULTS = True
@staticmethod
def _get_inv_cli():
inv_opts = ''
if context.CLIARGS.get('inventory', False):
for inv in context.CLIARGS['inventory']:
if isinstance(inv, list):
inv_opts += " -i '%s' " % ','.join(inv)
elif ',' in inv or os.path.exists(inv):
inv_opts += ' -i %s ' % inv
return inv_opts
def init_parser(self):
''' create an options parser for bin/ansible '''
super(PullCLI, self).init_parser(
usage='%prog -U <repository> [options] [<playbook.yml>]',
desc="pulls playbooks from a VCS repo and executes them on target host")
# Do not add check_options as there's a conflict with --checkout/-C
opt_help.add_connect_options(self.parser)
opt_help.add_vault_options(self.parser)
opt_help.add_runtask_options(self.parser)
opt_help.add_subset_options(self.parser)
opt_help.add_inventory_options(self.parser)
opt_help.add_module_options(self.parser)
opt_help.add_runas_prompt_options(self.parser)
self.parser.add_argument('args', help='Playbook(s)', metavar='playbook.yml', nargs='*')
# options unique to pull
self.parser.add_argument('--purge', default=False, action='store_true', help='purge checkout after playbook run')
self.parser.add_argument('-o', '--only-if-changed', dest='ifchanged', default=False, action='store_true',
help='only run the playbook if the repository has been updated')
self.parser.add_argument('-s', '--sleep', dest='sleep', default=None,
help='sleep for random interval (between 0 and n number of seconds) before starting. '
'This is a useful way to disperse git requests')
self.parser.add_argument('-f', '--force', dest='force', default=False, action='store_true',
help='run the playbook even if the repository could not be updated')
self.parser.add_argument('-d', '--directory', dest='dest', default=None,
help='absolute path of repository checkout directory (relative paths are not supported)')
self.parser.add_argument('-U', '--url', dest='url', default=None, help='URL of the playbook repository')
self.parser.add_argument('--full', dest='fullclone', action='store_true', help='Do a full clone, instead of a shallow one.')
self.parser.add_argument('-C', '--checkout', dest='checkout',
help='branch/tag/commit to checkout. Defaults to behavior of repository module.')
self.parser.add_argument('--accept-host-key', default=False, dest='accept_host_key', action='store_true',
help='adds the hostkey for the repo url if not already added')
self.parser.add_argument('-m', '--module-name', dest='module_name', default=self.DEFAULT_REPO_TYPE,
help='Repository module name, which ansible will use to check out the repo. Choices are %s. Default is %s.'
% (self.REPO_CHOICES, self.DEFAULT_REPO_TYPE))
self.parser.add_argument('--verify-commit', dest='verify', default=False, action='store_true',
help='verify GPG signature of checked out commit, if it fails abort running the playbook. '
'This needs the corresponding VCS module to support such an operation')
self.parser.add_argument('--clean', dest='clean', default=False, action='store_true',
help='modified files in the working repository will be discarded')
self.parser.add_argument('--track-subs', dest='tracksubs', default=False, action='store_true',
help='submodules will track the latest changes. This is equivalent to specifying the --remote flag to git submodule update')
# add a subset of the check_opts flag group manually, as the full set's
# shortcodes conflict with above --checkout/-C
self.parser.add_argument("--check", default=False, dest='check', action='store_true',
help="don't make any changes; instead, try to predict some of the changes that may occur")
self.parser.add_argument("--diff", default=C.DIFF_ALWAYS, dest='diff', action='store_true',
help="when changing (small) files and templates, show the differences in those files; works great with --check")
def post_process_args(self, options):
options = super(PullCLI, self).post_process_args(options)
if not options.dest:
hostname = socket.getfqdn()
# use a hostname dependent directory, in case of $HOME on nfs
options.dest = os.path.join(C.ANSIBLE_HOME, 'pull', hostname)
options.dest = os.path.expandvars(os.path.expanduser(options.dest))
if os.path.exists(options.dest) and not os.path.isdir(options.dest):
raise AnsibleOptionsError("%s is not a valid or accessible directory." % options.dest)
if options.sleep:
try:
secs = random.randint(0, int(options.sleep))
options.sleep = secs
except ValueError:
raise AnsibleOptionsError("%s is not a number." % options.sleep)
if not options.url:
raise AnsibleOptionsError("URL for repository not specified, use -h for help")
if options.module_name not in self.REPO_CHOICES:
raise AnsibleOptionsError("Unsupported repo module %s, choices are %s" % (options.module_name, ','.join(self.REPO_CHOICES)))
display.verbosity = options.verbosity
self.validate_conflicts(options)
return options
def run(self):
''' use Runner lib to do SSH things '''
super(PullCLI, self).run()
# log command line
now = datetime.datetime.now()
display.display(now.strftime("Starting Ansible Pull at %F %T"))
display.display(' '.join(sys.argv))
# Build Checkout command
# Now construct the ansible command
node = platform.node()
host = socket.getfqdn()
hostnames = ','.join(set([host, node, host.split('.')[0], node.split('.')[0]]))
if hostnames:
limit_opts = 'localhost,%s,127.0.0.1' % hostnames
else:
limit_opts = 'localhost,127.0.0.1'
base_opts = '-c local '
if context.CLIARGS['verbosity'] > 0:
base_opts += ' -%s' % ''.join(["v" for x in range(0, context.CLIARGS['verbosity'])])
# Attempt to use the inventory passed in as an argument
# It might not yet have been downloaded so use localhost as default
inv_opts = self._get_inv_cli()
if not inv_opts:
inv_opts = " -i localhost, "
# avoid interpreter discovery since we already know which interpreter to use on localhost
inv_opts += '-e %s ' % shlex.quote('ansible_python_interpreter=%s' % sys.executable)
# SCM specific options
if context.CLIARGS['module_name'] == 'git':
repo_opts = "name=%s dest=%s" % (context.CLIARGS['url'], context.CLIARGS['dest'])
if context.CLIARGS['checkout']:
repo_opts += ' version=%s' % context.CLIARGS['checkout']
if context.CLIARGS['accept_host_key']:
repo_opts += ' accept_hostkey=yes'
if context.CLIARGS['private_key_file']:
repo_opts += ' key_file=%s' % context.CLIARGS['private_key_file']
if context.CLIARGS['verify']:
repo_opts += ' verify_commit=yes'
if context.CLIARGS['tracksubs']:
repo_opts += ' track_submodules=yes'
if not context.CLIARGS['fullclone']:
repo_opts += ' depth=1'
elif context.CLIARGS['module_name'] == 'subversion':
repo_opts = "repo=%s dest=%s" % (context.CLIARGS['url'], context.CLIARGS['dest'])
if context.CLIARGS['checkout']:
repo_opts += ' revision=%s' % context.CLIARGS['checkout']
if not context.CLIARGS['fullclone']:
repo_opts += ' export=yes'
elif context.CLIARGS['module_name'] == 'hg':
repo_opts = "repo=%s dest=%s" % (context.CLIARGS['url'], context.CLIARGS['dest'])
if context.CLIARGS['checkout']:
repo_opts += ' revision=%s' % context.CLIARGS['checkout']
elif context.CLIARGS['module_name'] == 'bzr':
repo_opts = "name=%s dest=%s" % (context.CLIARGS['url'], context.CLIARGS['dest'])
if context.CLIARGS['checkout']:
repo_opts += ' version=%s' % context.CLIARGS['checkout']
else:
raise AnsibleOptionsError('Unsupported (%s) SCM module for pull, choices are: %s'
% (context.CLIARGS['module_name'],
','.join(self.REPO_CHOICES)))
# options common to all supported SCMS
if context.CLIARGS['clean']:
repo_opts += ' force=yes'
path = module_loader.find_plugin(context.CLIARGS['module_name'])
if path is None:
raise AnsibleOptionsError(("module '%s' not found.\n" % context.CLIARGS['module_name']))
bin_path = os.path.dirname(os.path.abspath(sys.argv[0]))
# hardcode local and inventory/host as this is just meant to fetch the repo
cmd = '%s/ansible %s %s -m %s -a "%s" all -l "%s"' % (bin_path, inv_opts, base_opts,
context.CLIARGS['module_name'],
repo_opts, limit_opts)
for ev in context.CLIARGS['extra_vars']:
cmd += ' -e %s' % shlex.quote(ev)
# Nap?
if context.CLIARGS['sleep']:
display.display("Sleeping for %d seconds..." % context.CLIARGS['sleep'])
time.sleep(context.CLIARGS['sleep'])
# RUN the Checkout command
display.debug("running ansible with VCS module to checkout repo")
display.vvvv('EXEC: %s' % cmd)
rc, b_out, b_err = run_cmd(cmd, live=True)
if rc != 0:
if context.CLIARGS['force']:
display.warning("Unable to update repository. Continuing with (forced) run of playbook.")
else:
return rc
elif context.CLIARGS['ifchanged'] and b'"changed": true' not in b_out:
display.display("Repository has not changed, quitting.")
return 0
playbook = self.select_playbook(context.CLIARGS['dest'])
if playbook is None:
raise AnsibleOptionsError("Could not find a playbook to run.")
# Build playbook command
cmd = '%s/ansible-playbook %s %s' % (bin_path, base_opts, playbook)
if context.CLIARGS['vault_password_files']:
for vault_password_file in context.CLIARGS['vault_password_files']:
cmd += " --vault-password-file=%s" % vault_password_file
if context.CLIARGS['vault_ids']:
for vault_id in context.CLIARGS['vault_ids']:
cmd += " --vault-id=%s" % vault_id
for ev in context.CLIARGS['extra_vars']:
cmd += ' -e %s' % shlex.quote(ev)
if context.CLIARGS['become_ask_pass']:
cmd += ' --ask-become-pass'
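# NOTE (sketch, not part of this pre-fix revision): issue #78701 is that the
# become password file is never forwarded here. A fix could mirror the
# --vault-password-file handling above, roughly as follows (the parsed option
# name 'become_password_file' is an assumption):
# if context.CLIARGS.get('become_password_file'):
#     cmd += ' --become-password-file=%s' % shlex.quote(to_native(context.CLIARGS['become_password_file']))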
if context.CLIARGS['skip_tags']:
cmd += ' --skip-tags "%s"' % to_native(u','.join(context.CLIARGS['skip_tags']))
if context.CLIARGS['tags']:
cmd += ' -t "%s"' % to_native(u','.join(context.CLIARGS['tags']))
if context.CLIARGS['subset']:
cmd += ' -l "%s"' % context.CLIARGS['subset']
else:
cmd += ' -l "%s"' % limit_opts
if context.CLIARGS['check']:
cmd += ' -C'
if context.CLIARGS['diff']:
cmd += ' -D'
os.chdir(context.CLIARGS['dest'])
# redo inventory options as new files might exist now
inv_opts = self._get_inv_cli()
if inv_opts:
cmd += inv_opts
# RUN THE PLAYBOOK COMMAND
display.debug("running ansible-playbook to do actual work")
display.debug('EXEC: %s' % cmd)
rc, b_out, b_err = run_cmd(cmd, live=True)
if context.CLIARGS['purge']:
os.chdir('/')
try:
shutil.rmtree(context.CLIARGS['dest'])
except Exception as e:
display.error(u"Failed to remove %s: %s" % (context.CLIARGS['dest'], to_text(e)))
return rc
@staticmethod
def try_playbook(path):
if not os.path.exists(path):
return 1
if not os.access(path, os.R_OK):
return 2
return 0
@staticmethod
def select_playbook(path):
playbook = None
errors = []
if context.CLIARGS['args'] and context.CLIARGS['args'][0] is not None:
playbooks = []
for book in context.CLIARGS['args']:
book_path = os.path.join(path, book)
rc = PullCLI.try_playbook(book_path)
if rc != 0:
errors.append("%s: %s" % (book_path, PullCLI.PLAYBOOK_ERRORS[rc]))
continue
playbooks.append(book_path)
if 0 < len(errors):
display.warning("\n".join(errors))
elif len(playbooks) == len(context.CLIARGS['args']):
playbook = " ".join(playbooks)
return playbook
else:
fqdn = socket.getfqdn()
hostpb = os.path.join(path, fqdn + '.yml')
shorthostpb = os.path.join(path, fqdn.split('.')[0] + '.yml')
localpb = os.path.join(path, PullCLI.DEFAULT_PLAYBOOK)
for pb in [hostpb, shorthostpb, localpb]:
rc = PullCLI.try_playbook(pb)
if rc == 0:
playbook = pb
break
else:
errors.append("%s: %s" % (pb, PullCLI.PLAYBOOK_ERRORS[rc]))
if playbook is None:
display.warning("\n".join(errors))
return playbook
def main(args=None):
PullCLI.cli_executor(args)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,701 |
ansible-pull does not utilize the --become-password-file
|
### Summary
When I try to use the --become-password-file flag in the command
```
> ansible-pull -i "$(hostname)," -U {repo_link} local.yml
```
I get the following error:
```
> module_stderr: |-
sudo: a password is required
module_stdout: ''
msg: |-
MODULE FAILURE
See stdout/stderr for the exact error
rc: 1
```
when attempting to use the playbook:
```
- hosts: all
tasks:
- shell:
cmd: uptime
register: uptime_res
become: true
- debug:
var: uptime_res
```
I believe this is a bug because the [pull.py](https://github.com/ansible/ansible/blob/ad79c1e0d032eb5dda216055ffc393043de4b380/lib/ansible/cli/pull.py#L278) does not seem to utilize the [ask_passwords](https://github.com/ansible/ansible/blob/650befed37eadcaea735673638d5475fa957ca7e/lib/ansible/cli/__init__.py#L328) function, nor build the --become-password-file into the [cmd](https://github.com/ansible/ansible/blob/ad79c1e0d032eb5dda216055ffc393043de4b380/lib/ansible/cli/pull.py#L278) variable.
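To make the gap concrete, here is a minimal sketch of the kind of forwarding that appears to be missing, mirroring the existing `vault_password_files` loop in `PullCLI.run()`. The helper name and the `become_password_file` CLIARGS key are assumptions for illustration, not necessarily the actual fix:
```python
import shlex

def append_become_password_file(cmd, cliargs):
    """Append --become-password-file to the generated ansible-playbook
    command, mirroring how vault password files are forwarded.
    Illustration only: the 'become_password_file' key is an assumption
    based on the flag name."""
    become_password_file = cliargs.get('become_password_file')
    if become_password_file:
        cmd += " --become-password-file=%s" % shlex.quote(become_password_file)
    return cmd

# hypothetical usage inside PullCLI.run():
#   cmd = append_become_password_file(cmd, context.CLIARGS)
```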
### Issue Type
Bug Report
### Component Name
pull.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.6]
config file = /Users/{user}/ansible.cfg
configured module search path = ['/Users/{user}/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/ansible
ansible collection location = /Users/{user}/.ansible/collections:/usr/share/ansible/collections
executable location = /Library/Frameworks/Python.framework/Versions/3.8/bin/ansible
python version = 3.8.0 (v3.8.0:fa919fdf25, Oct 14 2019, 10:23:27) [Clang 6.0 (clang-600.0.57)]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
DEFAULT_FORKS(/Users/{user}/ansible.cfg) = 100
DEFAULT_LOAD_CALLBACK_PLUGINS(/Users/{user}/ansible.cfg) = True
DEFAULT_STDOUT_CALLBACK(/Users/{user}/ansible.cfg) = yaml
HOST_KEY_CHECKING(/Users/{user}/ansible.cfg) = False
BECOME:
======
CACHE:
=====
CALLBACK:
========
```
### OS / Environment
macOS Monterey 12.5.1
### Steps to Reproduce
```yaml
- hosts: all
tasks:
- shell:
cmd: uptime
register: uptime_res
become: true
- debug:
var: uptime_res
```
run the command: ```ansible-pull -i "$(hostname)," -U {repo_link} --become-password-file {become_password_file} local.yml```
### Expected Results
I expect a successful playbook run with the current uptime in the output.
### Actual Results
```console
TASK [shell] *******************************************************************
fatal: [{HOSTNAME}]: FAILED! => changed=false
module_stderr: |-
sudo: a password is required
module_stdout: ''
msg: |-
MODULE FAILURE
See stdout/stderr for the exact error
rc: 1
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78701
|
https://github.com/ansible/ansible/pull/82009
|
9ee603d1c520cfa4fb95a9fe5cfddcae6141c0ac
|
99e0d25857ad65764909c6ec701a04930ea5c21f
| 2022-09-02T21:03:48Z |
python
| 2023-10-24T13:20:25Z |
test/integration/targets/ansible-pull/pull-integration-test/conn_secret.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,701 |
ansible-pull does not utilize the --become-password-file
|
### Summary
When I try to use the --become-password-file flag in the command
```
> ansible-pull -i "$(hostname)," -U {repo_link} --become-password-file {become_password_file} local.yml
```
I get the following error:
```
> module_stderr: |-
sudo: a password is required
module_stdout: ''
msg: |-
MODULE FAILURE
See stdout/stderr for the exact error
rc: 1
```
when attempting to use the playbook:
```
- hosts: all
tasks:
- shell:
cmd: uptime
register: uptime_res
become: true
- debug:
var: uptime_res
```
I believe this is a bug because the [pull.py](https://github.com/ansible/ansible/blob/ad79c1e0d032eb5dda216055ffc393043de4b380/lib/ansible/cli/pull.py#L278) does not seem to utilize the [ask_passwords](https://github.com/ansible/ansible/blob/650befed37eadcaea735673638d5475fa957ca7e/lib/ansible/cli/__init__.py#L328) function, nor build the --become-password-file into the [cmd](https://github.com/ansible/ansible/blob/ad79c1e0d032eb5dda216055ffc393043de4b380/lib/ansible/cli/pull.py#L278) variable.
### Issue Type
Bug Report
### Component Name
pull.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.6]
config file = /Users/{user}/ansible.cfg
configured module search path = ['/Users/{user}/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/ansible
ansible collection location = /Users/{user}/.ansible/collections:/usr/share/ansible/collections
executable location = /Library/Frameworks/Python.framework/Versions/3.8/bin/ansible
python version = 3.8.0 (v3.8.0:fa919fdf25, Oct 14 2019, 10:23:27) [Clang 6.0 (clang-600.0.57)]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
DEFAULT_FORKS(/Users/{user}/ansible.cfg) = 100
DEFAULT_LOAD_CALLBACK_PLUGINS(/Users/{user}/ansible.cfg) = True
DEFAULT_STDOUT_CALLBACK(/Users/{user}/ansible.cfg) = yaml
HOST_KEY_CHECKING(/Users/{user}/ansible.cfg) = False
BECOME:
======
CACHE:
=====
CALLBACK:
========
```
### OS / Environment
macOS Monterey 12.5.1
### Steps to Reproduce
```yaml
- hosts: all
tasks:
- shell:
cmd: uptime
register: uptime_res
become: true
- debug:
var: uptime_res
```
run the command: ```ansible-pull -i "$(hostname)," -U {repo_link} --become-password-file {become_password_file} local.yml```
### Expected Results
I expect a successful playbook run with the current uptime in the output.
### Actual Results
```console
TASK [shell] *******************************************************************
fatal: [{HOSTNAME}]: FAILED! => changed=false
module_stderr: |-
sudo: a password is required
module_stdout: ''
msg: |-
MODULE FAILURE
See stdout/stderr for the exact error
rc: 1
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78701
|
https://github.com/ansible/ansible/pull/82009
|
9ee603d1c520cfa4fb95a9fe5cfddcae6141c0ac
|
99e0d25857ad65764909c6ec701a04930ea5c21f
| 2022-09-02T21:03:48Z |
python
| 2023-10-24T13:20:25Z |
test/integration/targets/ansible-pull/pull-integration-test/secret_connection_password
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,701 |
ansible-pull does not utilize the --become-password-file
|
### Summary
When I try to use the --become-password-file flag in the command
```
> ansible-pull -i "$(hostname)," -U {repo_link} --become-password-file {become_password_file} local.yml
```
I get the following error:
```
> module_stderr: |-
sudo: a password is required
module_stdout: ''
msg: |-
MODULE FAILURE
See stdout/stderr for the exact error
rc: 1
```
when attempting to use the playbook:
```
- hosts: all
tasks:
- shell:
cmd: uptime
register: uptime_res
become: true
- debug:
var: uptime_res
```
I believe this is a bug because the [pull.py](https://github.com/ansible/ansible/blob/ad79c1e0d032eb5dda216055ffc393043de4b380/lib/ansible/cli/pull.py#L278) does not seem to utilize the [ask_passwords](https://github.com/ansible/ansible/blob/650befed37eadcaea735673638d5475fa957ca7e/lib/ansible/cli/__init__.py#L328) function, nor build the --become-password-file into the [cmd](https://github.com/ansible/ansible/blob/ad79c1e0d032eb5dda216055ffc393043de4b380/lib/ansible/cli/pull.py#L278) variable.
### Issue Type
Bug Report
### Component Name
pull.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.6]
config file = /Users/{user}/ansible.cfg
configured module search path = ['/Users/{user}/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/ansible
ansible collection location = /Users/{user}/.ansible/collections:/usr/share/ansible/collections
executable location = /Library/Frameworks/Python.framework/Versions/3.8/bin/ansible
python version = 3.8.0 (v3.8.0:fa919fdf25, Oct 14 2019, 10:23:27) [Clang 6.0 (clang-600.0.57)]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
DEFAULT_FORKS(/Users/{user}/ansible.cfg) = 100
DEFAULT_LOAD_CALLBACK_PLUGINS(/Users/{user}/ansible.cfg) = True
DEFAULT_STDOUT_CALLBACK(/Users/{user}/ansible.cfg) = yaml
HOST_KEY_CHECKING(/Users/{user}/ansible.cfg) = False
BECOME:
======
CACHE:
=====
CALLBACK:
========
```
### OS / Environment
macOS Monterey 12.5.1
### Steps to Reproduce
```yaml
- hosts: all
tasks:
- shell:
cmd: uptime
register: uptime_res
become: true
- debug:
var: uptime_res
```
run the command: ```ansible-pull -i "$(hostname)," -U {repo_link} --become-password-file {become_password_file} local.yml```
### Expected Results
I expect a successful playbook run with the current uptime in the output.
### Actual Results
```console
TASK [shell] *******************************************************************
fatal: [{HOSTNAME}]: FAILED! => changed=false
module_stderr: |-
sudo: a password is required
module_stdout: ''
msg: |-
MODULE FAILURE
See stdout/stderr for the exact error
rc: 1
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78701
|
https://github.com/ansible/ansible/pull/82009
|
9ee603d1c520cfa4fb95a9fe5cfddcae6141c0ac
|
99e0d25857ad65764909c6ec701a04930ea5c21f
| 2022-09-02T21:03:48Z |
python
| 2023-10-24T13:20:25Z |
test/integration/targets/ansible-pull/runme.sh
|
#!/usr/bin/env bash
set -eux
set -o pipefail
# http://unix.stackexchange.com/questions/30091/fix-or-alternative-for-mktemp-in-os-x
temp_dir=$(mktemp -d 2>/dev/null || mktemp -d -t 'ansible-testing-XXXXXXXXXX')
trap 'rm -rf "${temp_dir}"' EXIT
repo_dir="${temp_dir}/repo"
pull_dir="${temp_dir}/pull"
temp_log="${temp_dir}/pull.log"
ansible-playbook setup.yml -i ../../inventory
cleanup="$(pwd)/cleanup.yml"
trap 'ansible-playbook "${cleanup}" -i ../../inventory' EXIT
cp -av "pull-integration-test" "${repo_dir}"
cd "${repo_dir}"
(
git init
git config user.email "[email protected]"
git config user.name "Ansible Test Runner"
git add .
git commit -m "Initial commit."
)
function pass_tests {
# test for https://github.com/ansible/ansible/issues/13688
if ! grep MAGICKEYWORD "${temp_log}"; then
cat "${temp_log}"
echo "Missing MAGICKEYWORD in output."
exit 1
fi
# test for https://github.com/ansible/ansible/issues/13681
if grep -E '127\.0\.0\.1.*ok' "${temp_log}"; then
cat "${temp_log}"
echo "Found host 127.0.0.1 in output. Only localhost should be present."
exit 1
fi
# make sure one host was run
if ! grep -E 'localhost.*ok' "${temp_log}"; then
cat "${temp_log}"
echo "Did not find host localhost in output."
exit 1
fi
}
function pass_tests_multi {
# test for https://github.com/ansible/ansible/issues/72708
if ! grep 'test multi_play_1' "${temp_log}"; then
cat "${temp_log}"
echo "Did not run multiple playbooks"
exit 1
fi
if ! grep 'test multi_play_2' "${temp_log}"; then
cat "${temp_log}"
echo "Did not run multiple playbooks"
exit 1
fi
}
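# make sure the ansible-pull runs below start with these variables unset in the environment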
export ANSIBLE_INVENTORY
export ANSIBLE_HOST_PATTERN_MISMATCH
unset ANSIBLE_INVENTORY
unset ANSIBLE_HOST_PATTERN_MISMATCH
ANSIBLE_CONFIG='' ansible-pull -d "${pull_dir}" -U "${repo_dir}" "$@" | tee "${temp_log}"
pass_tests
# ensure complex extra vars work
PASSWORD='test'
USER=${USER:-'broken_docker'}
JSON_EXTRA_ARGS='{"docker_registries_login": [{ "docker_password": "'"${PASSWORD}"'", "docker_username": "'"${USER}"'", "docker_registry_url":"repository-manager.company.com:5001"}], "docker_registries_logout": [{ "docker_password": "'"${PASSWORD}"'", "docker_username": "'"${USER}"'", "docker_registry_url":"repository-manager.company.com:5001"}] }'
ANSIBLE_CONFIG='' ansible-pull -d "${pull_dir}" -U "${repo_dir}" -e "${JSON_EXTRA_ARGS}" "$@" --tags untagged,test_ev | tee "${temp_log}"
pass_tests
ANSIBLE_CONFIG='' ansible-pull -d "${pull_dir}" -U "${repo_dir}" "$@" multi_play_1.yml multi_play_2.yml | tee "${temp_log}"
pass_tests_multi
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,533 |
Block with run_once leads to no more hosts error on earlier failure of the first host
|
### Summary
If `run_once` is used on the block level and the first host in the play has failed (before the block is reached), then only the first task of the block gets executed.
After the first task of the block is done, the play ends with the error: NO MORE HOSTS LEFT
Observations:
* run_once on single task functions as expected
* if a host other than the first host fails, the block with run_once functions as expected
* lowering the forks does not work around the issue
* the problem still happens if the block gets included (using include_tasks or include_role) after the first host has failed
This was first noticed in Ansible 2.15.0.
In previous Ansible versions 2.12.5 and 2.9.10 this functions as expected.
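To make the expected semantics concrete, here is a small standalone Python model of run_once host selection (an illustration only, not Ansible's actual implementation): a run_once task should fall back to the first host in play order that has not failed.
```python
def run_once_host(play_hosts, failed_hosts):
    """Return the host a run_once task is expected to execute on:
    the first host in play order that has not failed."""
    for host in play_hosts:
        if host not in failed_hosts:
            return host
    return None  # no hosts left

# With host1 failed earlier in the play, every task of the run_once
# block should still execute on host2:
assert run_once_host(["host1", "host2"], {"host1"}) == "host2"
```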
### Issue Type
Bug Report
### Component Name
blocks
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.3]
config file = /home/user/git/ansible-galaxy/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/user/venv_3.9/lib/python3.9/site-packages/ansible
ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections
executable location = /home/user/venv_3.9/bin/ansible
python version = 3.9.5 (default, Nov 23 2021, 15:27:38) [GCC 9.3.0] (/home/user/venv_3.9/bin/python3.9)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_NOCOWS(/home/user/git/ansible-galaxy/ansible.cfg) = True
CACHE_PLUGIN(/home/user/git/ansible-galaxy/ansible.cfg) = memory
COLOR_CHANGED(/home/user/git/ansible-galaxy/ansible.cfg) = yellow
COLOR_DEBUG(/home/user/git/ansible-galaxy/ansible.cfg) = dark gray
COLOR_DEPRECATE(/home/user/git/ansible-galaxy/ansible.cfg) = purple
COLOR_DIFF_ADD(/home/user/git/ansible-galaxy/ansible.cfg) = green
COLOR_DIFF_LINES(/home/user/git/ansible-galaxy/ansible.cfg) = cyan
COLOR_DIFF_REMOVE(/home/user/git/ansible-galaxy/ansible.cfg) = red
COLOR_ERROR(/home/user/git/ansible-galaxy/ansible.cfg) = red
COLOR_HIGHLIGHT(/home/user/git/ansible-galaxy/ansible.cfg) = white
COLOR_OK(/home/user/git/ansible-galaxy/ansible.cfg) = green
COLOR_SKIP(/home/user/git/ansible-galaxy/ansible.cfg) = cyan
COLOR_UNREACHABLE(/home/user/git/ansible-galaxy/ansible.cfg) = red
COLOR_VERBOSE(/home/user/git/ansible-galaxy/ansible.cfg) = blue
COLOR_WARN(/home/user/git/ansible-galaxy/ansible.cfg) = bright purple
CONFIG_FILE() = /home/user/git/ansible-galaxy/ansible.cfg
DEFAULT_ASK_PASS(/home/user/git/ansible-galaxy/ansible.cfg) = False
DEFAULT_BECOME(/home/user/git/ansible-galaxy/ansible.cfg) = True
DEFAULT_BECOME_ASK_PASS(/home/user/git/ansible-galaxy/ansible.cfg) = False
DEFAULT_BECOME_METHOD(/home/user/git/ansible-galaxy/ansible.cfg) = 'sudo'
DEFAULT_BECOME_USER(/home/user/git/ansible-galaxy/ansible.cfg) = 'root'
DEFAULT_FORCE_HANDLERS(/home/user/git/ansible-galaxy/ansible.cfg) = True
DEFAULT_FORKS(/home/user/git/ansible-galaxy/ansible.cfg) = 40
DEFAULT_GATHERING(/home/user/git/ansible-galaxy/ansible.cfg) = implicit
DEFAULT_LOAD_CALLBACK_PLUGINS(/home/user/git/ansible-galaxy/ansible.cfg) = False
DEFAULT_MANAGED_STR(/home/user/git/ansible-galaxy/ansible.cfg) = %Y-%m-%d %H:%M
DEFAULT_MODULE_COMPRESSION(/home/user/git/ansible-galaxy/ansible.cfg) = 'ZIP_DEFLATED'
DEFAULT_MODULE_NAME(/home/user/git/ansible-galaxy/ansible.cfg) = command
DEFAULT_POLL_INTERVAL(/home/user/git/ansible-galaxy/ansible.cfg) = 15
DEFAULT_REMOTE_PORT(/home/user/git/ansible-galaxy/ansible.cfg) = 22
DEFAULT_REMOTE_USER(/home/user/git/ansible-galaxy/ansible.cfg) = user
DEFAULT_ROLES_PATH(/home/user/git/ansible-galaxy/ansible.cfg) = ['/home/user/git/ansible-galaxy/roles', '/home/user/git/ansible-galaxy/galaxy']
DEFAULT_TIMEOUT(/home/user/git/ansible-galaxy/ansible.cfg) = 20
DEFAULT_TRANSPORT(/home/user/git/ansible-galaxy/ansible.cfg) = smart
DEPRECATION_WARNINGS(/home/user/git/ansible-galaxy/ansible.cfg) = True
EDITOR(env: EDITOR) = vim
HOST_KEY_CHECKING(/home/user/git/ansible-galaxy/ansible.cfg) = False
MAX_FILE_SIZE_FOR_DIFF(/home/user/git/ansible-galaxy/ansible.cfg) = 1048576
RETRY_FILES_ENABLED(/home/user/git/ansible-galaxy/ansible.cfg) = False
SHOW_CUSTOM_STATS(/home/user/git/ansible-galaxy/ansible.cfg) = True
SYSTEM_WARNINGS(/home/user/git/ansible-galaxy/ansible.cfg) = True
BECOME:
======
runas:
_____
become_user(/home/user/git/ansible-galaxy/ansible.cfg) = 'root'
su:
__
become_user(/home/user/git/ansible-galaxy/ansible.cfg) = 'root'
sudo:
____
become_user(/home/user/git/ansible-galaxy/ansible.cfg) = 'root'
CALLBACK:
========
default:
_______
show_custom_stats(/home/user/git/ansible-galaxy/ansible.cfg) = True
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/home/user/git/ansible-galaxy/ansible.cfg) = False
port(/home/user/git/ansible-galaxy/ansible.cfg) = 22
pty(/home/user/git/ansible-galaxy/ansible.cfg) = False
remote_user(/home/user/git/ansible-galaxy/ansible.cfg) = user
ssh_args(/home/user/git/ansible-galaxy/ansible.cfg) = -o PasswordAuthentication=no -o ControlMaster=auto -o ControlPersist=60s
timeout(/home/user/git/ansible-galaxy/ansible.cfg) = 20
ssh:
___
control_path(/home/user/git/ansible-galaxy/ansible.cfg) = %(directory)s/ansi-%%h-%%p-%%r
host_key_checking(/home/user/git/ansible-galaxy/ansible.cfg) = False
pipelining(/home/user/git/ansible-galaxy/ansible.cfg) = True
port(/home/user/git/ansible-galaxy/ansible.cfg) = 22
remote_user(/home/user/git/ansible-galaxy/ansible.cfg) = user
scp_if_ssh(/home/user/git/ansible-galaxy/ansible.cfg) = False
sftp_batch_mode(/home/user/git/ansible-galaxy/ansible.cfg) = False
ssh_args(/home/user/git/ansible-galaxy/ansible.cfg) = -o PasswordAuthentication=no -o ControlMaster=auto -o ControlPersist=60s
timeout(/home/user/git/ansible-galaxy/ansible.cfg) = 20
SHELL:
=====
sh:
__
remote_tmp(/home/user/git/ansible-galaxy/ansible.cfg) = $HOME/.ansible/tmp
world_readable_temp(/home/user/git/ansible-galaxy/ansible.cfg) = False
```
### OS / Environment
RHEL7/8/9
### Steps to Reproduce
```yaml
- name: Demo run_once block no more hosts error
become: false
gather_facts: false
hosts: all
tasks:
- name: Task pre
debug:
msg: "debug all nodes"
- name: Fail first host
fail:
when: inventory_hostname == 'host1'
- name: Block run_once
run_once: true
block:
- name: Block debug 1
debug:
msg: "debug run_once task 1"
- name: Block debug 2
debug:
msg: "debug run_once task 2"
- name: Task post
debug:
msg: "debug remaining hosts"
```
### Expected Results
All the tasks of the block should run using the `run_once` functionality on the first available host that has not failed (host2 in this case).
After the block is done without failures, the play should continue.
```console
# Ansible 2.12.5
ansible-playbook test.yml -i inventory.yml -l host1,host2 -D
PLAY [Demo run_once block no more hosts error] ********************************************************
TASK [Task pre] ***************************************************************************************
ok: [host1] => {
"msg": "debug all nodes"
}
ok: [host2] => {
"msg": "debug all nodes"
}
TASK [Fail first host] ********************************************************************************
fatal: [host1]: FAILED! => {"changed": false, "msg": "Failed as requested from task"}
skipping: [host2]
TASK [Block debug 1] **********************************************************************************
ok: [host2] => {
"msg": "debug run_once task 1"
}
TASK [Block debug 2] **********************************************************************************
ok: [host2] => {
"msg": "debug run_once task 2"
}
TASK [Task post] **************************************************************************************
ok: [host2] => {
"msg": "debug remaining hosts"
}
PLAY RECAP ********************************************************************************************
host1 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
host2 : ok=4 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
```
### Actual Results
```console
# Ansible 2.15.3
$ ansible-playbook test.yml -i inventory.yml -l host1,host2 -D
PLAY [Demo run_once block no more hosts error] ********************************************************
TASK [Task pre] ***************************************************************************************
ok: [host1] => {
"msg": "debug all nodes"
}
ok: [host2] => {
"msg": "debug all nodes"
}
TASK [Fail first host] ********************************************************************************
fatal: [host1]: FAILED! => {"changed": false, "msg": "Failed as requested from task"}
skipping: [host2]
TASK [Block debug 1] **********************************************************************************
ok: [host2] => {
"msg": "debug run_once task 1"
}
NO MORE HOSTS LEFT ************************************************************************************
PLAY RECAP ********************************************************************************************
host1 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
host2 : ok=2 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81533
|
https://github.com/ansible/ansible/pull/78680
|
c827dc0dabff8850a73de9ca65148a74899767f2
|
fe94a99aa291d129aa6432e5d50e7117d9c6aae3
| 2023-08-17T15:13:47Z |
python
| 2023-10-25T07:42:13Z |
changelogs/fragments/any_errors_fatal-fixes.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,533 |
Block with run_once leads to no more hosts error on earlier failure of the first host
|
### Summary
If `run_once` is used on the block level and the first host in the play has failed (before the block is reached), then only the first task of the block gets executed.
After the first task of the block is done, the play ends with the error: NO MORE HOSTS LEFT
Observations:
* run_once on single task functions as expected
* if a host other than the first host fails, the block with run_once functions as expected
* lowering the forks does not work around the issue
* the problem still happens if the block gets included (using include_tasks or include_role) after the first host has failed
This was first noticed in Ansible 2.15.0.
In previous Ansible versions 2.12.5 and 2.9.10 this functions as expected.
### Issue Type
Bug Report
### Component Name
blocks
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.3]
config file = /home/user/git/ansible-galaxy/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/user/venv_3.9/lib/python3.9/site-packages/ansible
ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections
executable location = /home/user/venv_3.9/bin/ansible
python version = 3.9.5 (default, Nov 23 2021, 15:27:38) [GCC 9.3.0] (/home/user/venv_3.9/bin/python3.9)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_NOCOWS(/home/user/git/ansible-galaxy/ansible.cfg) = True
CACHE_PLUGIN(/home/user/git/ansible-galaxy/ansible.cfg) = memory
COLOR_CHANGED(/home/user/git/ansible-galaxy/ansible.cfg) = yellow
COLOR_DEBUG(/home/user/git/ansible-galaxy/ansible.cfg) = dark gray
COLOR_DEPRECATE(/home/user/git/ansible-galaxy/ansible.cfg) = purple
COLOR_DIFF_ADD(/home/user/git/ansible-galaxy/ansible.cfg) = green
COLOR_DIFF_LINES(/home/user/git/ansible-galaxy/ansible.cfg) = cyan
COLOR_DIFF_REMOVE(/home/user/git/ansible-galaxy/ansible.cfg) = red
COLOR_ERROR(/home/user/git/ansible-galaxy/ansible.cfg) = red
COLOR_HIGHLIGHT(/home/user/git/ansible-galaxy/ansible.cfg) = white
COLOR_OK(/home/user/git/ansible-galaxy/ansible.cfg) = green
COLOR_SKIP(/home/user/git/ansible-galaxy/ansible.cfg) = cyan
COLOR_UNREACHABLE(/home/user/git/ansible-galaxy/ansible.cfg) = red
COLOR_VERBOSE(/home/user/git/ansible-galaxy/ansible.cfg) = blue
COLOR_WARN(/home/user/git/ansible-galaxy/ansible.cfg) = bright purple
CONFIG_FILE() = /home/user/git/ansible-galaxy/ansible.cfg
DEFAULT_ASK_PASS(/home/user/git/ansible-galaxy/ansible.cfg) = False
DEFAULT_BECOME(/home/user/git/ansible-galaxy/ansible.cfg) = True
DEFAULT_BECOME_ASK_PASS(/home/user/git/ansible-galaxy/ansible.cfg) = False
DEFAULT_BECOME_METHOD(/home/user/git/ansible-galaxy/ansible.cfg) = 'sudo'
DEFAULT_BECOME_USER(/home/user/git/ansible-galaxy/ansible.cfg) = 'root'
DEFAULT_FORCE_HANDLERS(/home/user/git/ansible-galaxy/ansible.cfg) = True
DEFAULT_FORKS(/home/user/git/ansible-galaxy/ansible.cfg) = 40
DEFAULT_GATHERING(/home/user/git/ansible-galaxy/ansible.cfg) = implicit
DEFAULT_LOAD_CALLBACK_PLUGINS(/home/user/git/ansible-galaxy/ansible.cfg) = False
DEFAULT_MANAGED_STR(/home/user/git/ansible-galaxy/ansible.cfg) = %Y-%m-%d %H:%M
DEFAULT_MODULE_COMPRESSION(/home/user/git/ansible-galaxy/ansible.cfg) = 'ZIP_DEFLATED'
DEFAULT_MODULE_NAME(/home/user/git/ansible-galaxy/ansible.cfg) = command
DEFAULT_POLL_INTERVAL(/home/user/git/ansible-galaxy/ansible.cfg) = 15
DEFAULT_REMOTE_PORT(/home/user/git/ansible-galaxy/ansible.cfg) = 22
DEFAULT_REMOTE_USER(/home/user/git/ansible-galaxy/ansible.cfg) = user
DEFAULT_ROLES_PATH(/home/user/git/ansible-galaxy/ansible.cfg) = ['/home/user/git/ansible-galaxy/roles', '/home/user/git/ansible-galaxy/galaxy']
DEFAULT_TIMEOUT(/home/user/git/ansible-galaxy/ansible.cfg) = 20
DEFAULT_TRANSPORT(/home/user/git/ansible-galaxy/ansible.cfg) = smart
DEPRECATION_WARNINGS(/home/user/git/ansible-galaxy/ansible.cfg) = True
EDITOR(env: EDITOR) = vim
HOST_KEY_CHECKING(/home/user/git/ansible-galaxy/ansible.cfg) = False
MAX_FILE_SIZE_FOR_DIFF(/home/user/git/ansible-galaxy/ansible.cfg) = 1048576
RETRY_FILES_ENABLED(/home/user/git/ansible-galaxy/ansible.cfg) = False
SHOW_CUSTOM_STATS(/home/user/git/ansible-galaxy/ansible.cfg) = True
SYSTEM_WARNINGS(/home/user/git/ansible-galaxy/ansible.cfg) = True
BECOME:
======
runas:
_____
become_user(/home/user/git/ansible-galaxy/ansible.cfg) = 'root'
su:
__
become_user(/home/user/git/ansible-galaxy/ansible.cfg) = 'root'
sudo:
____
become_user(/home/user/git/ansible-galaxy/ansible.cfg) = 'root'
CALLBACK:
========
default:
_______
show_custom_stats(/home/user/git/ansible-galaxy/ansible.cfg) = True
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/home/user/git/ansible-galaxy/ansible.cfg) = False
port(/home/user/git/ansible-galaxy/ansible.cfg) = 22
pty(/home/user/git/ansible-galaxy/ansible.cfg) = False
remote_user(/home/user/git/ansible-galaxy/ansible.cfg) = user
ssh_args(/home/user/git/ansible-galaxy/ansible.cfg) = -o PasswordAuthentication=no -o ControlMaster=auto -o ControlPersist=60s
timeout(/home/user/git/ansible-galaxy/ansible.cfg) = 20
ssh:
___
control_path(/home/user/git/ansible-galaxy/ansible.cfg) = %(directory)s/ansi-%%h-%%p-%%r
host_key_checking(/home/user/git/ansible-galaxy/ansible.cfg) = False
pipelining(/home/user/git/ansible-galaxy/ansible.cfg) = True
port(/home/user/git/ansible-galaxy/ansible.cfg) = 22
remote_user(/home/user/git/ansible-galaxy/ansible.cfg) = user
scp_if_ssh(/home/user/git/ansible-galaxy/ansible.cfg) = False
sftp_batch_mode(/home/user/git/ansible-galaxy/ansible.cfg) = False
ssh_args(/home/user/git/ansible-galaxy/ansible.cfg) = -o PasswordAuthentication=no -o ControlMaster=auto -o ControlPersist=60s
timeout(/home/user/git/ansible-galaxy/ansible.cfg) = 20
SHELL:
=====
sh:
__
remote_tmp(/home/user/git/ansible-galaxy/ansible.cfg) = $HOME/.ansible/tmp
world_readable_temp(/home/user/git/ansible-galaxy/ansible.cfg) = False
```
### OS / Environment
RHEL7/8/9
### Steps to Reproduce
```yaml
- name: Demo run_once block no more hosts error
become: false
gather_facts: false
hosts: all
tasks:
- name: Task pre
debug:
msg: "debug all nodes"
- name: Fail first host
fail:
when: inventory_hostname == 'host1'
- name: Block run_once
run_once: true
block:
- name: Block debug 1
debug:
msg: "debug run_once task 1"
- name: Block debug 2
debug:
msg: "debug run_once task 2"
- name: Task post
debug:
msg: "debug remaining hosts"
```
### Expected Results
All the tasks of the block should run using the `run_once` functionality on the first available host that has not failed (host2 in this case).
After the block is done without failures, the play should continue.
```console
# Ansible 2.12.5
ansible-playbook test.yml -i inventory.yml -l host1,host2 -D
PLAY [Demo run_once block no more hosts error] ********************************************************
TASK [Task pre] ***************************************************************************************
ok: [host1] => {
"msg": "debug all nodes"
}
ok: [host2] => {
"msg": "debug all nodes"
}
TASK [Fail first host] ********************************************************************************
fatal: [host1]: FAILED! => {"changed": false, "msg": "Failed as requested from task"}
skipping: [host2]
TASK [Block debug 1] **********************************************************************************
ok: [host2] => {
"msg": "debug run_once task 1"
}
TASK [Block debug 2] **********************************************************************************
ok: [host2] => {
"msg": "debug run_once task 2"
}
TASK [Task post] **************************************************************************************
ok: [host2] => {
"msg": "debug remaining hosts"
}
PLAY RECAP ********************************************************************************************
host1 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
host2 : ok=4 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
```
### Actual Results
```console
# Ansible 2.15.3
$ ansible-playbook test.yml -i inventory.yml -l host1,host2 -D
PLAY [Demo run_once block no more hosts error] ********************************************************
TASK [Task pre] ***************************************************************************************
ok: [host1] => {
"msg": "debug all nodes"
}
ok: [host2] => {
"msg": "debug all nodes"
}
TASK [Fail first host] ********************************************************************************
fatal: [host1]: FAILED! => {"changed": false, "msg": "Failed as requested from task"}
skipping: [host2]
TASK [Block debug 1] **********************************************************************************
ok: [host2] => {
"msg": "debug run_once task 1"
}
NO MORE HOSTS LEFT ************************************************************************************
PLAY RECAP ********************************************************************************************
host1 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
host2 : ok=2 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81533
|
https://github.com/ansible/ansible/pull/78680
|
c827dc0dabff8850a73de9ca65148a74899767f2
|
fe94a99aa291d129aa6432e5d50e7117d9c6aae3
| 2023-08-17T15:13:47Z |
python
| 2023-10-25T07:42:13Z |
lib/ansible/plugins/strategy/linear.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import annotations
DOCUMENTATION = '''
name: linear
short_description: Executes tasks in a linear fashion
description:
- Task execution is in lockstep per host batch as defined by C(serial) (default all).
Up to the fork limit of hosts will execute each task at the same time and then
the next series of hosts until the batch is done, before going on to the next task.
version_added: "2.0"
notes:
- This was the default Ansible behaviour before 'strategy plugins' were introduced in 2.0.
author: Ansible Core Team
'''
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleAssertionError, AnsibleParserError
from ansible.executor.play_iterator import IteratingStates, FailedStates
from ansible.module_utils.common.text.converters import to_text
from ansible.playbook.handler import Handler
from ansible.playbook.included_file import IncludedFile
from ansible.playbook.task import Task
from ansible.plugins.loader import action_loader
from ansible.plugins.strategy import StrategyBase
from ansible.template import Templar
from ansible.utils.display import Display
display = Display()
class StrategyModule(StrategyBase):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
# used for the lockstep to indicate to run handlers
self._in_handlers = False
def _get_next_task_lockstep(self, hosts, iterator):
'''
Returns a list of (host, task) tuples, where the task may
be a noop task to keep the iterator in lock step across
all hosts.
'''
noop_task = Task()
noop_task.action = 'meta'
noop_task.args['_raw_params'] = 'noop'
noop_task.implicit = True
noop_task.set_loader(iterator._play._loader)
state_task_per_host = {}
for host in hosts:
state, task = iterator.get_next_task_for_host(host, peek=True)
if task is not None:
state_task_per_host[host] = state, task
if not state_task_per_host:
return [(h, None) for h in hosts]
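# leave the handlers lockstep once no host remains in IteratingStates.HANDLERS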
if self._in_handlers and not any(filter(
lambda rs: rs == IteratingStates.HANDLERS,
(s.run_state for s, dummy in state_task_per_host.values()))
):
self._in_handlers = False
if self._in_handlers:
lowest_cur_handler = min(
s.cur_handlers_task for s, t in state_task_per_host.values()
if s.run_state == IteratingStates.HANDLERS
)
else:
task_uuids = [t._uuid for s, t in state_task_per_host.values()]
_loop_cnt = 0
while _loop_cnt <= 1:
try:
cur_task = iterator.all_tasks[iterator.cur_task]
except IndexError:
# pick up any tasks left after clear_host_errors
iterator.cur_task = 0
_loop_cnt += 1
else:
iterator.cur_task += 1
if cur_task._uuid in task_uuids:
break
else:
# prevent infinite loop
raise AnsibleAssertionError(
'BUG: There seems to be a mismatch between tasks in PlayIterator and HostStates.'
)
host_tasks = []
for host, (state, task) in state_task_per_host.items():
if ((self._in_handlers and lowest_cur_handler == state.cur_handlers_task) or
(not self._in_handlers and cur_task._uuid == task._uuid)):
iterator.set_state_for_host(host.name, state)
host_tasks.append((host, task))
else:
host_tasks.append((host, noop_task))
# once hosts synchronize on 'flush_handlers' lockstep enters
# '_in_handlers' phase where handlers are run instead of tasks
# until at least one host is in IteratingStates.HANDLERS
if (not self._in_handlers and cur_task.action in C._ACTION_META and
cur_task.args.get('_raw_params') == 'flush_handlers'):
self._in_handlers = True
return host_tasks
def run(self, iterator, play_context):
'''
The linear strategy is simple - get the next task and queue
it for all hosts, then wait for the queue to drain before
moving on to the next task
'''
# iterate over each task, while there is one left to run
result = self._tqm.RUN_OK
work_to_do = True
self._set_hosts_cache(iterator._play)
while work_to_do and not self._tqm._terminated:
try:
display.debug("getting the remaining hosts for this loop")
hosts_left = self.get_hosts_left(iterator)
display.debug("done getting the remaining hosts for this loop")
# queue up this task for each host in the inventory
callback_sent = False
work_to_do = False
host_tasks = self._get_next_task_lockstep(hosts_left, iterator)
# skip control
skip_rest = False
choose_step = True
# flag set if task is set to any_errors_fatal
any_errors_fatal = False
results = []
for (host, task) in host_tasks:
if not task:
continue
if self._tqm._terminated:
break
run_once = False
work_to_do = True
# check to see if this task should be skipped, due to it being a member of a
# role which has already run (and whether that role allows duplicate execution)
if not isinstance(task, Handler) and task._role:
role_obj = self._get_cached_role(task, iterator._play)
if role_obj.has_run(host) and role_obj._metadata.allow_duplicates is False:
display.debug("'%s' skipped because role has already run" % task)
continue
display.debug("getting variables")
task_vars = self._variable_manager.get_vars(play=iterator._play, host=host, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
self.add_tqm_variables(task_vars, play=iterator._play)
templar = Templar(loader=self._loader, variables=task_vars)
display.debug("done getting variables")
# test to see if the task across all hosts points to an action plugin which
# sets BYPASS_HOST_LOOP to true, or if it has run_once enabled. If so, we
# will only send this task to the first host in the list.
task_action = templar.template(task.action)
try:
action = action_loader.get(task_action, class_only=True, collection_list=task.collections)
except KeyError:
# we don't care here, because the action may simply not have a
# corresponding action plugin
action = None
if task_action in C._ACTION_META:
# for the linear strategy, we run meta tasks just once and for
# all hosts currently being iterated over rather than one host
results.extend(self._execute_meta(task, play_context, iterator, host))
if task.args.get('_raw_params', None) not in ('noop', 'reset_connection', 'end_host', 'role_complete', 'flush_handlers'):
run_once = True
if (task.any_errors_fatal or run_once) and not task.ignore_errors:
any_errors_fatal = True
else:
# handle step if needed, skip meta actions as they are used internally
if self._step and choose_step:
if self._take_step(task):
choose_step = False
else:
skip_rest = True
break
run_once = templar.template(task.run_once) or action and getattr(action, 'BYPASS_HOST_LOOP', False)
if (task.any_errors_fatal or run_once) and not task.ignore_errors:
any_errors_fatal = True
if not callback_sent:
display.debug("sending task start callback, copying the task so we can template it temporarily")
saved_name = task.name
display.debug("done copying, going to template now")
try:
task.name = to_text(templar.template(task.name, fail_on_undefined=False), nonstring='empty')
display.debug("done templating")
except Exception:
# just ignore any errors during task name templating,
# we don't care if it just shows the raw name
display.debug("templating failed for some reason")
display.debug("here goes the callback...")
if isinstance(task, Handler):
self._tqm.send_callback('v2_playbook_on_handler_task_start', task)
else:
self._tqm.send_callback('v2_playbook_on_task_start', task, is_conditional=False)
task.name = saved_name
callback_sent = True
display.debug("sending task start callback")
self._blocked_hosts[host.get_name()] = True
self._queue_task(host, task, task_vars, play_context)
del task_vars
if isinstance(task, Handler):
if run_once:
task.clear_hosts()
else:
task.remove_host(host)
# if we're bypassing the host loop, break out now
if run_once:
break
results.extend(self._process_pending_results(iterator, max_passes=max(1, int(len(self._tqm._workers) * 0.1))))
# go to next host/task group
if skip_rest:
continue
display.debug("done queuing things up, now waiting for results queue to drain")
if self._pending_results > 0:
results.extend(self._wait_on_pending_results(iterator))
self.update_active_connections(results)
included_files = IncludedFile.process_include_results(
results,
iterator=iterator,
loader=self._loader,
variable_manager=self._variable_manager
)
if len(included_files) > 0:
display.debug("we have included files to process")
display.debug("generating all_blocks data")
all_blocks = dict((host, []) for host in hosts_left)
display.debug("done generating all_blocks data")
included_tasks = []
failed_includes_hosts = set()
for included_file in included_files:
display.debug("processing included file: %s" % included_file._filename)
is_handler = False
try:
if included_file._is_role:
new_ir = self._copy_included_file(included_file)
new_blocks, handler_blocks = new_ir.get_block_list(
play=iterator._play,
variable_manager=self._variable_manager,
loader=self._loader,
)
else:
is_handler = isinstance(included_file._task, Handler)
new_blocks = self._load_included_file(included_file, iterator=iterator, is_handler=is_handler)
# let PlayIterator know about any new handlers included via include_role or
# import_role within include_role/include_tasks
iterator.handlers = [h for b in iterator._play.handlers for h in b.block]
display.debug("iterating over new_blocks loaded from include file")
for new_block in new_blocks:
if is_handler:
for task in new_block.block:
task.notified_hosts = included_file._hosts[:]
final_block = new_block
else:
task_vars = self._variable_manager.get_vars(
play=iterator._play,
task=new_block.get_first_parent_include(),
_hosts=self._hosts_cache,
_hosts_all=self._hosts_cache_all,
)
display.debug("filtering new block on tags")
final_block = new_block.filter_tagged_tasks(task_vars)
display.debug("done filtering new block on tags")
included_tasks.extend(final_block.get_tasks())
for host in hosts_left:
if host in included_file._hosts:
all_blocks[host].append(final_block)
display.debug("done iterating over new_blocks loaded from include file")
except AnsibleParserError:
raise
except AnsibleError as e:
if included_file._is_role:
# include_role does not have on_include callback so display the error
display.error(to_text(e), wrap_text=False)
for r in included_file._results:
r._result['failed'] = True
failed_includes_hosts.add(r._host)
continue
for host in failed_includes_hosts:
self._tqm._failed_hosts[host.name] = True
iterator.mark_host_failed(host)
# finally go through all of the hosts and append the
# accumulated blocks to their list of tasks
display.debug("extending task lists for all hosts with included blocks")
for host in hosts_left:
iterator.add_tasks(host, all_blocks[host])
iterator.all_tasks[iterator.cur_task:iterator.cur_task] = included_tasks
display.debug("done extending task lists")
display.debug("done processing included files")
display.debug("results queue empty")
display.debug("checking for any_errors_fatal")
failed_hosts = []
unreachable_hosts = []
for res in results:
# execute_meta() does not set 'failed' in the TaskResult
# so we skip checking it with the meta tasks and look just at the iterator
if (res.is_failed() or res._task.action in C._ACTION_META) and iterator.is_failed(res._host):
failed_hosts.append(res._host.name)
elif res.is_unreachable():
unreachable_hosts.append(res._host.name)
# if any_errors_fatal and we had an error, mark all hosts as failed
if any_errors_fatal and (len(failed_hosts) > 0 or len(unreachable_hosts) > 0):
dont_fail_states = frozenset([IteratingStates.RESCUE, IteratingStates.ALWAYS])
for host in hosts_left:
(s, dummy) = iterator.get_next_task_for_host(host, peek=True)
# the state may actually be in a child state, use the get_active_state()
# method in the iterator to figure out the true active state
s = iterator.get_active_state(s)
if s.run_state not in dont_fail_states or \
s.run_state == IteratingStates.RESCUE and s.fail_state & FailedStates.RESCUE != 0:
self._tqm._failed_hosts[host.name] = True
result |= self._tqm.RUN_FAILED_BREAK_PLAY
display.debug("done checking for any_errors_fatal")
display.debug("checking for max_fail_percentage")
if iterator._play.max_fail_percentage is not None and len(results) > 0:
percentage = iterator._play.max_fail_percentage / 100.0
if (len(self._tqm._failed_hosts) / iterator.batch_size) > percentage:
for host in hosts_left:
# don't double-mark hosts, or the iterator will potentially
# fail them out of the rescue/always states
if host.name not in failed_hosts:
self._tqm._failed_hosts[host.name] = True
iterator.mark_host_failed(host)
self._tqm.send_callback('v2_playbook_on_no_hosts_remaining')
result |= self._tqm.RUN_FAILED_BREAK_PLAY
display.debug('(%s failed / %s total )> %s max fail' % (len(self._tqm._failed_hosts), iterator.batch_size, percentage))
display.debug("done checking for max_fail_percentage")
display.debug("checking to see if all hosts have failed and the running result is not ok")
if result != self._tqm.RUN_OK and len(self._tqm._failed_hosts) >= len(hosts_left):
display.debug("^ not ok, so returning result now")
self._tqm.send_callback('v2_playbook_on_no_hosts_remaining')
return result
display.debug("done checking to see if all hosts have failed")
except (IOError, EOFError) as e:
display.debug("got IOError/EOFError in task loop: %s" % e)
# most likely an abort, return failed
return self._tqm.RUN_UNKNOWN_ERROR
# run the base class run() method, which executes the cleanup function
# and runs any outstanding handlers which have been triggered
return super(StrategyModule, self).run(iterator, play_context, result)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,533 |
Block with run_once leads to no more hosts error on earlier failure of the first host
|
### Summary
If `run_once` is used on the block level and the first host in the play has failed (before the block is reached), then only the first task of the block gets executed.
After the first task of the block is done, the play ends with the error: NO MORE HOSTS LEFT
Observations:
* run_once on single task functions as expected
* if a host other than the first host fails, the block with run_once functions as expected
* lowering the forks does not work around the issue
* the problem still happens if the block gets included (using include_tasks or include_role) after the first host has failed
This was first noticed in Ansible 2.15.0.
In previous Ansible versions 2.12.5 and 2.9.10 this functions as expected.
### Issue Type
Bug Report
### Component Name
blocks
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.3]
config file = /home/user/git/ansible-galaxy/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/user/venv_3.9/lib/python3.9/site-packages/ansible
ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections
executable location = /home/user/venv_3.9/bin/ansible
python version = 3.9.5 (default, Nov 23 2021, 15:27:38) [GCC 9.3.0] (/home/user/venv_3.9/bin/python3.9)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_NOCOWS(/home/user/git/ansible-galaxy/ansible.cfg) = True
CACHE_PLUGIN(/home/user/git/ansible-galaxy/ansible.cfg) = memory
COLOR_CHANGED(/home/user/git/ansible-galaxy/ansible.cfg) = yellow
COLOR_DEBUG(/home/user/git/ansible-galaxy/ansible.cfg) = dark gray
COLOR_DEPRECATE(/home/user/git/ansible-galaxy/ansible.cfg) = purple
COLOR_DIFF_ADD(/home/user/git/ansible-galaxy/ansible.cfg) = green
COLOR_DIFF_LINES(/home/user/git/ansible-galaxy/ansible.cfg) = cyan
COLOR_DIFF_REMOVE(/home/user/git/ansible-galaxy/ansible.cfg) = red
COLOR_ERROR(/home/user/git/ansible-galaxy/ansible.cfg) = red
COLOR_HIGHLIGHT(/home/user/git/ansible-galaxy/ansible.cfg) = white
COLOR_OK(/home/user/git/ansible-galaxy/ansible.cfg) = green
COLOR_SKIP(/home/user/git/ansible-galaxy/ansible.cfg) = cyan
COLOR_UNREACHABLE(/home/user/git/ansible-galaxy/ansible.cfg) = red
COLOR_VERBOSE(/home/user/git/ansible-galaxy/ansible.cfg) = blue
COLOR_WARN(/home/user/git/ansible-galaxy/ansible.cfg) = bright purple
CONFIG_FILE() = /home/user/git/ansible-galaxy/ansible.cfg
DEFAULT_ASK_PASS(/home/user/git/ansible-galaxy/ansible.cfg) = False
DEFAULT_BECOME(/home/user/git/ansible-galaxy/ansible.cfg) = True
DEFAULT_BECOME_ASK_PASS(/home/user/git/ansible-galaxy/ansible.cfg) = False
DEFAULT_BECOME_METHOD(/home/user/git/ansible-galaxy/ansible.cfg) = 'sudo'
DEFAULT_BECOME_USER(/home/user/git/ansible-galaxy/ansible.cfg) = 'root'
DEFAULT_FORCE_HANDLERS(/home/user/git/ansible-galaxy/ansible.cfg) = True
DEFAULT_FORKS(/home/user/git/ansible-galaxy/ansible.cfg) = 40
DEFAULT_GATHERING(/home/user/git/ansible-galaxy/ansible.cfg) = implicit
DEFAULT_LOAD_CALLBACK_PLUGINS(/home/user/git/ansible-galaxy/ansible.cfg) = False
DEFAULT_MANAGED_STR(/home/user/git/ansible-galaxy/ansible.cfg) = %Y-%m-%d %H:%M
DEFAULT_MODULE_COMPRESSION(/home/user/git/ansible-galaxy/ansible.cfg) = 'ZIP_DEFLATED'
DEFAULT_MODULE_NAME(/home/user/git/ansible-galaxy/ansible.cfg) = command
DEFAULT_POLL_INTERVAL(/home/user/git/ansible-galaxy/ansible.cfg) = 15
DEFAULT_REMOTE_PORT(/home/user/git/ansible-galaxy/ansible.cfg) = 22
DEFAULT_REMOTE_USER(/home/user/git/ansible-galaxy/ansible.cfg) = user
DEFAULT_ROLES_PATH(/home/user/git/ansible-galaxy/ansible.cfg) = ['/home/user/git/ansible-galaxy/roles', '/home/user/git/ansible-galaxy/galaxy']
DEFAULT_TIMEOUT(/home/user/git/ansible-galaxy/ansible.cfg) = 20
DEFAULT_TRANSPORT(/home/user/git/ansible-galaxy/ansible.cfg) = smart
DEPRECATION_WARNINGS(/home/user/git/ansible-galaxy/ansible.cfg) = True
EDITOR(env: EDITOR) = vim
HOST_KEY_CHECKING(/home/user/git/ansible-galaxy/ansible.cfg) = False
MAX_FILE_SIZE_FOR_DIFF(/home/user/git/ansible-galaxy/ansible.cfg) = 1048576
RETRY_FILES_ENABLED(/home/user/git/ansible-galaxy/ansible.cfg) = False
SHOW_CUSTOM_STATS(/home/user/git/ansible-galaxy/ansible.cfg) = True
SYSTEM_WARNINGS(/home/user/git/ansible-galaxy/ansible.cfg) = True
BECOME:
======
runas:
_____
become_user(/home/user/git/ansible-galaxy/ansible.cfg) = 'root'
su:
__
become_user(/home/user/git/ansible-galaxy/ansible.cfg) = 'root'
sudo:
____
become_user(/home/user/git/ansible-galaxy/ansible.cfg) = 'root'
CALLBACK:
========
default:
_______
show_custom_stats(/home/user/git/ansible-galaxy/ansible.cfg) = True
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/home/user/git/ansible-galaxy/ansible.cfg) = False
port(/home/user/git/ansible-galaxy/ansible.cfg) = 22
pty(/home/user/git/ansible-galaxy/ansible.cfg) = False
remote_user(/home/user/git/ansible-galaxy/ansible.cfg) = user
ssh_args(/home/user/git/ansible-galaxy/ansible.cfg) = -o PasswordAuthentication=no -o ControlMaster=auto -o ControlPersist=60s
timeout(/home/user/git/ansible-galaxy/ansible.cfg) = 20
ssh:
___
control_path(/home/user/git/ansible-galaxy/ansible.cfg) = %(directory)s/ansi-%%h-%%p-%%r
host_key_checking(/home/user/git/ansible-galaxy/ansible.cfg) = False
pipelining(/home/user/git/ansible-galaxy/ansible.cfg) = True
port(/home/user/git/ansible-galaxy/ansible.cfg) = 22
remote_user(/home/user/git/ansible-galaxy/ansible.cfg) = user
scp_if_ssh(/home/user/git/ansible-galaxy/ansible.cfg) = False
sftp_batch_mode(/home/user/git/ansible-galaxy/ansible.cfg) = False
ssh_args(/home/user/git/ansible-galaxy/ansible.cfg) = -o PasswordAuthentication=no -o ControlMaster=auto -o ControlPersist=60s
timeout(/home/user/git/ansible-galaxy/ansible.cfg) = 20
SHELL:
=====
sh:
__
remote_tmp(/home/user/git/ansible-galaxy/ansible.cfg) = $HOME/.ansible/tmp
world_readable_temp(/home/user/git/ansible-galaxy/ansible.cfg) = False
```
### OS / Environment
RHEL7/8/9
### Steps to Reproduce
```yaml
- name: Demo run_once block no more hosts error
become: false
gather_facts: false
hosts: all
tasks:
- name: Task pre
debug:
msg: "debug all nodes"
- name: Fail first host
fail:
when: inventory_hostname == 'host1'
- name: Block run_once
run_once: true
block:
- name: Block debug 1
debug:
msg: "debug run_once task 1"
- name: Block debug 2
debug:
msg: "debug run_once task 2"
- name: Task post
debug:
msg: "debug remaining hosts"
```
### Expected Results
All the tasks of the block should run using the `run_once` functionality on the first available host that has not failed (host2 in this case).
After the block is done without failures, the play should continue.
```console
# Ansible 2.12.5
ansible-playbook test.yml -i inventory.yml -l host1,host2 -D
PLAY [Demo run_once block no more hosts error] ********************************************************
TASK [Task pre] ***************************************************************************************
ok: [host1] => {
"msg": "debug all nodes"
}
ok: [host2] => {
"msg": "debug all nodes"
}
TASK [Fail first host] ********************************************************************************
fatal: [host1]: FAILED! => {"changed": false, "msg": "Failed as requested from task"}
skipping: [host2]
TASK [Block debug 1] **********************************************************************************
ok: [host2] => {
"msg": "debug run_once task 1"
}
TASK [Block debug 2] **********************************************************************************
ok: [host2] => {
"msg": "debug run_once task 2"
}
TASK [Task post] **************************************************************************************
ok: [host2] => {
"msg": "debug remaining hosts"
}
PLAY RECAP ********************************************************************************************
host1 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
host2 : ok=4 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
```
### Actual Results
```console
# Ansible 2.15.3
$ ansible-playbook test.yml -i inventory.yml -l host1,host2 -D
PLAY [Demo run_once block no more hosts error] ********************************************************
TASK [Task pre] ***************************************************************************************
ok: [host1] => {
"msg": "debug all nodes"
}
ok: [host2] => {
"msg": "debug all nodes"
}
TASK [Fail first host] ********************************************************************************
fatal: [host1]: FAILED! => {"changed": false, "msg": "Failed as requested from task"}
skipping: [host2]
TASK [Block debug 1] **********************************************************************************
ok: [host2] => {
"msg": "debug run_once task 1"
}
NO MORE HOSTS LEFT ************************************************************************************
PLAY RECAP ********************************************************************************************
host1 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
host2 : ok=2 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81533
|
https://github.com/ansible/ansible/pull/78680
|
c827dc0dabff8850a73de9ca65148a74899767f2
|
fe94a99aa291d129aa6432e5d50e7117d9c6aae3
| 2023-08-17T15:13:47Z |
python
| 2023-10-25T07:42:13Z |
test/integration/targets/any_errors_fatal/31543.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,533 |
Block with run_once leads to no more hosts error on earlier failure of the first host
|
### Summary
If `run_once` is used on the block level and the first host in the play has failed (before the block is reached), then only the first task of the block gets executed.
After the first task of the block is done, the play ends with the error: NO MORE HOSTS LEFT
Observations:
* run_once on single task functions as expected
* if a host other than the first host fails, the block with run_once functions as expected
* lowering the forks does not work around the issue
* the problem still happens if the block gets included (using include_tasks or include_role) after the first host has failed
This was first noticed in Ansible 2.15.0.
In previous Ansible versions 2.12.5 and 2.9.10 this functions as expected.
### Issue Type
Bug Report
### Component Name
blocks
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.3]
config file = /home/user/git/ansible-galaxy/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/user/venv_3.9/lib/python3.9/site-packages/ansible
ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections
executable location = /home/user/venv_3.9/bin/ansible
python version = 3.9.5 (default, Nov 23 2021, 15:27:38) [GCC 9.3.0] (/home/user/venv_3.9/bin/python3.9)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_NOCOWS(/home/user/git/ansible-galaxy/ansible.cfg) = True
CACHE_PLUGIN(/home/user/git/ansible-galaxy/ansible.cfg) = memory
COLOR_CHANGED(/home/user/git/ansible-galaxy/ansible.cfg) = yellow
COLOR_DEBUG(/home/user/git/ansible-galaxy/ansible.cfg) = dark gray
COLOR_DEPRECATE(/home/user/git/ansible-galaxy/ansible.cfg) = purple
COLOR_DIFF_ADD(/home/user/git/ansible-galaxy/ansible.cfg) = green
COLOR_DIFF_LINES(/home/user/git/ansible-galaxy/ansible.cfg) = cyan
COLOR_DIFF_REMOVE(/home/user/git/ansible-galaxy/ansible.cfg) = red
COLOR_ERROR(/home/user/git/ansible-galaxy/ansible.cfg) = red
COLOR_HIGHLIGHT(/home/user/git/ansible-galaxy/ansible.cfg) = white
COLOR_OK(/home/user/git/ansible-galaxy/ansible.cfg) = green
COLOR_SKIP(/home/user/git/ansible-galaxy/ansible.cfg) = cyan
COLOR_UNREACHABLE(/home/user/git/ansible-galaxy/ansible.cfg) = red
COLOR_VERBOSE(/home/user/git/ansible-galaxy/ansible.cfg) = blue
COLOR_WARN(/home/user/git/ansible-galaxy/ansible.cfg) = bright purple
CONFIG_FILE() = /home/user/git/ansible-galaxy/ansible.cfg
DEFAULT_ASK_PASS(/home/user/git/ansible-galaxy/ansible.cfg) = False
DEFAULT_BECOME(/home/user/git/ansible-galaxy/ansible.cfg) = True
DEFAULT_BECOME_ASK_PASS(/home/user/git/ansible-galaxy/ansible.cfg) = False
DEFAULT_BECOME_METHOD(/home/user/git/ansible-galaxy/ansible.cfg) = 'sudo'
DEFAULT_BECOME_USER(/home/user/git/ansible-galaxy/ansible.cfg) = 'root'
DEFAULT_FORCE_HANDLERS(/home/user/git/ansible-galaxy/ansible.cfg) = True
DEFAULT_FORKS(/home/user/git/ansible-galaxy/ansible.cfg) = 40
DEFAULT_GATHERING(/home/user/git/ansible-galaxy/ansible.cfg) = implicit
DEFAULT_LOAD_CALLBACK_PLUGINS(/home/user/git/ansible-galaxy/ansible.cfg) = False
DEFAULT_MANAGED_STR(/home/user/git/ansible-galaxy/ansible.cfg) = %Y-%m-%d %H:%M
DEFAULT_MODULE_COMPRESSION(/home/user/git/ansible-galaxy/ansible.cfg) = 'ZIP_DEFLATED'
DEFAULT_MODULE_NAME(/home/user/git/ansible-galaxy/ansible.cfg) = command
DEFAULT_POLL_INTERVAL(/home/user/git/ansible-galaxy/ansible.cfg) = 15
DEFAULT_REMOTE_PORT(/home/user/git/ansible-galaxy/ansible.cfg) = 22
DEFAULT_REMOTE_USER(/home/user/git/ansible-galaxy/ansible.cfg) = user
DEFAULT_ROLES_PATH(/home/user/git/ansible-galaxy/ansible.cfg) = ['/home/user/git/ansible-galaxy/roles', '/home/user/git/ansible-galaxy/galaxy']
DEFAULT_TIMEOUT(/home/user/git/ansible-galaxy/ansible.cfg) = 20
DEFAULT_TRANSPORT(/home/user/git/ansible-galaxy/ansible.cfg) = smart
DEPRECATION_WARNINGS(/home/user/git/ansible-galaxy/ansible.cfg) = True
EDITOR(env: EDITOR) = vim
HOST_KEY_CHECKING(/home/user/git/ansible-galaxy/ansible.cfg) = False
MAX_FILE_SIZE_FOR_DIFF(/home/user/git/ansible-galaxy/ansible.cfg) = 1048576
RETRY_FILES_ENABLED(/home/user/git/ansible-galaxy/ansible.cfg) = False
SHOW_CUSTOM_STATS(/home/user/git/ansible-galaxy/ansible.cfg) = True
SYSTEM_WARNINGS(/home/user/git/ansible-galaxy/ansible.cfg) = True
BECOME:
======
runas:
_____
become_user(/home/user/git/ansible-galaxy/ansible.cfg) = 'root'
su:
__
become_user(/home/user/git/ansible-galaxy/ansible.cfg) = 'root'
sudo:
____
become_user(/home/user/git/ansible-galaxy/ansible.cfg) = 'root'
CALLBACK:
========
default:
_______
show_custom_stats(/home/user/git/ansible-galaxy/ansible.cfg) = True
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/home/user/git/ansible-galaxy/ansible.cfg) = False
port(/home/user/git/ansible-galaxy/ansible.cfg) = 22
pty(/home/user/git/ansible-galaxy/ansible.cfg) = False
remote_user(/home/user/git/ansible-galaxy/ansible.cfg) = user
ssh_args(/home/user/git/ansible-galaxy/ansible.cfg) = -o PasswordAuthentication=no -o ControlMaster=auto -o ControlPersist=60s
timeout(/home/user/git/ansible-galaxy/ansible.cfg) = 20
ssh:
___
control_path(/home/user/git/ansible-galaxy/ansible.cfg) = %(directory)s/ansi-%%h-%%p-%%r
host_key_checking(/home/user/git/ansible-galaxy/ansible.cfg) = False
pipelining(/home/user/git/ansible-galaxy/ansible.cfg) = True
port(/home/user/git/ansible-galaxy/ansible.cfg) = 22
remote_user(/home/user/git/ansible-galaxy/ansible.cfg) = user
scp_if_ssh(/home/user/git/ansible-galaxy/ansible.cfg) = False
sftp_batch_mode(/home/user/git/ansible-galaxy/ansible.cfg) = False
ssh_args(/home/user/git/ansible-galaxy/ansible.cfg) = -o PasswordAuthentication=no -o ControlMaster=auto -o ControlPersist=60s
timeout(/home/user/git/ansible-galaxy/ansible.cfg) = 20
SHELL:
=====
sh:
__
remote_tmp(/home/user/git/ansible-galaxy/ansible.cfg) = $HOME/.ansible/tmp
world_readable_temp(/home/user/git/ansible-galaxy/ansible.cfg) = False
```
### OS / Environment
RHEL7/8/9
### Steps to Reproduce
```yaml
- name: Demo run_once block no more hosts error
become: false
gather_facts: false
hosts: all
tasks:
- name: Task pre
debug:
msg: "debug all nodes"
- name: Fail first host
fail:
when: inventory_hostname == 'host1'
- name: Block run_once
run_once: true
block:
- name: Block debug 1
debug:
msg: "debug run_once task 1"
- name: Block debug 2
debug:
msg: "debug run_once task 2"
- name: Task post
debug:
msg: "debug remaining hosts"
```
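The repro command below expects an inventory with two hosts. A minimal `inventory.yml` sketch follows; the host names come from the playbook, while the addresses and connection settings are assumptions not included in the report:
```yaml
# Hypothetical minimal inventory for the repro command; addresses are placeholders.
all:
  hosts:
    host1:
      ansible_host: 192.0.2.11
    host2:
      ansible_host: 192.0.2.12
```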
### Expected Results
All the tasks of the block should run, using the `run_once` functionality, on the first available host that has not failed (host2 in this case).
After the block is done without failures, the play should continue.
```console
# Ansible 2.12.5
$ ansible-playbook test.yml -i inventory.yml -l host1,host2 -D
PLAY [Demo run_once block no more hosts error] ********************************************************
TASK [Task pre] ***************************************************************************************
ok: [host1] => {
"msg": "debug all nodes"
}
ok: [host2] => {
"msg": "debug all nodes"
}
TASK [Fail first host] ********************************************************************************
fatal: [host1]: FAILED! => {"changed": false, "msg": "Failed as requested from task"}
skipping: [host2]
TASK [Block debug 1] **********************************************************************************
ok: [host2] => {
"msg": "debug run_once task 1"
}
TASK [Block debug 2] **********************************************************************************
ok: [host2] => {
"msg": "debug run_once task 2"
}
TASK [Task post] **************************************************************************************
ok: [host2] => {
"msg": "debug remaining hosts"
}
PLAY RECAP ********************************************************************************************
host1 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
host2 : ok=4 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
```
### Actual Results
```console
# Ansible 2.15.3
$ ansible-playbook test.yml -i inventory.yml -l host1,host2 -D
PLAY [Demo run_once block no more hosts error] ********************************************************
TASK [Task pre] ***************************************************************************************
ok: [host1] => {
"msg": "debug all nodes"
}
ok: [host2] => {
"msg": "debug all nodes"
}
TASK [Fail first host] ********************************************************************************
fatal: [host1]: FAILED! => {"changed": false, "msg": "Failed as requested from task"}
skipping: [host2]
TASK [Block debug 1] **********************************************************************************
ok: [host2] => {
"msg": "debug run_once task 1"
}
NO MORE HOSTS LEFT ************************************************************************************
PLAY RECAP ********************************************************************************************
host1 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
host2 : ok=2 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81533
|
https://github.com/ansible/ansible/pull/78680
|
c827dc0dabff8850a73de9ca65148a74899767f2
|
fe94a99aa291d129aa6432e5d50e7117d9c6aae3
| 2023-08-17T15:13:47Z |
python
| 2023-10-25T07:42:13Z |
test/integration/targets/any_errors_fatal/36308.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,533 |
Block with run_once leads to no more hosts error on failure earlier of first host
|
|
https://github.com/ansible/ansible/issues/81533
|
https://github.com/ansible/ansible/pull/78680
|
c827dc0dabff8850a73de9ca65148a74899767f2
|
fe94a99aa291d129aa6432e5d50e7117d9c6aae3
| 2023-08-17T15:13:47Z |
python
| 2023-10-25T07:42:13Z |
test/integration/targets/any_errors_fatal/73246.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,533 |
Block with run_once leads to no more hosts error on failure earlier of first host
|
|
https://github.com/ansible/ansible/issues/81533
|
https://github.com/ansible/ansible/pull/78680
|
c827dc0dabff8850a73de9ca65148a74899767f2
|
fe94a99aa291d129aa6432e5d50e7117d9c6aae3
| 2023-08-17T15:13:47Z |
python
| 2023-10-25T07:42:13Z |
test/integration/targets/any_errors_fatal/80981.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,533 |
Block with run_once leads to no more hosts error on failure earlier of first host
|
|
https://github.com/ansible/ansible/issues/81533
|
https://github.com/ansible/ansible/pull/78680
|
c827dc0dabff8850a73de9ca65148a74899767f2
|
fe94a99aa291d129aa6432e5d50e7117d9c6aae3
| 2023-08-17T15:13:47Z |
python
| 2023-10-25T07:42:13Z |
test/integration/targets/any_errors_fatal/runme.sh
|
#!/usr/bin/env bash
set -ux

# play_level.yml: the post-failure marker must not appear; grep succeeding
# (res=0) means the fatal stop did not happen, so fail the test.
ansible-playbook -i inventory "$@" play_level.yml | tee out.txt | grep 'any_errors_fatal_play_level_post_fail'
res=$?
cat out.txt
if [ "${res}" -eq 0 ] ; then
    exit 1
fi

# on_includes.yml: same never-reached check when the failing tasks are included.
ansible-playbook -i inventory "$@" on_includes.yml | tee out.txt | grep 'any_errors_fatal_this_should_never_be_reached'
res=$?
cat out.txt
if [ "${res}" -eq 0 ] ; then
    exit 1
fi

# always_block.yml: here the marker MUST appear; grep failing (res!=0) means
# the always block did not run, so fail the test.
ansible-playbook -i inventory "$@" always_block.yml | tee out.txt | grep 'any_errors_fatal_always_block_start'
res=$?
cat out.txt
if [ "${res}" -ne 0 ] ; then
    exit 1
fi

# 50897.yml: repeat the never-reached check for both include styles.
for test_name in test_include_role test_include_tasks; do
    ansible-playbook -i inventory "$@" -e test_name=$test_name 50897.yml | tee out.txt | grep 'any_errors_fatal_this_should_never_be_reached'
    res=$?
    cat out.txt
    if [ "${res}" -eq 0 ] ; then
        exit 1
    fi
done
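# Note: in an ansible-core source checkout, integration targets like this one
# are normally driven by ansible-test rather than invoked directly (a
# configured dev environment is assumed here), e.g.:
#   ansible-test integration any_errors_fatal -v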
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,533 |
Block with run_once leads to no more hosts error on failure earlier of first host
|
|
https://github.com/ansible/ansible/issues/81533
|
https://github.com/ansible/ansible/pull/78680
|
c827dc0dabff8850a73de9ca65148a74899767f2
|
fe94a99aa291d129aa6432e5d50e7117d9c6aae3
| 2023-08-17T15:13:47Z |
python
| 2023-10-25T07:42:13Z |
test/integration/targets/handlers/force_handlers_blocks_81533-1.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,533 |
Block with run_once leads to no more hosts error on failure earlier of first host
|
|
https://github.com/ansible/ansible/issues/81533
|
https://github.com/ansible/ansible/pull/78680
|
c827dc0dabff8850a73de9ca65148a74899767f2
|
fe94a99aa291d129aa6432e5d50e7117d9c6aae3
| 2023-08-17T15:13:47Z |
python
| 2023-10-25T07:42:13Z |
test/integration/targets/handlers/force_handlers_blocks_81533-2.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,533 |
Block with run_once leads to no more hosts error on earlier failure of the first host
|
### Summary
If `run_once` is used on the block level and the first host in the play has failed (before the block is reached), then only the first task of the block gets executed.
After the first task of the block is done, the play ends with the error: NO MORE HOSTS LEFT
Observations:
* run_once on a single task functions as expected
* if a host other than the first one fails, the block with run_once functions as expected
* lowering the forks does not work around the issue
* the problem still happens if the block gets included (using include_tasks or include_role) after the first host has failed; a sketch follows below
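A sketch of that include variant (the file name `run_once_block.yml` is hypothetical; it would contain the `Block run_once` block shown under Steps to Reproduce below):
```yaml
- name: Fail first host
  fail:
  when: inventory_hostname == 'host1'

- name: Include the run_once block after the failure
  include_tasks: run_once_block.yml
```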
This was first noticed in Ansible 2.15.0.
In previous Ansible versions 2.12.5 and 2.9.10 this functions as expected.
### Issue Type
Bug Report
### Component Name
blocks
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.3]
config file = /home/user/git/ansible-galaxy/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/user/venv_3.9/lib/python3.9/site-packages/ansible
ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections
executable location = /home/user/venv_3.9/bin/ansible
python version = 3.9.5 (default, Nov 23 2021, 15:27:38) [GCC 9.3.0] (/home/user/venv_3.9/bin/python3.9)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_NOCOWS(/home/user/git/ansible-galaxy/ansible.cfg) = True
CACHE_PLUGIN(/home/user/git/ansible-galaxy/ansible.cfg) = memory
COLOR_CHANGED(/home/user/git/ansible-galaxy/ansible.cfg) = yellow
COLOR_DEBUG(/home/user/git/ansible-galaxy/ansible.cfg) = dark gray
COLOR_DEPRECATE(/home/user/git/ansible-galaxy/ansible.cfg) = purple
COLOR_DIFF_ADD(/home/user/git/ansible-galaxy/ansible.cfg) = green
COLOR_DIFF_LINES(/home/user/git/ansible-galaxy/ansible.cfg) = cyan
COLOR_DIFF_REMOVE(/home/user/git/ansible-galaxy/ansible.cfg) = red
COLOR_ERROR(/home/user/git/ansible-galaxy/ansible.cfg) = red
COLOR_HIGHLIGHT(/home/user/git/ansible-galaxy/ansible.cfg) = white
COLOR_OK(/home/user/git/ansible-galaxy/ansible.cfg) = green
COLOR_SKIP(/home/user/git/ansible-galaxy/ansible.cfg) = cyan
COLOR_UNREACHABLE(/home/user/git/ansible-galaxy/ansible.cfg) = red
COLOR_VERBOSE(/home/user/git/ansible-galaxy/ansible.cfg) = blue
COLOR_WARN(/home/user/git/ansible-galaxy/ansible.cfg) = bright purple
CONFIG_FILE() = /home/user/git/ansible-galaxy/ansible.cfg
DEFAULT_ASK_PASS(/home/user/git/ansible-galaxy/ansible.cfg) = False
DEFAULT_BECOME(/home/user/git/ansible-galaxy/ansible.cfg) = True
DEFAULT_BECOME_ASK_PASS(/home/user/git/ansible-galaxy/ansible.cfg) = False
DEFAULT_BECOME_METHOD(/home/user/git/ansible-galaxy/ansible.cfg) = 'sudo'
DEFAULT_BECOME_USER(/home/user/git/ansible-galaxy/ansible.cfg) = 'root'
DEFAULT_FORCE_HANDLERS(/home/user/git/ansible-galaxy/ansible.cfg) = True
DEFAULT_FORKS(/home/user/git/ansible-galaxy/ansible.cfg) = 40
DEFAULT_GATHERING(/home/user/git/ansible-galaxy/ansible.cfg) = implicit
DEFAULT_LOAD_CALLBACK_PLUGINS(/home/user/git/ansible-galaxy/ansible.cfg) = False
DEFAULT_MANAGED_STR(/home/user/git/ansible-galaxy/ansible.cfg) = %Y-%m-%d %H:%M
DEFAULT_MODULE_COMPRESSION(/home/user/git/ansible-galaxy/ansible.cfg) = 'ZIP_DEFLATED'
DEFAULT_MODULE_NAME(/home/user/git/ansible-galaxy/ansible.cfg) = command
DEFAULT_POLL_INTERVAL(/home/user/git/ansible-galaxy/ansible.cfg) = 15
DEFAULT_REMOTE_PORT(/home/user/git/ansible-galaxy/ansible.cfg) = 22
DEFAULT_REMOTE_USER(/home/user/git/ansible-galaxy/ansible.cfg) = user
DEFAULT_ROLES_PATH(/home/user/git/ansible-galaxy/ansible.cfg) = ['/home/user/git/ansible-galaxy/roles', '/home/user/git/ansible-galaxy/galaxy']
DEFAULT_TIMEOUT(/home/user/git/ansible-galaxy/ansible.cfg) = 20
DEFAULT_TRANSPORT(/home/user/git/ansible-galaxy/ansible.cfg) = smart
DEPRECATION_WARNINGS(/home/user/git/ansible-galaxy/ansible.cfg) = True
EDITOR(env: EDITOR) = vim
HOST_KEY_CHECKING(/home/user/git/ansible-galaxy/ansible.cfg) = False
MAX_FILE_SIZE_FOR_DIFF(/home/user/git/ansible-galaxy/ansible.cfg) = 1048576
RETRY_FILES_ENABLED(/home/user/git/ansible-galaxy/ansible.cfg) = False
SHOW_CUSTOM_STATS(/home/user/git/ansible-galaxy/ansible.cfg) = True
SYSTEM_WARNINGS(/home/user/git/ansible-galaxy/ansible.cfg) = True
BECOME:
======
runas:
_____
become_user(/home/user/git/ansible-galaxy/ansible.cfg) = 'root'
su:
__
become_user(/home/user/git/ansible-galaxy/ansible.cfg) = 'root'
sudo:
____
become_user(/home/user/git/ansible-galaxy/ansible.cfg) = 'root'
CALLBACK:
========
default:
_______
show_custom_stats(/home/user/git/ansible-galaxy/ansible.cfg) = True
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/home/user/git/ansible-galaxy/ansible.cfg) = False
port(/home/user/git/ansible-galaxy/ansible.cfg) = 22
pty(/home/user/git/ansible-galaxy/ansible.cfg) = False
remote_user(/home/user/git/ansible-galaxy/ansible.cfg) = user
ssh_args(/home/user/git/ansible-galaxy/ansible.cfg) = -o PasswordAuthentication=no -o ControlMaster=auto -o ControlPersist=60s
timeout(/home/user/git/ansible-galaxy/ansible.cfg) = 20
ssh:
___
control_path(/home/user/git/ansible-galaxy/ansible.cfg) = %(directory)s/ansi-%%h-%%p-%%r
host_key_checking(/home/user/git/ansible-galaxy/ansible.cfg) = False
pipelining(/home/user/git/ansible-galaxy/ansible.cfg) = True
port(/home/user/git/ansible-galaxy/ansible.cfg) = 22
remote_user(/home/user/git/ansible-galaxy/ansible.cfg) = user
scp_if_ssh(/home/user/git/ansible-galaxy/ansible.cfg) = False
sftp_batch_mode(/home/user/git/ansible-galaxy/ansible.cfg) = False
ssh_args(/home/user/git/ansible-galaxy/ansible.cfg) = -o PasswordAuthentication=no -o ControlMaster=auto -o ControlPersist=60s
timeout(/home/user/git/ansible-galaxy/ansible.cfg) = 20
SHELL:
=====
sh:
__
remote_tmp(/home/user/git/ansible-galaxy/ansible.cfg) = $HOME/.ansible/tmp
world_readable_temp(/home/user/git/ansible-galaxy/ansible.cfg) = False
```
### OS / Environment
RHEL7/8/9
### Steps to Reproduce
```yaml
- name: Demo run_once block no more hosts error
  become: false
  gather_facts: false
  hosts: all
  tasks:
    - name: Task pre
      debug:
        msg: "debug all nodes"
    - name: Fail first host
      fail:
      when: inventory_hostname == 'host1'
    - name: Block run_once
      run_once: true
      block:
        - name: Block debug 1
          debug:
            msg: "debug run_once task 1"
        - name: Block debug 2
          debug:
            msg: "debug run_once task 2"
    - name: Task post
      debug:
        msg: "debug remaining hosts"
```
### Expected Results
All the tasks of the block should run, using the `run_once` functionality, on the first available host that has not failed (host2 in this case).
After the block is done without failures, the play should continue.
```console
# Ansible 2.12.5
ansible-playbook test.yml -i inventory.yml -l host1,host2 -D
PLAY [Demo run_once block no more hosts error] ********************************************************
TASK [Task pre] ***************************************************************************************
ok: [host1] => {
"msg": "debug all nodes"
}
ok: [host2] => {
"msg": "debug all nodes"
}
TASK [Fail first host] ********************************************************************************
fatal: [host1]: FAILED! => {"changed": false, "msg": "Failed as requested from task"}
skipping: [host2]
TASK [Block debug 1] **********************************************************************************
ok: [host2] => {
"msg": "debug run_once task 1"
}
TASK [Block debug 2] **********************************************************************************
ok: [host2] => {
"msg": "debug run_once task 2"
}
TASK [Task post] **************************************************************************************
ok: [host2] => {
"msg": "debug remaining hosts"
}
PLAY RECAP ********************************************************************************************
host1 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
host2 : ok=4 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
```
### Actual Results
```console
# Ansible 2.15.3
$ ansible-playbook test.yml -i inventory.yml -l host1,host2 -D
PLAY [Demo run_once block no more hosts error] ********************************************************
TASK [Task pre] ***************************************************************************************
ok: [host1] => {
"msg": "debug all nodes"
}
ok: [host2] => {
"msg": "debug all nodes"
}
TASK [Fail first host] ********************************************************************************
fatal: [host1]: FAILED! => {"changed": false, "msg": "Failed as requested from task"}
skipping: [host2]
TASK [Block debug 1] **********************************************************************************
ok: [host2] => {
"msg": "debug run_once task 1"
}
NO MORE HOSTS LEFT ************************************************************************************
PLAY RECAP ********************************************************************************************
host1 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
host2 : ok=2 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81533
|
https://github.com/ansible/ansible/pull/78680
|
c827dc0dabff8850a73de9ca65148a74899767f2
|
fe94a99aa291d129aa6432e5d50e7117d9c6aae3
| 2023-08-17T15:13:47Z |
python
| 2023-10-25T07:42:13Z |
test/integration/targets/handlers/runme.sh
|
#!/usr/bin/env bash
set -eux
export ANSIBLE_FORCE_HANDLERS
ANSIBLE_FORCE_HANDLERS=false
# simple handler test
ansible-playbook test_handlers.yml -i inventory.handlers -v "$@" --tags scenario1
# simple from_handlers test
ansible-playbook from_handlers.yml -i inventory.handlers -v "$@" --tags scenario1
ansible-playbook test_listening_handlers.yml -i inventory.handlers -v "$@"
[ "$(ansible-playbook test_handlers.yml -i inventory.handlers -v "$@" --tags scenario2 -l A \
| grep -E -o 'RUNNING HANDLER \[test_handlers : .*]')" = "RUNNING HANDLER [test_handlers : test handler]" ]
# Test forcing handlers using the linear and free strategy
for strategy in linear free; do
    export ANSIBLE_STRATEGY=$strategy

    # Not forcing, should only run on successful host
    [ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal \
        | grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_B" ]

    # Forcing from command line
    [ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers \
        | grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]

    # Forcing from command line, should only run later tasks on unfailed hosts
    [ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers \
        | grep -E -o CALLED_TASK_. | sort | uniq | xargs)" = "CALLED_TASK_B CALLED_TASK_D CALLED_TASK_E" ]

    # Forcing from command line, should call handlers even if all hosts fail
    [ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers -e fail_all=yes \
        | grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]

    # Forcing from ansible.cfg
    [ "$(ANSIBLE_FORCE_HANDLERS=true ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal \
        | grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]

    # Forcing true in play
    [ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags force_true_in_play \
        | grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]

    # Forcing false in play, which overrides command line
    [ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags force_false_in_play --force-handlers \
        | grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_B" ]

    # https://github.com/ansible/ansible/pull/80898
    [ "$(ansible-playbook 80880.yml -i inventory.handlers -vv "$@" 2>&1)" ]

    unset ANSIBLE_STRATEGY
done
[ "$(ansible-playbook test_handlers_include.yml -i ../../inventory -v "$@" --tags playbook_include_handlers \
| grep -E -o 'RUNNING HANDLER \[.*]')" = "RUNNING HANDLER [test handler]" ]
[ "$(ansible-playbook test_handlers_include.yml -i ../../inventory -v "$@" --tags role_include_handlers \
| grep -E -o 'RUNNING HANDLER \[test_handlers_include : .*]')" = "RUNNING HANDLER [test_handlers_include : test handler]" ]
[ "$(ansible-playbook test_handlers_include_role.yml -i ../../inventory -v "$@" \
| grep -E -o 'RUNNING HANDLER \[test_handlers_include_role : .*]')" = "RUNNING HANDLER [test_handlers_include_role : test handler]" ]
# Notify handler listen
ansible-playbook test_handlers_listen.yml -i inventory.handlers -v "$@"
# Notify inexistent handlers results in error
set +e
result="$(ansible-playbook test_handlers_inexistent_notify.yml -i inventory.handlers "$@" 2>&1)"
set -e
grep -q "ERROR! The requested handler 'notify_inexistent_handler' was not found in either the main handlers list nor in the listening handlers list" <<< "$result"
# Notify inexistent handlers without errors when ANSIBLE_ERROR_ON_MISSING_HANDLER=false
ANSIBLE_ERROR_ON_MISSING_HANDLER=false ansible-playbook test_handlers_inexistent_notify.yml -i inventory.handlers -v "$@"
ANSIBLE_ERROR_ON_MISSING_HANDLER=false ansible-playbook test_templating_in_handlers.yml -v "$@"
# https://github.com/ansible/ansible/issues/36649
output_dir=/tmp
set +e
result="$(ansible-playbook test_handlers_any_errors_fatal.yml -e output_dir=$output_dir -i inventory.handlers -v "$@" 2>&1)"
set -e
[ ! -f $output_dir/should_not_exist_B ] || (rm -f $output_dir/should_not_exist_B && exit 1)
# https://github.com/ansible/ansible/issues/47287
[ "$(ansible-playbook test_handlers_including_task.yml -i ../../inventory -v "$@" | grep -E -o 'failed=[0-9]+')" = "failed=0" ]
# https://github.com/ansible/ansible/issues/71222
ansible-playbook test_role_handlers_including_tasks.yml -i ../../inventory -v "$@"
# https://github.com/ansible/ansible/issues/27237
set +e
result="$(ansible-playbook test_handlers_template_run_once.yml -i inventory.handlers "$@" 2>&1)"
set -e
grep -q "handler A" <<< "$result"
grep -q "handler B" <<< "$result"
# Test an undefined variable in another handler name isn't a failure
ansible-playbook 58841.yml "$@" --tags lazy_evaluation 2>&1 | tee out.txt ; cat out.txt
grep out.txt -e "\[WARNING\]: Handler 'handler name with {{ test_var }}' is unusable"
[ "$(grep out.txt -ce 'handler ran')" = "1" ]
[ "$(grep out.txt -ce 'handler with var ran')" = "0" ]
# Test templating a handler name with a defined variable
ansible-playbook 58841.yml "$@" --tags evaluation_time -e test_var=myvar | tee out.txt ; cat out.txt
[ "$(grep out.txt -ce 'handler ran')" = "0" ]
[ "$(grep out.txt -ce 'handler with var ran')" = "1" ]
# Test the handler is not found when the variable is undefined
ansible-playbook 58841.yml "$@" --tags evaluation_time 2>&1 | tee out.txt ; cat out.txt
grep out.txt -e "ERROR! The requested handler 'handler name with myvar' was not found"
grep out.txt -e "\[WARNING\]: Handler 'handler name with {{ test_var }}' is unusable"
[ "$(grep out.txt -ce 'handler ran')" = "0" ]
[ "$(grep out.txt -ce 'handler with var ran')" = "0" ]
# Test include_role and import_role cannot be used as handlers
ansible-playbook test_role_as_handler.yml "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! Using 'include_role' as a handler is not supported."
# Test notifying a handler from within include_tasks does not work anymore
ansible-playbook test_notify_included.yml "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'I was included')" = "1" ]
grep out.txt -e "ERROR! The requested handler 'handler_from_include' was not found in either the main handlers list nor in the listening handlers list"
ansible-playbook test_handlers_meta.yml -i inventory.handlers -vv "$@" | tee out.txt
[ "$(grep out.txt -ce 'RUNNING HANDLER \[noop_handler\]')" = "1" ]
[ "$(grep out.txt -ce 'META: noop')" = "1" ]
# https://github.com/ansible/ansible/issues/46447
set +e
test "$(ansible-playbook 46447.yml -i inventory.handlers -vv "$@" 2>&1 | grep -c 'SHOULD NOT GET HERE')"
set -e
# https://github.com/ansible/ansible/issues/52561
ansible-playbook 52561.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler1 ran')" = "1" ]
# Test flush_handlers meta task does not imply any_errors_fatal
ansible-playbook 54991.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran')" = "4" ]
ansible-playbook order.yml -i inventory.handlers "$@" 2>&1
set +e
ansible-playbook order.yml --force-handlers -e test_force_handlers=true -i inventory.handlers "$@" 2>&1
set -e
ansible-playbook include_handlers_fail_force.yml --force-handlers -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'included handler ran')" = "1" ]
ansible-playbook test_flush_handlers_as_handler.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! flush_handlers cannot be used as a handler"
ansible-playbook test_skip_flush.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran')" = "0" ]
ansible-playbook test_flush_in_rescue_always.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran in rescue')" = "1" ]
[ "$(grep out.txt -ce 'handler ran in always')" = "2" ]
[ "$(grep out.txt -ce 'lockstep works')" = "2" ]
ansible-playbook test_handlers_infinite_loop.yml -i inventory.handlers "$@" 2>&1
ansible-playbook test_flush_handlers_rescue_always.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'rescue ran')" = "1" ]
[ "$(grep out.txt -ce 'always ran')" = "2" ]
[ "$(grep out.txt -ce 'should run for both hosts')" = "2" ]
ansible-playbook test_fqcn_meta_flush_handlers.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
grep out.txt -e "handler ran"
grep out.txt -e "after flush"
ansible-playbook 79776.yml -i inventory.handlers "$@"
ansible-playbook test_block_as_handler.yml "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! Using a block as a handler is not supported."
ansible-playbook test_block_as_handler-include.yml "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! Using a block as a handler is not supported."
ansible-playbook test_block_as_handler-import.yml "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! Using a block as a handler is not supported."
ansible-playbook test_include_role_handler_once.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran')" = "1" ]
ansible-playbook test_listen_role_dedup.yml "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'a handler from a role')" = "1" ]
ansible localhost -m include_role -a "name=r1-dep_chain-vars" "$@"
ansible-playbook test_include_tasks_in_include_role.yml "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran')" = "1" ]
ansible-playbook test_run_once.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran once')" = "1" ]
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,981 |
any_errors_fatal doesn't work when using roles with block / rescue
|
### Summary
I have a problem with the `any_errors_fatal: True` keyword. In a plain playbook it seems to work by default, but it doesn't work when I'm using roles with block / rescue. I run some assert statements, and after one of the assert statements fails, the playbook still continues with the other hosts in the same play. I expect it to stop after experiencing any fatal error (any failed assert statement).
Even though this Ansible version is not the newest, the same behaviour occurs on Ansible 2.15.
### Issue Type
Bug Report
### Component Name
any_errors_fatal with rescue / block
### Ansible Version
```console
ansible [core 2.12.10]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.8.10 (default, Mar 13 2023, 10:26:41) [GCC 9.4.0]
jinja version = 2.10.1
libyaml = True
```
### Configuration
```console
root@PCEDVKAIL3:/etc/ansible# ansible-config dump --only-changed -t all
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
BECOME:
======
CACHE:
=====
CALLBACK:
========
CLICONF:
=======
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/etc/ansible/ansible.cfg) = False
ssh:
___
host_key_checking(/etc/ansible/ansible.cfg) = False
HTTPAPI:
=======
INVENTORY:
=========
LOOKUP:
======
NETCONF:
=======
SHELL:
=====
VARS:
====
root@PCEDVKAIL3:/etc/ansible#
```
### OS / Environment
Centos 7
### Steps to Reproduce
**Roles Playbook without block/rescue:**
```
- hosts: TEONET01A,TEONET01B
  gather_facts: false
  any_errors_fatal: True
  roles:
    - test
```
Roles (block commented):
```
---
# - name: Run test pbook
#   block:
- assert:
    that:
      - ansible_hostname == 'TEONET01A'
- debug:
    msg: In role
# rescue:
#   - debug:
#       msg: Rescue
```
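For reference, the inventory is not shown in the report. Judging by the play's `hosts:` line and the IPs in the output below (10.8.250.46 passes the assert, so it should be TEONET01A), a minimal sketch might be:
```
all:
  hosts:
    TEONET01A:
      ansible_host: 10.8.250.46
    TEONET01B:
      ansible_host: 10.8.250.47
```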
Runs as expected and the play exits after the fatal error in the assert:
```
root@PCEDVKAIL3:/etc/ansible# ansible-playbook playbook.yml
PLAY [TEONET01A,TEONET01B] ***************************************************************************************************************************************************************************************************************************************************************
TASK [test : assert] *********************************************************************************************************************************************************************************************************************************************************************
[WARNING]: ansible-pylibssh not installed, falling back to paramiko
ok: [10.8.250.46] => {
"changed": false,
"msg": "All assertions passed"
}
[WARNING]: ansible-pylibssh not installed, falling back to paramiko
fatal: [10.8.250.47]: FAILED! => {
"assertion": "ansible_hostname == 'TEONET01A'",
"changed": false,
"evaluated_to": false,
"msg": "Assertion failed"
}
NO MORE HOSTS LEFT ***********************************************************************************************************************************************************************************************************************************************************************
PLAY RECAP *******************************************************************************************************************************************************************************************************************************************************************************
10.8.250.46 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
10.8.250.47 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
**Now how it works with block/rescue - this looks like a bug**
The same playbook:
```
- hosts: TEONET01A,TEONET01B
  gather_facts: false
  any_errors_fatal: True
  roles:
    - test
```
Roles (block uncommented):
```
---
- name: Run test pbook
  block:
    - assert:
        that:
          - ansible_hostname == 'TEONET01A'
    - debug:
        msg: In role
  rescue:
    - debug:
        msg: Rescue
```
Runs not as expected: the play continues even though one host's assert fails:
```
root@PCEDVKAIL3:/etc/ansible# ansible-playbook playbook.yml
PLAY [TEONET01A,TEONET01B] ***************************************************************************************************************************************************************************************************************************************************************
TASK [test : assert] *********************************************************************************************************************************************************************************************************************************************************************
[WARNING]: ansible-pylibssh not installed, falling back to paramiko
ok: [10.8.250.46] => {
"changed": false,
"msg": "All assertions passed"
}
[WARNING]: ansible-pylibssh not installed, falling back to paramiko
fatal: [10.8.250.47]: FAILED! => {
"assertion": "ansible_hostname == 'TEONET01A'",
"changed": false,
"evaluated_to": false,
"msg": "Assertion failed"
}
TASK [test : debug] **********************************************************************************************************************************************************************************************************************************************************************
ok: [10.8.250.46] => {
"msg": "In role"
}
TASK [test : debug] **********************************************************************************************************************************************************************************************************************************************************************
ok: [10.8.250.47] => {
"msg": "Rescue"
}
PLAY RECAP *******************************************************************************************************************************************************************************************************************************************************************************
10.8.250.46 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
10.8.250.47 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=1 ignored=0
```
### Expected Results
The playbook should stop when any of the hosts experiences an error.
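Not a fix, but a common sketch of a workaround given the role structure above: re-raise at the end of the `rescue` so the failure stays visible to `any_errors_fatal` (`ansible_failed_result` is the standard variable exposed inside a rescue):
```
- name: Run test pbook
  block:
    - assert:
        that:
          - ansible_hostname == 'TEONET01A'
    - debug:
        msg: In role
  rescue:
    - debug:
        msg: Rescue
    - name: Re-raise so the host is still marked as failed
      fail:
        msg: "{{ ansible_failed_result.msg | default('task in block failed') }}"
```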
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80981
|
https://github.com/ansible/ansible/pull/78680
|
c827dc0dabff8850a73de9ca65148a74899767f2
|
fe94a99aa291d129aa6432e5d50e7117d9c6aae3
| 2023-06-06T20:32:10Z |
python
| 2023-10-25T07:42:13Z |
changelogs/fragments/any_errors_fatal-fixes.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,981 |
any_errors_fatal doesn't work when using roles with block / rescue
|
### Summary
I have a problem with the `any_errors_fatal: True` keyword. In a plain playbook it seems to work by default, but it doesn't work when I'm using roles with block / rescue. I run some assert statements, and after one of the assert statements fails, the playbook still continues with the other hosts in the same play. I expect it to stop after experiencing any fatal error (any failed assert statement).
Even though this Ansible version is not the newest, the same behaviour occurs on Ansible 2.15.
### Issue Type
Bug Report
### Component Name
any_errors_fatal with rescue / block
### Ansible Version
```console
ansible [core 2.12.10]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.8.10 (default, Mar 13 2023, 10:26:41) [GCC 9.4.0]
jinja version = 2.10.1
libyaml = True
```
### Configuration
```console
root@PCEDVKAIL3:/etc/ansible# ansible-config dump --only-changed -t all
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
BECOME:
======
CACHE:
=====
CALLBACK:
========
CLICONF:
=======
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/etc/ansible/ansible.cfg) = False
ssh:
___
host_key_checking(/etc/ansible/ansible.cfg) = False
HTTPAPI:
=======
INVENTORY:
=========
LOOKUP:
======
NETCONF:
=======
SHELL:
=====
VARS:
====
root@PCEDVKAIL3:/etc/ansible#
```
### OS / Environment
Centos 7
### Steps to Reproduce
**Roles Playbook without block/rescue:**
```
- hosts: TEONET01A,TEONET01B
  gather_facts: false
  any_errors_fatal: True
  roles:
    - test
```
Roles (block commented):
```
---
# - name: Run test pbook
#   block:
- assert:
    that:
      - ansible_hostname == 'TEONET01A'
- debug:
    msg: In role
# rescue:
#   - debug:
#       msg: Rescue
```
Runs as expected and the play exits after the fatal error in the assert:
```
root@PCEDVKAIL3:/etc/ansible# ansible-playbook playbook.yml
PLAY [TEONET01A,TEONET01B] ***************************************************************************************************************************************************************************************************************************************************************
TASK [test : assert] *********************************************************************************************************************************************************************************************************************************************************************
[WARNING]: ansible-pylibssh not installed, falling back to paramiko
ok: [10.8.250.46] => {
"changed": false,
"msg": "All assertions passed"
}
[WARNING]: ansible-pylibssh not installed, falling back to paramiko
fatal: [10.8.250.47]: FAILED! => {
"assertion": "ansible_hostname == 'TEONET01A'",
"changed": false,
"evaluated_to": false,
"msg": "Assertion failed"
}
NO MORE HOSTS LEFT ***********************************************************************************************************************************************************************************************************************************************************************
PLAY RECAP *******************************************************************************************************************************************************************************************************************************************************************************
10.8.250.46 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
10.8.250.47 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
**Now how it works with block/rescue - this looks like a bug**
The same playbook:
```
- hosts: TEONET01A,TEONET01B
  gather_facts: false
  any_errors_fatal: True
  roles:
    - test
```
Roles (block uncommented):
```
---
- name: Run test pbook
  block:
    - assert:
        that:
          - ansible_hostname == 'TEONET01A'
    - debug:
        msg: In role
  rescue:
    - debug:
        msg: Rescue
```
Runs not as expected: the play continues even though one host's assert fails:
```
root@PCEDVKAIL3:/etc/ansible# ansible-playbook playbook.yml
PLAY [TEONET01A,TEONET01B] ***************************************************************************************************************************************************************************************************************************************************************
TASK [test : assert] *********************************************************************************************************************************************************************************************************************************************************************
[WARNING]: ansible-pylibssh not installed, falling back to paramiko
ok: [10.8.250.46] => {
"changed": false,
"msg": "All assertions passed"
}
[WARNING]: ansible-pylibssh not installed, falling back to paramiko
fatal: [10.8.250.47]: FAILED! => {
"assertion": "ansible_hostname == 'TEONET01A'",
"changed": false,
"evaluated_to": false,
"msg": "Assertion failed"
}
TASK [test : debug] **********************************************************************************************************************************************************************************************************************************************************************
ok: [10.8.250.46] => {
"msg": "In role"
}
TASK [test : debug] **********************************************************************************************************************************************************************************************************************************************************************
ok: [10.8.250.47] => {
"msg": "Rescue"
}
PLAY RECAP *******************************************************************************************************************************************************************************************************************************************************************************
10.8.250.46 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
10.8.250.47 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=1 ignored=0
```
### Expected Results
The playbook should stop when any of the hosts experiences an error.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80981
|
https://github.com/ansible/ansible/pull/78680
|
c827dc0dabff8850a73de9ca65148a74899767f2
|
fe94a99aa291d129aa6432e5d50e7117d9c6aae3
| 2023-06-06T20:32:10Z |
python
| 2023-10-25T07:42:13Z |
lib/ansible/plugins/strategy/linear.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import annotations
DOCUMENTATION = '''
    name: linear
    short_description: Executes tasks in a linear fashion
    description:
        - Task execution is in lockstep per host batch as defined by C(serial) (default all).
          Up to the fork limit of hosts will execute each task at the same time and then
          the next series of hosts until the batch is done, before going on to the next task.
    version_added: "2.0"
    notes:
        - This was the default Ansible behaviour before 'strategy plugins' were introduced in 2.0.
    author: Ansible Core Team
'''
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleAssertionError, AnsibleParserError
from ansible.executor.play_iterator import IteratingStates, FailedStates
from ansible.module_utils.common.text.converters import to_text
from ansible.playbook.handler import Handler
from ansible.playbook.included_file import IncludedFile
from ansible.playbook.task import Task
from ansible.plugins.loader import action_loader
from ansible.plugins.strategy import StrategyBase
from ansible.template import Templar
from ansible.utils.display import Display
display = Display()
class StrategyModule(StrategyBase):

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

        # used for the lockstep to indicate to run handlers
        self._in_handlers = False

    def _get_next_task_lockstep(self, hosts, iterator):
        '''
        Returns a list of (host, task) tuples, where the task may
        be a noop task to keep the iterator in lock step across
        all hosts.
        '''
        noop_task = Task()
        noop_task.action = 'meta'
        noop_task.args['_raw_params'] = 'noop'
        noop_task.implicit = True
        noop_task.set_loader(iterator._play._loader)

        state_task_per_host = {}
        for host in hosts:
            state, task = iterator.get_next_task_for_host(host, peek=True)
            if task is not None:
                state_task_per_host[host] = state, task

        if not state_task_per_host:
            return [(h, None) for h in hosts]

        if self._in_handlers and not any(filter(
                lambda rs: rs == IteratingStates.HANDLERS,
                (s.run_state for s, dummy in state_task_per_host.values()))
        ):
            self._in_handlers = False

        if self._in_handlers:
            lowest_cur_handler = min(
                s.cur_handlers_task for s, t in state_task_per_host.values()
                if s.run_state == IteratingStates.HANDLERS
            )
        else:
            task_uuids = [t._uuid for s, t in state_task_per_host.values()]
            _loop_cnt = 0
            while _loop_cnt <= 1:
                try:
                    cur_task = iterator.all_tasks[iterator.cur_task]
                except IndexError:
                    # pick up any tasks left after clear_host_errors
                    iterator.cur_task = 0
                    _loop_cnt += 1
                else:
                    iterator.cur_task += 1
                    if cur_task._uuid in task_uuids:
                        break
            else:
                # prevent infinite loop
                raise AnsibleAssertionError(
                    'BUG: There seems to be a mismatch between tasks in PlayIterator and HostStates.'
                )

        host_tasks = []
        for host, (state, task) in state_task_per_host.items():
            if ((self._in_handlers and lowest_cur_handler == state.cur_handlers_task) or
                    (not self._in_handlers and cur_task._uuid == task._uuid)):
                iterator.set_state_for_host(host.name, state)
                host_tasks.append((host, task))
            else:
                host_tasks.append((host, noop_task))

        # once hosts synchronize on 'flush_handlers' lockstep enters
        # '_in_handlers' phase where handlers are run instead of tasks
        # until at least one host is in IteratingStates.HANDLERS
        if (not self._in_handlers and cur_task.action in C._ACTION_META and
                cur_task.args.get('_raw_params') == 'flush_handlers'):
            self._in_handlers = True

        return host_tasks

    def run(self, iterator, play_context):
        '''
        The linear strategy is simple - get the next task and queue
        it for all hosts, then wait for the queue to drain before
        moving on to the next task
        '''

        # iterate over each task, while there is one left to run
        result = self._tqm.RUN_OK
        work_to_do = True

        self._set_hosts_cache(iterator._play)

        while work_to_do and not self._tqm._terminated:

            try:
                display.debug("getting the remaining hosts for this loop")
                hosts_left = self.get_hosts_left(iterator)
                display.debug("done getting the remaining hosts for this loop")

                # queue up this task for each host in the inventory
                callback_sent = False
                work_to_do = False

                host_tasks = self._get_next_task_lockstep(hosts_left, iterator)

                # skip control
                skip_rest = False
                choose_step = True

                # flag set if task is set to any_errors_fatal
                any_errors_fatal = False

                results = []
                for (host, task) in host_tasks:
                    if not task:
                        continue

                    if self._tqm._terminated:
                        break

                    run_once = False
                    work_to_do = True

                    # check to see if this task should be skipped, due to it being a member of a
                    # role which has already run (and whether that role allows duplicate execution)
                    if not isinstance(task, Handler) and task._role:
                        role_obj = self._get_cached_role(task, iterator._play)
                        if role_obj.has_run(host) and role_obj._metadata.allow_duplicates is False:
                            display.debug("'%s' skipped because role has already run" % task)
                            continue

                    display.debug("getting variables")
                    task_vars = self._variable_manager.get_vars(play=iterator._play, host=host, task=task,
                                                                _hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
                    self.add_tqm_variables(task_vars, play=iterator._play)
                    templar = Templar(loader=self._loader, variables=task_vars)
                    display.debug("done getting variables")

                    # test to see if the task across all hosts points to an action plugin which
                    # sets BYPASS_HOST_LOOP to true, or if it has run_once enabled. If so, we
                    # will only send this task to the first host in the list.
                    task_action = templar.template(task.action)

                    try:
                        action = action_loader.get(task_action, class_only=True, collection_list=task.collections)
                    except KeyError:
                        # we don't care here, because the action may simply not have a
                        # corresponding action plugin
                        action = None

                    if task_action in C._ACTION_META:
                        # for the linear strategy, we run meta tasks just once and for
                        # all hosts currently being iterated over rather than one host
                        results.extend(self._execute_meta(task, play_context, iterator, host))
                        if task.args.get('_raw_params', None) not in ('noop', 'reset_connection', 'end_host', 'role_complete', 'flush_handlers'):
                            run_once = True
                        if (task.any_errors_fatal or run_once) and not task.ignore_errors:
                            any_errors_fatal = True
                    else:
                        # handle step if needed, skip meta actions as they are used internally
                        if self._step and choose_step:
                            if self._take_step(task):
                                choose_step = False
                            else:
                                skip_rest = True
                                break

                        run_once = templar.template(task.run_once) or action and getattr(action, 'BYPASS_HOST_LOOP', False)
                        if (task.any_errors_fatal or run_once) and not task.ignore_errors:
                            any_errors_fatal = True

                        if not callback_sent:
                            display.debug("sending task start callback, copying the task so we can template it temporarily")
                            saved_name = task.name
                            display.debug("done copying, going to template now")
                            try:
                                task.name = to_text(templar.template(task.name, fail_on_undefined=False), nonstring='empty')
                                display.debug("done templating")
                            except Exception:
                                # just ignore any errors during task name templating,
                                # we don't care if it just shows the raw name
                                display.debug("templating failed for some reason")
                            display.debug("here goes the callback...")
                            if isinstance(task, Handler):
                                self._tqm.send_callback('v2_playbook_on_handler_task_start', task)
                            else:
                                self._tqm.send_callback('v2_playbook_on_task_start', task, is_conditional=False)
                            task.name = saved_name
                            callback_sent = True
                            display.debug("sending task start callback")

                        self._blocked_hosts[host.get_name()] = True
                        self._queue_task(host, task, task_vars, play_context)
                        del task_vars

                    if isinstance(task, Handler):
                        if run_once:
                            task.clear_hosts()
                        else:
                            task.remove_host(host)

                    # if we're bypassing the host loop, break out now
                    if run_once:
                        break

                    results.extend(self._process_pending_results(iterator, max_passes=max(1, int(len(self._tqm._workers) * 0.1))))

                # go to next host/task group
                if skip_rest:
                    continue

                display.debug("done queuing things up, now waiting for results queue to drain")
                if self._pending_results > 0:
                    results.extend(self._wait_on_pending_results(iterator))

                self.update_active_connections(results)

                included_files = IncludedFile.process_include_results(
                    results,
                    iterator=iterator,
                    loader=self._loader,
                    variable_manager=self._variable_manager
                )

                if len(included_files) > 0:
                    display.debug("we have included files to process")

                    display.debug("generating all_blocks data")
                    all_blocks = dict((host, []) for host in hosts_left)
                    display.debug("done generating all_blocks data")

                    included_tasks = []
                    failed_includes_hosts = set()
                    for included_file in included_files:
                        display.debug("processing included file: %s" % included_file._filename)
                        is_handler = False
                        try:
                            if included_file._is_role:
                                new_ir = self._copy_included_file(included_file)

                                new_blocks, handler_blocks = new_ir.get_block_list(
                                    play=iterator._play,
                                    variable_manager=self._variable_manager,
                                    loader=self._loader,
                                )
                            else:
                                is_handler = isinstance(included_file._task, Handler)
                                new_blocks = self._load_included_file(included_file, iterator=iterator, is_handler=is_handler)

                            # let PlayIterator know about any new handlers included via include_role or
                            # import_role within include_role/include_tasks
                            iterator.handlers = [h for b in iterator._play.handlers for h in b.block]

                            display.debug("iterating over new_blocks loaded from include file")
                            for new_block in new_blocks:
                                if is_handler:
                                    for task in new_block.block:
                                        task.notified_hosts = included_file._hosts[:]
                                    final_block = new_block
                                else:
                                    task_vars = self._variable_manager.get_vars(
                                        play=iterator._play,
                                        task=new_block.get_first_parent_include(),
                                        _hosts=self._hosts_cache,
                                        _hosts_all=self._hosts_cache_all,
                                    )
                                    display.debug("filtering new block on tags")
                                    final_block = new_block.filter_tagged_tasks(task_vars)
                                    display.debug("done filtering new block on tags")
                                    included_tasks.extend(final_block.get_tasks())

                                for host in hosts_left:
                                    if host in included_file._hosts:
                                        all_blocks[host].append(final_block)
                            display.debug("done iterating over new_blocks loaded from include file")
                        except AnsibleParserError:
                            raise
                        except AnsibleError as e:
                            if included_file._is_role:
                                # include_role does not have on_include callback so display the error
                                display.error(to_text(e), wrap_text=False)
                            for r in included_file._results:
                                r._result['failed'] = True
                                failed_includes_hosts.add(r._host)
                            continue

                    for host in failed_includes_hosts:
                        self._tqm._failed_hosts[host.name] = True
                        iterator.mark_host_failed(host)

                    # finally go through all of the hosts and append the
                    # accumulated blocks to their list of tasks
                    display.debug("extending task lists for all hosts with included blocks")

                    for host in hosts_left:
                        iterator.add_tasks(host, all_blocks[host])

                    iterator.all_tasks[iterator.cur_task:iterator.cur_task] = included_tasks

                    display.debug("done extending task lists")
                    display.debug("done processing included files")

                display.debug("results queue empty")

                display.debug("checking for any_errors_fatal")
                failed_hosts = []
                unreachable_hosts = []
                for res in results:
                    # execute_meta() does not set 'failed' in the TaskResult
                    # so we skip checking it with the meta tasks and look just at the iterator
                    if (res.is_failed() or res._task.action in C._ACTION_META) and iterator.is_failed(res._host):
                        failed_hosts.append(res._host.name)
                    elif res.is_unreachable():
                        unreachable_hosts.append(res._host.name)

                # if any_errors_fatal and we had an error, mark all hosts as failed
                if any_errors_fatal and (len(failed_hosts) > 0 or len(unreachable_hosts) > 0):
                    dont_fail_states = frozenset([IteratingStates.RESCUE, IteratingStates.ALWAYS])
                    for host in hosts_left:
                        (s, dummy) = iterator.get_next_task_for_host(host, peek=True)
                        # the state may actually be in a child state, use the get_active_state()
                        # method in the iterator to figure out the true active state
                        s = iterator.get_active_state(s)
                        if s.run_state not in dont_fail_states or \
                           s.run_state == IteratingStates.RESCUE and s.fail_state & FailedStates.RESCUE != 0:
                            self._tqm._failed_hosts[host.name] = True
                            result |= self._tqm.RUN_FAILED_BREAK_PLAY
                display.debug("done checking for any_errors_fatal")

                display.debug("checking for max_fail_percentage")
                if iterator._play.max_fail_percentage is not None and len(results) > 0:
                    percentage = iterator._play.max_fail_percentage / 100.0

                    if (len(self._tqm._failed_hosts) / iterator.batch_size) > percentage:
                        for host in hosts_left:
                            # don't double-mark hosts, or the iterator will potentially
                            # fail them out of the rescue/always states
                            if host.name not in failed_hosts:
                                self._tqm._failed_hosts[host.name] = True
                                iterator.mark_host_failed(host)
                        self._tqm.send_callback('v2_playbook_on_no_hosts_remaining')
                        result |= self._tqm.RUN_FAILED_BREAK_PLAY
                    display.debug('(%s failed / %s total )> %s max fail' % (len(self._tqm._failed_hosts), iterator.batch_size, percentage))
                display.debug("done checking for max_fail_percentage")

                display.debug("checking to see if all hosts have failed and the running result is not ok")
                if result != self._tqm.RUN_OK and len(self._tqm._failed_hosts) >= len(hosts_left):
                    display.debug("^ not ok, so returning result now")
                    self._tqm.send_callback('v2_playbook_on_no_hosts_remaining')
                    return result
                display.debug("done checking to see if all hosts have failed")

            except (IOError, EOFError) as e:
                display.debug("got IOError/EOFError in task loop: %s" % e)
                # most likely an abort, return failed
                return self._tqm.RUN_UNKNOWN_ERROR

        # run the base class run() method, which executes the cleanup function
        # and runs any outstanding handlers which have been triggered
        return super(StrategyModule, self).run(iterator, play_context, result)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,981 |
any_errors_fatal doesn't work when using roles with block / rescue
|
### Summary
I have a problem with the `any_errors_fatal: True` keyword. In a plain playbook it seems to work by default, but it doesn't work when I'm using roles with block / rescue. I run some assert statements, and after one of the assert statements fails, the playbook still continues with the other hosts in the same play. I expect it to stop after experiencing any fatal error (any failed assert statement).
Even though this Ansible version is not the newest, the same behaviour occurs on Ansible 2.15.
### Issue Type
Bug Report
### Component Name
any_errors_fatal with rescue / block
### Ansible Version
```console
ansible [core 2.12.10]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.8.10 (default, Mar 13 2023, 10:26:41) [GCC 9.4.0]
jinja version = 2.10.1
libyaml = True
```
### Configuration
```console
root@PCEDVKAIL3:/etc/ansible# ansible-config dump --only-changed -t all
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
BECOME:
======
CACHE:
=====
CALLBACK:
========
CLICONF:
=======
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/etc/ansible/ansible.cfg) = False
ssh:
___
host_key_checking(/etc/ansible/ansible.cfg) = False
HTTPAPI:
=======
INVENTORY:
=========
LOOKUP:
======
NETCONF:
=======
SHELL:
=====
VARS:
====
root@PCEDVKAIL3:/etc/ansible#
```
### OS / Environment
Centos 7
### Steps to Reproduce
**Roles Playbook without block/rescue:**
```
- hosts: TEONET01A,TEONET01B
  gather_facts: false
  any_errors_fatal: True
  roles:
    - test
```
Roles (block commented):
```
---
# - name: Run test pbook
#   block:
- assert:
    that:
      - ansible_hostname == 'TEONET01A'
- debug:
    msg: In role
# rescue:
#   - debug:
#       msg: Rescue
```
Runs as expected and the play exits after the fatal error in the assert:
```
root@PCEDVKAIL3:/etc/ansible# ansible-playbook playbook.yml
PLAY [TEONET01A,TEONET01B] ***************************************************************************************************************************************************************************************************************************************************************
TASK [test : assert] *********************************************************************************************************************************************************************************************************************************************************************
[WARNING]: ansible-pylibssh not installed, falling back to paramiko
ok: [10.8.250.46] => {
"changed": false,
"msg": "All assertions passed"
}
[WARNING]: ansible-pylibssh not installed, falling back to paramiko
fatal: [10.8.250.47]: FAILED! => {
"assertion": "ansible_hostname == 'TEONET01A'",
"changed": false,
"evaluated_to": false,
"msg": "Assertion failed"
}
NO MORE HOSTS LEFT ***********************************************************************************************************************************************************************************************************************************************************************
PLAY RECAP *******************************************************************************************************************************************************************************************************************************************************************************
10.8.250.46 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
10.8.250.47 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
**Now how it works with block/rescue - this looks like a bug**
The same playbook:
```
- hosts: TEONET01A,TEONET01B
  gather_facts: false
  any_errors_fatal: True
  roles:
    - test
```
Roles (block uncommented):
```
---
- name: Run test pbook
  block:
    - assert:
        that:
          - ansible_hostname == 'TEONET01A'
    - debug:
        msg: In role
  rescue:
    - debug:
        msg: Rescue
```
Runs not as expected: the play continues even though one host's assert fails:
```
root@PCEDVKAIL3:/etc/ansible# ansible-playbook playbook.yml
PLAY [TEONET01A,TEONET01B] ***************************************************************************************************************************************************************************************************************************************************************
TASK [test : assert] *********************************************************************************************************************************************************************************************************************************************************************
[WARNING]: ansible-pylibssh not installed, falling back to paramiko
ok: [10.8.250.46] => {
"changed": false,
"msg": "All assertions passed"
}
[WARNING]: ansible-pylibssh not installed, falling back to paramiko
fatal: [10.8.250.47]: FAILED! => {
"assertion": "ansible_hostname == 'TEONET01A'",
"changed": false,
"evaluated_to": false,
"msg": "Assertion failed"
}
TASK [test : debug] **********************************************************************************************************************************************************************************************************************************************************************
ok: [10.8.250.46] => {
"msg": "In role"
}
TASK [test : debug] **********************************************************************************************************************************************************************************************************************************************************************
ok: [10.8.250.47] => {
"msg": "Rescue"
}
PLAY RECAP *******************************************************************************************************************************************************************************************************************************************************************************
10.8.250.46 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
10.8.250.47 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=1 ignored=0
```
### Expected Results
The playbook should stop when any of the hosts experiences an error.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80981
|
https://github.com/ansible/ansible/pull/78680
|
c827dc0dabff8850a73de9ca65148a74899767f2
|
fe94a99aa291d129aa6432e5d50e7117d9c6aae3
| 2023-06-06T20:32:10Z |
python
| 2023-10-25T07:42:13Z |
test/integration/targets/any_errors_fatal/31543.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,981 |
any_errors_fatal doesn't work when using roles with block / rescue
|
### Summary
I have a problem with the `any_errors_fatal: True` keyword. In a plain playbook it seems to work by default, but it doesn't work when I'm using roles with block / rescue. I run some assert statements, and after one of the assert statements fails, the playbook still continues with the other hosts in the same play. I expect it to stop after experiencing any fatal error (any failed assert statement).
Even though this Ansible version is not the newest, the same behaviour occurs on Ansible 2.15.
### Issue Type
Bug Report
### Component Name
any_errors_fatal with rescue / block
### Ansible Version
```console
ansible [core 2.12.10]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.8.10 (default, Mar 13 2023, 10:26:41) [GCC 9.4.0]
jinja version = 2.10.1
libyaml = True
```
### Configuration
```console
root@PCEDVKAIL3:/etc/ansible# ansible-config dump --only-changed -t all
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
BECOME:
======
CACHE:
=====
CALLBACK:
========
CLICONF:
=======
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/etc/ansible/ansible.cfg) = False
ssh:
___
host_key_checking(/etc/ansible/ansible.cfg) = False
HTTPAPI:
=======
INVENTORY:
=========
LOOKUP:
======
NETCONF:
=======
SHELL:
=====
VARS:
====
root@PCEDVKAIL3:/etc/ansible#
```
### OS / Environment
Centos 7
### Steps to Reproduce
**Roles Playbook without block/rescue:**
```
- hosts: TEONET01A,TEONET01B
  gather_facts: false
  any_errors_fatal: True
  roles:
    - test
```
Roles (block commented):
```
---
# - name: Run test pbook
#   block:
- assert:
    that:
      - ansible_hostname == 'TEONET01A'
- debug:
    msg: In role
# rescue:
#   - debug:
#       msg: Rescue
```
Runs as expected and the play exits after the fatal error in assert:
```
root@PCEDVKAIL3:/etc/ansible# ansible-playbook playbook.yml
PLAY [TEONET01A,TEONET01B] ***************************************************************************************************************************************************************************************************************************************************************
TASK [test : assert] *********************************************************************************************************************************************************************************************************************************************************************
[WARNING]: ansible-pylibssh not installed, falling back to paramiko
ok: [10.8.250.46] => {
"changed": false,
"msg": "All assertions passed"
}
[WARNING]: ansible-pylibssh not installed, falling back to paramiko
fatal: [10.8.250.47]: FAILED! => {
"assertion": "ansible_hostname == 'TEONET01A'",
"changed": false,
"evaluated_to": false,
"msg": "Assertion failed"
}
NO MORE HOSTS LEFT ***********************************************************************************************************************************************************************************************************************************************************************
PLAY RECAP *******************************************************************************************************************************************************************************************************************************************************************************
10.8.250.46 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
10.8.250.47 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
**Now how it behaves with block/rescue - this looks like a bug**
Same playbook:
```
- hosts: TEONET01A,TEONET01B
  gather_facts: false
  any_errors_fatal: True
  roles:
    - test
```
Roles (block uncommented):
```
---
- name: Run test pbook
  block:
    - assert:
        that:
          - ansible_hostname == 'TEONET01A'
    - debug:
        msg: In role
  rescue:
    - debug:
        msg: Rescue
```
Runs contrary to expectations: the play continues with the other host even though the assert fails on one host:
```
root@PCEDVKAIL3:/etc/ansible# ansible-playbook playbook.yml
PLAY [TEONET01A,TEONET01B] ***************************************************************************************************************************************************************************************************************************************************************
TASK [test : assert] *********************************************************************************************************************************************************************************************************************************************************************
[WARNING]: ansible-pylibssh not installed, falling back to paramiko
ok: [10.8.250.46] => {
"changed": false,
"msg": "All assertions passed"
}
[WARNING]: ansible-pylibssh not installed, falling back to paramiko
fatal: [10.8.250.47]: FAILED! => {
"assertion": "ansible_hostname == 'TEONET01A'",
"changed": false,
"evaluated_to": false,
"msg": "Assertion failed"
}
TASK [test : debug] **********************************************************************************************************************************************************************************************************************************************************************
ok: [10.8.250.46] => {
"msg": "In role"
}
TASK [test : debug] **********************************************************************************************************************************************************************************************************************************************************************
ok: [10.8.250.47] => {
"msg": "Rescue"
}
PLAY RECAP *******************************************************************************************************************************************************************************************************************************************************************************
10.8.250.46 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
10.8.250.47 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=1 ignored=0
```
### Expected Results
The playbook should stop when any of the hosts experiences an error.
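One possible mitigation, sketched here as an untested assumption (it is not the fix referenced in the linked PR), is to re-raise the failure at the end of the `rescue` section so the host still counts as failed and `any_errors_fatal` can abort the play:
```
---
- name: Run test pbook
  block:
    - assert:
        that:
          - ansible_hostname == 'TEONET01A'
    - debug:
        msg: In role
  rescue:
    - debug:
        msg: Rescue
    # Re-raise so the host is marked failed; ansible_failed_result is
    # only defined inside a rescue section.
    - fail:
        msg: "{{ ansible_failed_result.msg | default('task failed in block') }}"
```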
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80981
|
https://github.com/ansible/ansible/pull/78680
|
c827dc0dabff8850a73de9ca65148a74899767f2
|
fe94a99aa291d129aa6432e5d50e7117d9c6aae3
| 2023-06-06T20:32:10Z |
python
| 2023-10-25T07:42:13Z |
test/integration/targets/any_errors_fatal/36308.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,981 |
any_errors_fatal doesn't work when using roles with block / rescue
|
|
https://github.com/ansible/ansible/issues/80981
|
https://github.com/ansible/ansible/pull/78680
|
c827dc0dabff8850a73de9ca65148a74899767f2
|
fe94a99aa291d129aa6432e5d50e7117d9c6aae3
| 2023-06-06T20:32:10Z |
python
| 2023-10-25T07:42:13Z |
test/integration/targets/any_errors_fatal/73246.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,981 |
any_errors_fatal doesn't work when using roles with block / rescue
|
|
https://github.com/ansible/ansible/issues/80981
|
https://github.com/ansible/ansible/pull/78680
|
c827dc0dabff8850a73de9ca65148a74899767f2
|
fe94a99aa291d129aa6432e5d50e7117d9c6aae3
| 2023-06-06T20:32:10Z |
python
| 2023-10-25T07:42:13Z |
test/integration/targets/any_errors_fatal/80981.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,981 |
any_errors_fatal doesn't work when using roles with block / rescue
|
|
https://github.com/ansible/ansible/issues/80981
|
https://github.com/ansible/ansible/pull/78680
|
c827dc0dabff8850a73de9ca65148a74899767f2
|
fe94a99aa291d129aa6432e5d50e7117d9c6aae3
| 2023-06-06T20:32:10Z |
python
| 2023-10-25T07:42:13Z |
test/integration/targets/any_errors_fatal/runme.sh
|
#!/usr/bin/env bash
set -ux
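# play_level.yml: the post-failure marker must not appear in the output;
# if grep finds it, the failure did not abort the play and the test fails below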
ansible-playbook -i inventory "$@" play_level.yml| tee out.txt | grep 'any_errors_fatal_play_level_post_fail'
res=$?
cat out.txt
if [ "${res}" -eq 0 ] ; then
exit 1
fi
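# on_includes.yml: the "this should never be reached" marker must not appear
# when any_errors_fatal is combined with includes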
ansible-playbook -i inventory "$@" on_includes.yml | tee out.txt | grep 'any_errors_fatal_this_should_never_be_reached'
res=$?
cat out.txt
if [ "${res}" -eq 0 ] ; then
exit 1
fi
set -ux
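# always_block.yml: the always block must still run, so its start marker has to be present in the output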
ansible-playbook -i inventory "$@" always_block.yml | tee out.txt | grep 'any_errors_fatal_always_block_start'
res=$?
cat out.txt
if [ "${res}" -ne 0 ] ; then
exit 1
fi
set -ux
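# 50897.yml: neither the include_role nor the include_tasks variant may reach the marker task behind the failure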
for test_name in test_include_role test_include_tasks; do
ansible-playbook -i inventory "$@" -e test_name=$test_name 50897.yml | tee out.txt | grep 'any_errors_fatal_this_should_never_be_reached'
res=$?
cat out.txt
if [ "${res}" -eq 0 ] ; then
exit 1
fi
done
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,981 |
any_errors_fatal doesn't work when using roles with block / rescue
|
|
https://github.com/ansible/ansible/issues/80981
|
https://github.com/ansible/ansible/pull/78680
|
c827dc0dabff8850a73de9ca65148a74899767f2
|
fe94a99aa291d129aa6432e5d50e7117d9c6aae3
| 2023-06-06T20:32:10Z |
python
| 2023-10-25T07:42:13Z |
test/integration/targets/handlers/force_handlers_blocks_81533-1.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,981 |
any_errors_fatal doesn't work when using roles with block / rescue
|
|
https://github.com/ansible/ansible/issues/80981
|
https://github.com/ansible/ansible/pull/78680
|
c827dc0dabff8850a73de9ca65148a74899767f2
|
fe94a99aa291d129aa6432e5d50e7117d9c6aae3
| 2023-06-06T20:32:10Z |
python
| 2023-10-25T07:42:13Z |
test/integration/targets/handlers/force_handlers_blocks_81533-2.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,981 |
any_errors_fatal doesn't work when using roles with block / rescue
|
|
https://github.com/ansible/ansible/issues/80981
|
https://github.com/ansible/ansible/pull/78680
|
c827dc0dabff8850a73de9ca65148a74899767f2
|
fe94a99aa291d129aa6432e5d50e7117d9c6aae3
| 2023-06-06T20:32:10Z |
python
| 2023-10-25T07:42:13Z |
test/integration/targets/handlers/runme.sh
|
#!/usr/bin/env bash
set -eux
export ANSIBLE_FORCE_HANDLERS
ANSIBLE_FORCE_HANDLERS=false
# simple handler test
ansible-playbook test_handlers.yml -i inventory.handlers -v "$@" --tags scenario1
# simple from_handlers test
ansible-playbook from_handlers.yml -i inventory.handlers -v "$@" --tags scenario1
ansible-playbook test_listening_handlers.yml -i inventory.handlers -v "$@"
[ "$(ansible-playbook test_handlers.yml -i inventory.handlers -v "$@" --tags scenario2 -l A \
| grep -E -o 'RUNNING HANDLER \[test_handlers : .*]')" = "RUNNING HANDLER [test_handlers : test handler]" ]
# Test forcing handlers using the linear and free strategy
for strategy in linear free; do
export ANSIBLE_STRATEGY=$strategy
# Not forcing, should only run on successful host
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_B" ]
# Forcing from command line
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing from command line, should only run later tasks on unfailed hosts
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers \
| grep -E -o CALLED_TASK_. | sort | uniq | xargs)" = "CALLED_TASK_B CALLED_TASK_D CALLED_TASK_E" ]
# Forcing from command line, should call handlers even if all hosts fail
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers -e fail_all=yes \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing from ansible.cfg
[ "$(ANSIBLE_FORCE_HANDLERS=true ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing true in play
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags force_true_in_play \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing false in play, which overrides command line
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags force_false_in_play --force-handlers \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_B" ]
# https://github.com/ansible/ansible/pull/80898
[ "$(ansible-playbook 80880.yml -i inventory.handlers -vv "$@" 2>&1)" ]
unset ANSIBLE_STRATEGY
done
[ "$(ansible-playbook test_handlers_include.yml -i ../../inventory -v "$@" --tags playbook_include_handlers \
| grep -E -o 'RUNNING HANDLER \[.*]')" = "RUNNING HANDLER [test handler]" ]
[ "$(ansible-playbook test_handlers_include.yml -i ../../inventory -v "$@" --tags role_include_handlers \
| grep -E -o 'RUNNING HANDLER \[test_handlers_include : .*]')" = "RUNNING HANDLER [test_handlers_include : test handler]" ]
[ "$(ansible-playbook test_handlers_include_role.yml -i ../../inventory -v "$@" \
| grep -E -o 'RUNNING HANDLER \[test_handlers_include_role : .*]')" = "RUNNING HANDLER [test_handlers_include_role : test handler]" ]
# Notify handler listen
ansible-playbook test_handlers_listen.yml -i inventory.handlers -v "$@"
# Notify inexistent handlers results in error
set +e
result="$(ansible-playbook test_handlers_inexistent_notify.yml -i inventory.handlers "$@" 2>&1)"
set -e
grep -q "ERROR! The requested handler 'notify_inexistent_handler' was not found in either the main handlers list nor in the listening handlers list" <<< "$result"
# Notify inexistent handlers without errors when ANSIBLE_ERROR_ON_MISSING_HANDLER=false
ANSIBLE_ERROR_ON_MISSING_HANDLER=false ansible-playbook test_handlers_inexistent_notify.yml -i inventory.handlers -v "$@"
ANSIBLE_ERROR_ON_MISSING_HANDLER=false ansible-playbook test_templating_in_handlers.yml -v "$@"
# https://github.com/ansible/ansible/issues/36649
output_dir=/tmp
set +e
result="$(ansible-playbook test_handlers_any_errors_fatal.yml -e output_dir=$output_dir -i inventory.handlers -v "$@" 2>&1)"
set -e
[ ! -f $output_dir/should_not_exist_B ] || (rm -f $output_dir/should_not_exist_B && exit 1)
# https://github.com/ansible/ansible/issues/47287
[ "$(ansible-playbook test_handlers_including_task.yml -i ../../inventory -v "$@" | grep -E -o 'failed=[0-9]+')" = "failed=0" ]
# https://github.com/ansible/ansible/issues/71222
ansible-playbook test_role_handlers_including_tasks.yml -i ../../inventory -v "$@"
# https://github.com/ansible/ansible/issues/27237
set +e
result="$(ansible-playbook test_handlers_template_run_once.yml -i inventory.handlers "$@" 2>&1)"
set -e
grep -q "handler A" <<< "$result"
grep -q "handler B" <<< "$result"
# Test an undefined variable in another handler name isn't a failure
ansible-playbook 58841.yml "$@" --tags lazy_evaluation 2>&1 | tee out.txt ; cat out.txt
grep out.txt -e "\[WARNING\]: Handler 'handler name with {{ test_var }}' is unusable"
[ "$(grep out.txt -ce 'handler ran')" = "1" ]
[ "$(grep out.txt -ce 'handler with var ran')" = "0" ]
# Test templating a handler name with a defined variable
ansible-playbook 58841.yml "$@" --tags evaluation_time -e test_var=myvar | tee out.txt ; cat out.txt
[ "$(grep out.txt -ce 'handler ran')" = "0" ]
[ "$(grep out.txt -ce 'handler with var ran')" = "1" ]
# Test the handler is not found when the variable is undefined
ansible-playbook 58841.yml "$@" --tags evaluation_time 2>&1 | tee out.txt ; cat out.txt
grep out.txt -e "ERROR! The requested handler 'handler name with myvar' was not found"
grep out.txt -e "\[WARNING\]: Handler 'handler name with {{ test_var }}' is unusable"
[ "$(grep out.txt -ce 'handler ran')" = "0" ]
[ "$(grep out.txt -ce 'handler with var ran')" = "0" ]
# Test include_role and import_role cannot be used as handlers
ansible-playbook test_role_as_handler.yml "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! Using 'include_role' as a handler is not supported."
# Test notifying a handler from within include_tasks does not work anymore
ansible-playbook test_notify_included.yml "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'I was included')" = "1" ]
grep out.txt -e "ERROR! The requested handler 'handler_from_include' was not found in either the main handlers list nor in the listening handlers list"
ansible-playbook test_handlers_meta.yml -i inventory.handlers -vv "$@" | tee out.txt
[ "$(grep out.txt -ce 'RUNNING HANDLER \[noop_handler\]')" = "1" ]
[ "$(grep out.txt -ce 'META: noop')" = "1" ]
# https://github.com/ansible/ansible/issues/46447
set +e
test "$(ansible-playbook 46447.yml -i inventory.handlers -vv "$@" 2>&1 | grep -c 'SHOULD NOT GET HERE')"
set -e
# https://github.com/ansible/ansible/issues/52561
ansible-playbook 52561.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler1 ran')" = "1" ]
# Test flush_handlers meta task does not imply any_errors_fatal
ansible-playbook 54991.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran')" = "4" ]
ansible-playbook order.yml -i inventory.handlers "$@" 2>&1
set +e
ansible-playbook order.yml --force-handlers -e test_force_handlers=true -i inventory.handlers "$@" 2>&1
set -e
ansible-playbook include_handlers_fail_force.yml --force-handlers -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'included handler ran')" = "1" ]
ansible-playbook test_flush_handlers_as_handler.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! flush_handlers cannot be used as a handler"
ansible-playbook test_skip_flush.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran')" = "0" ]
ansible-playbook test_flush_in_rescue_always.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran in rescue')" = "1" ]
[ "$(grep out.txt -ce 'handler ran in always')" = "2" ]
[ "$(grep out.txt -ce 'lockstep works')" = "2" ]
ansible-playbook test_handlers_infinite_loop.yml -i inventory.handlers "$@" 2>&1
ansible-playbook test_flush_handlers_rescue_always.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'rescue ran')" = "1" ]
[ "$(grep out.txt -ce 'always ran')" = "2" ]
[ "$(grep out.txt -ce 'should run for both hosts')" = "2" ]
ansible-playbook test_fqcn_meta_flush_handlers.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
grep out.txt -e "handler ran"
grep out.txt -e "after flush"
ansible-playbook 79776.yml -i inventory.handlers "$@"
ansible-playbook test_block_as_handler.yml "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! Using a block as a handler is not supported."
ansible-playbook test_block_as_handler-include.yml "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! Using a block as a handler is not supported."
ansible-playbook test_block_as_handler-import.yml "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! Using a block as a handler is not supported."
ansible-playbook test_include_role_handler_once.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran')" = "1" ]
ansible-playbook test_listen_role_dedup.yml "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'a handler from a role')" = "1" ]
ansible localhost -m include_role -a "name=r1-dep_chain-vars" "$@"
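# A role handler that uses include_tasks must still find and run its file
# when the role is included dynamically via include_role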
ansible-playbook test_include_tasks_in_include_role.yml "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran')" = "1" ]
ansible-playbook test_run_once.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran once')" = "1" ]
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,246 |
In some cases of nesting `block`s/`import_tasks`, the `fail` module interrupts the Ansible execution and disregards `always` blocks
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
In some cases of nesting `block`s/`import_tasks`, the `fail` module interrupts the Ansible execution and disregards `always` blocks.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
block import_tasks
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.15
config file = <...hidden...>/ansible.cfg
configured module search path = [u'/home/XXX/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/XXX/venv/lib/python2.7/site-packages/ansible
executable location = /home/XXX/venv/bin/ansible
python version = 2.7.18rc1 (default, Apr 7 2020, 12:05:55) [GCC 9.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_NOCOWS(<...hidden...>/ansible.cfg) = True
ANY_ERRORS_FATAL(<...hidden...>/ansible.cfg) = True
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
Create the following file structure:
File: play.yml
```yaml
- hosts: all
gather_facts: no
tasks:
- block:
- import_tasks: tasks.yml
always:
#- debug: msg="First"
- block:
- debug: msg="Second"
```
File: tasks.yml
```yaml
- fail: msg="Fail as expected"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
First, failure message is printed.
Then, debug "Second" is printed.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Failure message is printed.
Debug "second" is **not** printed.
<!--- Paste verbatim command output between quotes -->
```paste below
$ ansible-playbook -i localhost, -c local ../../play.yml
PLAY [all] ********************************************************************************************************************************************************************************************************
TASK [fail] *******************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fail as expected"}
NO MORE HOSTS LEFT ************************************************************************************************************************************************************************************************
PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
##### ADDITIONAL INFORMATION
It seems to occur only if a `block` is the first element in the `always` list. For example, in the following case:
File: play.yml
```yaml
- hosts: all
gather_facts: no
tasks:
- block:
- import_tasks: tasks.yml
always:
- debug: msg="First"
- block:
- debug: msg="Second"
```
File: tasks.yml
```yaml
- fail: msg="Fail as expected"
```
Everything works as expected and both the outer and inner debug messages get printed:
```
$ ansible-playbook -i localhost, -c local ../../play.yml
PLAY [all] ********************************************************************************************************************************************************************************************************
TASK [fail] *******************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fail as expected"}
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "First"
}
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "Second"
}
PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/73246
|
https://github.com/ansible/ansible/pull/78680
|
c827dc0dabff8850a73de9ca65148a74899767f2
|
fe94a99aa291d129aa6432e5d50e7117d9c6aae3
| 2021-01-15T13:05:10Z |
python
| 2023-10-25T07:42:13Z |
changelogs/fragments/any_errors_fatal-fixes.yml
| |
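The fragment body is empty in this record. For orientation, ansible-core changelog fragments are small YAML files keyed by change type; a hypothetical sketch of what this one might contain (wording invented, not the actual fragment):

```yaml
# hypothetical wording, not the actual fragment
bugfixes:
  - any_errors_fatal - fix running ``always`` blocks after a failure inside
    nested blocks/``import_tasks`` (https://github.com/ansible/ansible/issues/73246).
```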
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,246 |
In some cases of nesting `block`s/`import_tasks`, the `fail` module interrupts the Ansible execution and disregards `always` blocks
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
In some cases of nesting `block`s/`import_tasks`, the `fail` module interrupts the Ansible execution and disregards `always` blocks.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
block import_tasks
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.15
config file = <...hidden...>/ansible.cfg
configured module search path = [u'/home/XXX/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/XXX/venv/lib/python2.7/site-packages/ansible
executable location = /home/XXX/venv/bin/ansible
python version = 2.7.18rc1 (default, Apr 7 2020, 12:05:55) [GCC 9.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_NOCOWS(<...hidden...>/ansible.cfg) = True
ANY_ERRORS_FATAL(<...hidden...>/ansible.cfg) = True
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
Create the following file structure:
File: play.yml
```yaml
- hosts: all
gather_facts: no
tasks:
- block:
- import_tasks: tasks.yml
always:
#- debug: msg="First"
- block:
- debug: msg="Second"
```
File: tasks.yml
```yaml
- fail: msg="Fail as expected"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
First, failure message is printed.
Then, debug "Second" is printed.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Failure message is printed.
Debug "second" is **not** printed.
<!--- Paste verbatim command output between quotes -->
```paste below
$ ansible-playbook -i localhost, -c local ../../play.yml
PLAY [all] ********************************************************************************************************************************************************************************************************
TASK [fail] *******************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fail as expected"}
NO MORE HOSTS LEFT ************************************************************************************************************************************************************************************************
PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
##### ADDITIONAL INFORMATION
It seems to occur only if a `block` is the first element in the `always` list. For example, in the following case:
File: play.yml
```yaml
- hosts: all
gather_facts: no
tasks:
- block:
- import_tasks: tasks.yml
always:
- debug: msg="First"
- block:
- debug: msg="Second"
```
File: tasks.yml
```yaml
- fail: msg="Fail as expected"
```
Everything works as expected and both the outer and inner debug messages get printed:
```
$ ansible-playbook -i localhost, -c local ../../play.yml
PLAY [all] ********************************************************************************************************************************************************************************************************
TASK [fail] *******************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fail as expected"}
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "First"
}
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "Second"
}
PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/73246
|
https://github.com/ansible/ansible/pull/78680
|
c827dc0dabff8850a73de9ca65148a74899767f2
|
fe94a99aa291d129aa6432e5d50e7117d9c6aae3
| 2021-01-15T13:05:10Z |
python
| 2023-10-25T07:42:13Z |
lib/ansible/plugins/strategy/linear.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import annotations
DOCUMENTATION = '''
name: linear
short_description: Executes tasks in a linear fashion
description:
- Task execution is in lockstep per host batch as defined by C(serial) (default all).
Up to the fork limit of hosts will execute each task at the same time and then
the next series of hosts until the batch is done, before going on to the next task.
version_added: "2.0"
notes:
- This was the default Ansible behaviour before 'strategy plugins' were introduced in 2.0.
author: Ansible Core Team
'''
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleAssertionError, AnsibleParserError
from ansible.executor.play_iterator import IteratingStates, FailedStates
from ansible.module_utils.common.text.converters import to_text
from ansible.playbook.handler import Handler
from ansible.playbook.included_file import IncludedFile
from ansible.playbook.task import Task
from ansible.plugins.loader import action_loader
from ansible.plugins.strategy import StrategyBase
from ansible.template import Templar
from ansible.utils.display import Display
display = Display()
class StrategyModule(StrategyBase):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
# used for the lockstep to indicate to run handlers
self._in_handlers = False
def _get_next_task_lockstep(self, hosts, iterator):
'''
Returns a list of (host, task) tuples, where the task may
be a noop task to keep the iterator in lock step across
all hosts.
'''
noop_task = Task()
noop_task.action = 'meta'
noop_task.args['_raw_params'] = 'noop'
noop_task.implicit = True
noop_task.set_loader(iterator._play._loader)
state_task_per_host = {}
for host in hosts:
state, task = iterator.get_next_task_for_host(host, peek=True)
if task is not None:
state_task_per_host[host] = state, task
if not state_task_per_host:
return [(h, None) for h in hosts]
if self._in_handlers and not any(filter(
lambda rs: rs == IteratingStates.HANDLERS,
(s.run_state for s, dummy in state_task_per_host.values()))
):
self._in_handlers = False
if self._in_handlers:
lowest_cur_handler = min(
s.cur_handlers_task for s, t in state_task_per_host.values()
if s.run_state == IteratingStates.HANDLERS
)
else:
task_uuids = [t._uuid for s, t in state_task_per_host.values()]
_loop_cnt = 0
while _loop_cnt <= 1:
try:
cur_task = iterator.all_tasks[iterator.cur_task]
except IndexError:
# pick up any tasks left after clear_host_errors
iterator.cur_task = 0
_loop_cnt += 1
else:
iterator.cur_task += 1
if cur_task._uuid in task_uuids:
break
else:
# prevent infinite loop
raise AnsibleAssertionError(
'BUG: There seems to be a mismatch between tasks in PlayIterator and HostStates.'
)
host_tasks = []
for host, (state, task) in state_task_per_host.items():
if ((self._in_handlers and lowest_cur_handler == state.cur_handlers_task) or
(not self._in_handlers and cur_task._uuid == task._uuid)):
iterator.set_state_for_host(host.name, state)
host_tasks.append((host, task))
else:
host_tasks.append((host, noop_task))
# once hosts synchronize on 'flush_handlers' lockstep enters
# '_in_handlers' phase where handlers are run instead of tasks
# while at least one host is in IteratingStates.HANDLERS
if (not self._in_handlers and cur_task.action in C._ACTION_META and
cur_task.args.get('_raw_params') == 'flush_handlers'):
self._in_handlers = True
return host_tasks
def run(self, iterator, play_context):
'''
The linear strategy is simple - get the next task and queue
it for all hosts, then wait for the queue to drain before
moving on to the next task
'''
# iterate over each task, while there is one left to run
result = self._tqm.RUN_OK
work_to_do = True
self._set_hosts_cache(iterator._play)
while work_to_do and not self._tqm._terminated:
try:
display.debug("getting the remaining hosts for this loop")
hosts_left = self.get_hosts_left(iterator)
display.debug("done getting the remaining hosts for this loop")
# queue up this task for each host in the inventory
callback_sent = False
work_to_do = False
host_tasks = self._get_next_task_lockstep(hosts_left, iterator)
# skip control
skip_rest = False
choose_step = True
# flag set if task is set to any_errors_fatal
any_errors_fatal = False
results = []
for (host, task) in host_tasks:
if not task:
continue
if self._tqm._terminated:
break
run_once = False
work_to_do = True
# check to see if this task should be skipped, due to it being a member of a
# role which has already run (and whether that role allows duplicate execution)
if not isinstance(task, Handler) and task._role:
role_obj = self._get_cached_role(task, iterator._play)
if role_obj.has_run(host) and role_obj._metadata.allow_duplicates is False:
display.debug("'%s' skipped because role has already run" % task)
continue
display.debug("getting variables")
task_vars = self._variable_manager.get_vars(play=iterator._play, host=host, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
self.add_tqm_variables(task_vars, play=iterator._play)
templar = Templar(loader=self._loader, variables=task_vars)
display.debug("done getting variables")
# test to see if the task across all hosts points to an action plugin which
# sets BYPASS_HOST_LOOP to true, or if it has run_once enabled. If so, we
# will only send this task to the first host in the list.
task_action = templar.template(task.action)
try:
action = action_loader.get(task_action, class_only=True, collection_list=task.collections)
except KeyError:
# we don't care here, because the action may simply not have a
# corresponding action plugin
action = None
if task_action in C._ACTION_META:
# for the linear strategy, we run meta tasks just once and for
# all hosts currently being iterated over rather than one host
results.extend(self._execute_meta(task, play_context, iterator, host))
if task.args.get('_raw_params', None) not in ('noop', 'reset_connection', 'end_host', 'role_complete', 'flush_handlers'):
run_once = True
if (task.any_errors_fatal or run_once) and not task.ignore_errors:
any_errors_fatal = True
else:
# handle step if needed, skip meta actions as they are used internally
if self._step and choose_step:
if self._take_step(task):
choose_step = False
else:
skip_rest = True
break
run_once = templar.template(task.run_once) or action and getattr(action, 'BYPASS_HOST_LOOP', False)
if (task.any_errors_fatal or run_once) and not task.ignore_errors:
any_errors_fatal = True
if not callback_sent:
display.debug("sending task start callback, copying the task so we can template it temporarily")
saved_name = task.name
display.debug("done copying, going to template now")
try:
task.name = to_text(templar.template(task.name, fail_on_undefined=False), nonstring='empty')
display.debug("done templating")
except Exception:
# just ignore any errors during task name templating,
# we don't care if it just shows the raw name
display.debug("templating failed for some reason")
display.debug("here goes the callback...")
if isinstance(task, Handler):
self._tqm.send_callback('v2_playbook_on_handler_task_start', task)
else:
self._tqm.send_callback('v2_playbook_on_task_start', task, is_conditional=False)
task.name = saved_name
callback_sent = True
display.debug("sending task start callback")
self._blocked_hosts[host.get_name()] = True
self._queue_task(host, task, task_vars, play_context)
del task_vars
if isinstance(task, Handler):
if run_once:
task.clear_hosts()
else:
task.remove_host(host)
# if we're bypassing the host loop, break out now
if run_once:
break
results.extend(self._process_pending_results(iterator, max_passes=max(1, int(len(self._tqm._workers) * 0.1))))
# go to next host/task group
if skip_rest:
continue
display.debug("done queuing things up, now waiting for results queue to drain")
if self._pending_results > 0:
results.extend(self._wait_on_pending_results(iterator))
self.update_active_connections(results)
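                # group include results into IncludedFile objects so hosts that requested the same file with the same args are processed together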
included_files = IncludedFile.process_include_results(
results,
iterator=iterator,
loader=self._loader,
variable_manager=self._variable_manager
)
if len(included_files) > 0:
display.debug("we have included files to process")
display.debug("generating all_blocks data")
all_blocks = dict((host, []) for host in hosts_left)
display.debug("done generating all_blocks data")
included_tasks = []
failed_includes_hosts = set()
for included_file in included_files:
display.debug("processing included file: %s" % included_file._filename)
is_handler = False
try:
if included_file._is_role:
new_ir = self._copy_included_file(included_file)
new_blocks, handler_blocks = new_ir.get_block_list(
play=iterator._play,
variable_manager=self._variable_manager,
loader=self._loader,
)
else:
is_handler = isinstance(included_file._task, Handler)
new_blocks = self._load_included_file(included_file, iterator=iterator, is_handler=is_handler)
# let PlayIterator know about any new handlers included via include_role or
# import_role within include_role/include_tasks
iterator.handlers = [h for b in iterator._play.handlers for h in b.block]
display.debug("iterating over new_blocks loaded from include file")
for new_block in new_blocks:
if is_handler:
for task in new_block.block:
task.notified_hosts = included_file._hosts[:]
final_block = new_block
else:
task_vars = self._variable_manager.get_vars(
play=iterator._play,
task=new_block.get_first_parent_include(),
_hosts=self._hosts_cache,
_hosts_all=self._hosts_cache_all,
)
display.debug("filtering new block on tags")
final_block = new_block.filter_tagged_tasks(task_vars)
display.debug("done filtering new block on tags")
included_tasks.extend(final_block.get_tasks())
for host in hosts_left:
if host in included_file._hosts:
all_blocks[host].append(final_block)
display.debug("done iterating over new_blocks loaded from include file")
except AnsibleParserError:
raise
except AnsibleError as e:
if included_file._is_role:
# include_role does not have on_include callback so display the error
display.error(to_text(e), wrap_text=False)
for r in included_file._results:
r._result['failed'] = True
failed_includes_hosts.add(r._host)
continue
for host in failed_includes_hosts:
self._tqm._failed_hosts[host.name] = True
iterator.mark_host_failed(host)
# finally go through all of the hosts and append the
# accumulated blocks to their list of tasks
display.debug("extending task lists for all hosts with included blocks")
for host in hosts_left:
iterator.add_tasks(host, all_blocks[host])
iterator.all_tasks[iterator.cur_task:iterator.cur_task] = included_tasks
display.debug("done extending task lists")
display.debug("done processing included files")
display.debug("results queue empty")
display.debug("checking for any_errors_fatal")
failed_hosts = []
unreachable_hosts = []
for res in results:
# execute_meta() does not set 'failed' in the TaskResult
# so we skip checking it with the meta tasks and look just at the iterator
if (res.is_failed() or res._task.action in C._ACTION_META) and iterator.is_failed(res._host):
failed_hosts.append(res._host.name)
elif res.is_unreachable():
unreachable_hosts.append(res._host.name)
# if any_errors_fatal and we had an error, mark all hosts as failed
if any_errors_fatal and (len(failed_hosts) > 0 or len(unreachable_hosts) > 0):
dont_fail_states = frozenset([IteratingStates.RESCUE, IteratingStates.ALWAYS])
for host in hosts_left:
(s, dummy) = iterator.get_next_task_for_host(host, peek=True)
# the state may actually be in a child state, use the get_active_state()
# method in the iterator to figure out the true active state
s = iterator.get_active_state(s)
if s.run_state not in dont_fail_states or \
s.run_state == IteratingStates.RESCUE and s.fail_state & FailedStates.RESCUE != 0:
self._tqm._failed_hosts[host.name] = True
result |= self._tqm.RUN_FAILED_BREAK_PLAY
display.debug("done checking for any_errors_fatal")
display.debug("checking for max_fail_percentage")
if iterator._play.max_fail_percentage is not None and len(results) > 0:
percentage = iterator._play.max_fail_percentage / 100.0
if (len(self._tqm._failed_hosts) / iterator.batch_size) > percentage:
for host in hosts_left:
# don't double-mark hosts, or the iterator will potentially
# fail them out of the rescue/always states
if host.name not in failed_hosts:
self._tqm._failed_hosts[host.name] = True
iterator.mark_host_failed(host)
self._tqm.send_callback('v2_playbook_on_no_hosts_remaining')
result |= self._tqm.RUN_FAILED_BREAK_PLAY
display.debug('(%s failed / %s total )> %s max fail' % (len(self._tqm._failed_hosts), iterator.batch_size, percentage))
display.debug("done checking for max_fail_percentage")
display.debug("checking to see if all hosts have failed and the running result is not ok")
if result != self._tqm.RUN_OK and len(self._tqm._failed_hosts) >= len(hosts_left):
display.debug("^ not ok, so returning result now")
self._tqm.send_callback('v2_playbook_on_no_hosts_remaining')
return result
display.debug("done checking to see if all hosts have failed")
except (IOError, EOFError) as e:
display.debug("got IOError/EOFError in task loop: %s" % e)
# most likely an abort, return failed
return self._tqm.RUN_UNKNOWN_ERROR
# run the base class run() method, which executes the cleanup function
# and runs any outstanding handlers which have been triggered
return super(StrategyModule, self).run(iterator, play_context, result)
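As a reader aid, not part of the upstream file: the lockstep described in the DOCUMENTATION block means every host in the current `serial` batch finishes a task before any host starts the next one. A minimal hypothetical play showing where that matters:

```yaml
- hosts: webservers     # hypothetical group
  serial: 2             # hosts advance through tasks in lockstep, two at a time
  tasks:
    - name: step 1      # runs to completion on the whole batch first
      ping:
    - name: step 2      # only then does the batch move on
      command: /bin/true
```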
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,246 |
In some cases of nesting `block`s/`import_tasks`, the `fail` module interrupts the Ansible execution and disregards `always` blocks
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
In some cases of nesting `block`s/`import_tasks`, the `fail` module interrupts the Ansible execution and disregards `always` blocks.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
block import_tasks
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.15
config file = <...hidden...>/ansible.cfg
configured module search path = [u'/home/XXX/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/XXX/venv/lib/python2.7/site-packages/ansible
executable location = /home/XXX/venv/bin/ansible
python version = 2.7.18rc1 (default, Apr 7 2020, 12:05:55) [GCC 9.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_NOCOWS(<...hidden...>/ansible.cfg) = True
ANY_ERRORS_FATAL(<...hidden...>/ansible.cfg) = True
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
Create the following file structure:
File: play.yml
```yaml
- hosts: all
gather_facts: no
tasks:
- block:
- import_tasks: tasks.yml
always:
#- debug: msg="First"
- block:
- debug: msg="Second"
```
File: tasks.yml
```yaml
- fail: msg="Fail as expected"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
First, failure message is printed.
Then, debug "Second" is printed.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Failure message is printed.
Debug "second" is **not** printed.
<!--- Paste verbatim command output between quotes -->
```paste below
$ ansible-playbook -i localhost, -c local ../../play.yml
PLAY [all] ********************************************************************************************************************************************************************************************************
TASK [fail] *******************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fail as expected"}
NO MORE HOSTS LEFT ************************************************************************************************************************************************************************************************
PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
##### ADDITIONAL INFORMATION
It seems to occur only if a `block` is the first element in the `always` list. For example, in the following case:
File: play.yml
```yaml
- hosts: all
gather_facts: no
tasks:
- block:
- import_tasks: tasks.yml
always:
- debug: msg="First"
- block:
- debug: msg="Second"
```
File: tasks.yml
```yaml
- fail: msg="Fail as expected"
```
Everything works as expected and both the outer and inner debug messages get printed:
```
$ ansible-playbook -i localhost, -c local ../../play.yml
PLAY [all] ********************************************************************************************************************************************************************************************************
TASK [fail] *******************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fail as expected"}
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "First"
}
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "Second"
}
PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/73246
|
https://github.com/ansible/ansible/pull/78680
|
c827dc0dabff8850a73de9ca65148a74899767f2
|
fe94a99aa291d129aa6432e5d50e7117d9c6aae3
| 2021-01-15T13:05:10Z |
python
| 2023-10-25T07:42:13Z |
test/integration/targets/any_errors_fatal/31543.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,246 |
In some cases of nesting `block`s/`import_tasks`, the `fail` module interrupts the Ansible execution and disregards `always` blocks
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
In some cases of nesting `block`s/`import_tasks`, the `fail` module interrupts the Ansible execution and disregards `always` blocks.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
block import_tasks
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.15
config file = <...hidden...>/ansible.cfg
configured module search path = [u'/home/XXX/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/XXX/venv/lib/python2.7/site-packages/ansible
executable location = /home/XXX/venv/bin/ansible
python version = 2.7.18rc1 (default, Apr 7 2020, 12:05:55) [GCC 9.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_NOCOWS(<...hidden...>/ansible.cfg) = True
ANY_ERRORS_FATAL(<...hidden...>/ansible.cfg) = True
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
Create the following file structure:
File: play.yml
```yaml
- hosts: all
gather_facts: no
tasks:
- block:
- import_tasks: tasks.yml
always:
#- debug: msg="First"
- block:
- debug: msg="Second"
```
File: tasks.yml
```yaml
- fail: msg="Fail as expected"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
First, failure message is printed.
Then, debug "Second" is printed.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Failure message is printed.
Debug "second" is **not** printed.
<!--- Paste verbatim command output between quotes -->
```paste below
$ ansible-playbook -i localhost, -c local ../../play.yml
PLAY [all] ********************************************************************************************************************************************************************************************************
TASK [fail] *******************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fail as expected"}
NO MORE HOSTS LEFT ************************************************************************************************************************************************************************************************
PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
##### ADDITIONAL INFORMATION
It seems to occur only if a `block` is the first element in the `always` list. For example, in the following case:
File: play.yml
```yaml
- hosts: all
gather_facts: no
tasks:
- block:
- import_tasks: tasks.yml
always:
- debug: msg="First"
- block:
- debug: msg="Second"
```
File: tasks.yml
```yaml
- fail: msg="Fail as expected"
```
Everything works as expected and both the outer and inner debug messages get printed:
```
$ ansible-playbook -i localhost, -c local ../../play.yml
PLAY [all] ********************************************************************************************************************************************************************************************************
TASK [fail] *******************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fail as expected"}
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "First"
}
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "Second"
}
PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/73246
|
https://github.com/ansible/ansible/pull/78680
|
c827dc0dabff8850a73de9ca65148a74899767f2
|
fe94a99aa291d129aa6432e5d50e7117d9c6aae3
| 2021-01-15T13:05:10Z |
python
| 2023-10-25T07:42:13Z |
test/integration/targets/any_errors_fatal/36308.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,246 |
In some cases of nesting `block`s/`import_tasks`, the `fail` module interrupts the Ansible execution and disregards `always` blocks
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
In some cases of nesting `block`s/`import_tasks`, the `fail` module interrupts the Ansible execution and disregards `always` blocks.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
block import_tasks
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.15
config file = <...hidden...>/ansible.cfg
configured module search path = [u'/home/XXX/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/XXX/venv/lib/python2.7/site-packages/ansible
executable location = /home/XXX/venv/bin/ansible
python version = 2.7.18rc1 (default, Apr 7 2020, 12:05:55) [GCC 9.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_NOCOWS(<...hidden...>/ansible.cfg) = True
ANY_ERRORS_FATAL(<...hidden...>/ansible.cfg) = True
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
Create the following file structure:
File: play.yml
```yaml
- hosts: all
gather_facts: no
tasks:
- block:
- import_tasks: tasks.yml
always:
#- debug: msg="First"
- block:
- debug: msg="Second"
```
File: tasks.yml
```yaml
- fail: msg="Fail as expected"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
First, failure message is printed.
Then, debug "Second" is printed.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Failure message is printed.
Debug "second" is **not** printed.
<!--- Paste verbatim command output between quotes -->
```paste below
$ ansible-playbook -i localhost, -c local ../../play.yml
PLAY [all] ********************************************************************************************************************************************************************************************************
TASK [fail] *******************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fail as expected"}
NO MORE HOSTS LEFT ************************************************************************************************************************************************************************************************
PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
##### ADDITIONAL INFORMATION
It seems to occur only if a `block` is the first element in the `always` list. For example, in the following case:
File: play.yml
```yaml
- hosts: all
gather_facts: no
tasks:
- block:
- import_tasks: tasks.yml
always:
- debug: msg="First"
- block:
- debug: msg="Second"
```
File: tasks.yml
```yaml
- fail: msg="Fail as expected"
```
Everything works as expected and both the outer and inner debug messages get printed:
```
$ ansible-playbook -i localhost, -c local ../../play.yml
PLAY [all] ********************************************************************************************************************************************************************************************************
TASK [fail] *******************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fail as expected"}
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "First"
}
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "Second"
}
PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/73246
|
https://github.com/ansible/ansible/pull/78680
|
c827dc0dabff8850a73de9ca65148a74899767f2
|
fe94a99aa291d129aa6432e5d50e7117d9c6aae3
| 2021-01-15T13:05:10Z |
python
| 2023-10-25T07:42:13Z |
test/integration/targets/any_errors_fatal/73246.yml
| |
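The test body is empty in this record. A sketch reconstructed from the reproducer in the issue above (the included filename is hypothetical; the actual regression test may differ):

```yaml
- hosts: all
  gather_facts: false
  tasks:
    - block:
        - import_tasks: 73246-tasks.yml   # hypothetical file holding the `fail` task
      always:
        - block:
            - debug: msg="Second"         # must still run after the failure
```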
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,246 |
In some cases of nesting `block`s/`import_tasks`, the `fail` module interrupts the Ansible execution and disregards `always` blocks
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
In some cases of nesting `block`s/`import_tasks`, the `fail` module interrupts the Ansible execution and disregards `always` blocks.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
block import_tasks
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.15
config file = <...hidden...>/ansible.cfg
configured module search path = [u'/home/XXX/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/XXX/venv/lib/python2.7/site-packages/ansible
executable location = /home/XXX/venv/bin/ansible
python version = 2.7.18rc1 (default, Apr 7 2020, 12:05:55) [GCC 9.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_NOCOWS(<...hidden...>/ansible.cfg) = True
ANY_ERRORS_FATAL(<...hidden...>/ansible.cfg) = True
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
Create the following file structure:
File: play.yml
```yaml
- hosts: all
gather_facts: no
tasks:
- block:
- import_tasks: tasks.yml
always:
#- debug: msg="First"
- block:
- debug: msg="Second"
```
File: tasks.yml
```yaml
- fail: msg="Fail as expected"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
First, failure message is printed.
Then, debug "Second" is printed.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Failure message is printed.
Debug "second" is **not** printed.
<!--- Paste verbatim command output between quotes -->
```paste below
$ ansible-playbook -i localhost, -c local ../../play.yml
PLAY [all] ********************************************************************************************************************************************************************************************************
TASK [fail] *******************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fail as expected"}
NO MORE HOSTS LEFT ************************************************************************************************************************************************************************************************
PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
##### ADDITIONAL INFORMATION
It seems to occur only if a `block` is the first element in the `always` list. For example, in the following case:
File: play.yml
```yaml
- hosts: all
gather_facts: no
tasks:
- block:
- import_tasks: tasks.yml
always:
- debug: msg="First"
- block:
- debug: msg="Second"
```
File: tasks.yml
```yaml
- fail: msg="Fail as expected"
```
Everything works as expected and both the outer and inner debug messages get printed:
```
$ ansible-playbook -i localhost, -c local ../../play.yml
PLAY [all] ********************************************************************************************************************************************************************************************************
TASK [fail] *******************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fail as expected"}
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "First"
}
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "Second"
}
PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/73246
|
https://github.com/ansible/ansible/pull/78680
|
c827dc0dabff8850a73de9ca65148a74899767f2
|
fe94a99aa291d129aa6432e5d50e7117d9c6aae3
| 2021-01-15T13:05:10Z |
python
| 2023-10-25T07:42:13Z |
test/integration/targets/any_errors_fatal/80981.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,246 |
In some cases of nesting `block`s/`import_tasks`, the `fail` module interrupts the Ansible execution and disregards `always` blocks
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
In some cases of nesting `block`s/`import_tasks`, the `fail` module interrupts the Ansible execution and disregards `always` blocks.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
block import_tasks
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.15
config file = <...hidden...>/ansible.cfg
configured module search path = [u'/home/XXX/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/XXX/venv/lib/python2.7/site-packages/ansible
executable location = /home/XXX/venv/bin/ansible
python version = 2.7.18rc1 (default, Apr 7 2020, 12:05:55) [GCC 9.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_NOCOWS(<...hidden...>/ansible.cfg) = True
ANY_ERRORS_FATAL(<...hidden...>/ansible.cfg) = True
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
Create the following file structure:
File: play.yml
```yaml
- hosts: all
gather_facts: no
tasks:
- block:
- import_tasks: tasks.yml
always:
#- debug: msg="First"
- block:
- debug: msg="Second"
```
File: tasks.yml
```yaml
- fail: msg="Fail as expected"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
First, failure message is printed.
Then, debug "Second" is printed.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Failure message is printed.
Debug "second" is **not** printed.
<!--- Paste verbatim command output between quotes -->
```paste below
$ ansible-playbook -i localhost, -c local ../../play.yml
PLAY [all] ********************************************************************************************************************************************************************************************************
TASK [fail] *******************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fail as expected"}
NO MORE HOSTS LEFT ************************************************************************************************************************************************************************************************
PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
##### ADDITIONAL INFORMATION
It seems to occur only if a `block` is the first element in the `always` list. For example, in the following case:
File: play.yml
```yaml
- hosts: all
gather_facts: no
tasks:
- block:
- import_tasks: tasks.yml
always:
- debug: msg="First"
- block:
- debug: msg="Second"
```
File: tasks.yml
```yaml
- fail: msg="Fail as expected"
```
Everything works as expected and both the outer and inner debug messages get printed:
```
$ ansible-playbook -i localhost, -c local ../../play.yml
PLAY [all] ********************************************************************************************************************************************************************************************************
TASK [fail] *******************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fail as expected"}
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "First"
}
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "Second"
}
PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/73246
|
https://github.com/ansible/ansible/pull/78680
|
c827dc0dabff8850a73de9ca65148a74899767f2
|
fe94a99aa291d129aa6432e5d50e7117d9c6aae3
| 2021-01-15T13:05:10Z |
python
| 2023-10-25T07:42:13Z |
test/integration/targets/any_errors_fatal/runme.sh
|
#!/usr/bin/env bash
set -ux
ansible-playbook -i inventory "$@" play_level.yml| tee out.txt | grep 'any_errors_fatal_play_level_post_fail'
res=$?
cat out.txt
if [ "${res}" -eq 0 ] ; then
exit 1
fi
ansible-playbook -i inventory "$@" on_includes.yml | tee out.txt | grep 'any_errors_fatal_this_should_never_be_reached'
res=$?
cat out.txt
if [ "${res}" -eq 0 ] ; then
exit 1
fi
set -ux
ansible-playbook -i inventory "$@" always_block.yml | tee out.txt | grep 'any_errors_fatal_always_block_start'
res=$?
cat out.txt
if [ "${res}" -ne 0 ] ; then
exit 1
fi
set -ux
for test_name in test_include_role test_include_tasks; do
ansible-playbook -i inventory "$@" -e test_name=$test_name 50897.yml | tee out.txt | grep 'any_errors_fatal_this_should_never_be_reached'
res=$?
cat out.txt
if [ "${res}" -eq 0 ] ; then
exit 1
fi
done
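The playbooks referenced by this script are not included in the record. Judging from the marker it greps for, `always_block.yml` presumably looks something like this hypothetical sketch:

```yaml
- hosts: all
  gather_facts: false
  any_errors_fatal: true
  tasks:
    - block:
        - debug: msg="any_errors_fatal_always_block_start"
        - fail:
      always:
        - debug: msg="in always"          # hypothetical; the test only asserts the start marker
```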
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,246 |
In some cases of nesting `block`s/`import_tasks`, the `fail` module interrupts the Ansible execution and disregards `always` blocks
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
In some cases of nesting `block`s/`import_tasks`, the `fail` module interrupts the Ansible execution and disregards `always` blocks.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
block import_tasks
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.15
config file = <...hidden...>/ansible.cfg
configured module search path = [u'/home/XXX/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/XXX/venv/lib/python2.7/site-packages/ansible
executable location = /home/XXX/venv/bin/ansible
python version = 2.7.18rc1 (default, Apr 7 2020, 12:05:55) [GCC 9.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_NOCOWS(<...hidden...>/ansible.cfg) = True
ANY_ERRORS_FATAL(<...hidden...>/ansible.cfg) = True
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
Create the following file structure:
File: play.yml
```yaml
- hosts: all
gather_facts: no
tasks:
- block:
- import_tasks: tasks.yml
always:
#- debug: msg="First"
- block:
- debug: msg="Second"
```
File: tasks.yml
```yaml
- fail: msg="Fail as expected"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
First, failure message is printed.
Then, debug "Second" is printed.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Failure message is printed.
Debug "second" is **not** printed.
<!--- Paste verbatim command output between quotes -->
```paste below
$ ansible-playbook -i localhost, -c local ../../play.yml
PLAY [all] ********************************************************************************************************************************************************************************************************
TASK [fail] *******************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fail as expected"}
NO MORE HOSTS LEFT ************************************************************************************************************************************************************************************************
PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
##### ADDITIONAL INFORMATION
It seems to occur only if a `block` is the first element in the `always` list. For example, in the following case:
File: play.yml
```yaml
- hosts: all
gather_facts: no
tasks:
- block:
- import_tasks: tasks.yml
always:
- debug: msg="First"
- block:
- debug: msg="Second"
```
File: tasks.yml
```yaml
- fail: msg="Fail as expected"
```
Everything works as expected and both the outer and inner debug messages get printed:
```
$ ansible-playbook -i localhost, -c local ../../play.yml
PLAY [all] ********************************************************************************************************************************************************************************************************
TASK [fail] *******************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fail as expected"}
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "First"
}
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "Second"
}
PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/73246
|
https://github.com/ansible/ansible/pull/78680
|
c827dc0dabff8850a73de9ca65148a74899767f2
|
fe94a99aa291d129aa6432e5d50e7117d9c6aae3
| 2021-01-15T13:05:10Z |
python
| 2023-10-25T07:42:13Z |
test/integration/targets/handlers/force_handlers_blocks_81533-1.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,246 |
In some cases of nested `block`s/`import_tasks`, the `fail` module interrupts Ansible execution and disregards `always` blocks
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
In some cases of nested `block`s/`import_tasks`, the `fail` module interrupts Ansible execution and disregards `always` blocks.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
block import_tasks
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.15
config file = <...hidden...>/ansible.cfg
configured module search path = [u'/home/XXX/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/XXX/venv/lib/python2.7/site-packages/ansible
executable location = /home/XXX/venv/bin/ansible
python version = 2.7.18rc1 (default, Apr 7 2020, 12:05:55) [GCC 9.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_NOCOWS(<...hidden...>/ansible.cfg) = True
ANY_ERRORS_FATAL(<...hidden...>/ansible.cfg) = True
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
Create the following file structure:
File: play.yml
```yaml
- hosts: all
gather_facts: no
tasks:
- block:
- import_tasks: tasks.yml
always:
#- debug: msg="First"
- block:
- debug: msg="Second"
```
File: tasks.yml
```yaml
- fail: msg="Fail as expected"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
First, the failure message is printed.
Then, the debug message "Second" is printed.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The failure message is printed.
The debug message "Second" is **not** printed.
<!--- Paste verbatim command output between quotes -->
```paste below
$ ansible-playbook -i localhost, -c local ../../play.yml
PLAY [all] ********************************************************************************************************************************************************************************************************
TASK [fail] *******************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fail as expected"}
NO MORE HOSTS LEFT ************************************************************************************************************************************************************************************************
PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
##### ADDITIONAL INFORMATION
It seems to appear only if a `block` is the first element in the `always` section. For example, in this case:
File: play.yml
```yaml
- hosts: all
gather_facts: no
tasks:
- block:
- import_tasks: tasks.yml
always:
- debug: msg="First"
- block:
- debug: msg="Second"
```
File: tasks.yml
```yaml
- fail: msg="Fail as expected"
```
Everything works as expected and both the outer and the inner debug messages get printed:
```
$ ansible-playbook -i localhost, -c local ../../play.yml
PLAY [all] ********************************************************************************************************************************************************************************************************
TASK [fail] *******************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fail as expected"}
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "First"
}
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "Second"
}
PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/73246
|
https://github.com/ansible/ansible/pull/78680
|
c827dc0dabff8850a73de9ca65148a74899767f2
|
fe94a99aa291d129aa6432e5d50e7117d9c6aae3
| 2021-01-15T13:05:10Z |
python
| 2023-10-25T07:42:13Z |
test/integration/targets/handlers/force_handlers_blocks_81533-2.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,246 |
In some cases of nested `block`s/`import_tasks`, the `fail` module interrupts Ansible execution and disregards `always` blocks
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
In some cases of nested `block`s/`import_tasks`, the `fail` module interrupts Ansible execution and disregards `always` blocks.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
block import_tasks
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.15
config file = <...hidden...>/ansible.cfg
configured module search path = [u'/home/XXX/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/XXX/venv/lib/python2.7/site-packages/ansible
executable location = /home/XXX/venv/bin/ansible
python version = 2.7.18rc1 (default, Apr 7 2020, 12:05:55) [GCC 9.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_NOCOWS(<...hidden...>/ansible.cfg) = True
ANY_ERRORS_FATAL(<...hidden...>/ansible.cfg) = True
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
Create the following file structure:
File: play.yml
```yaml
- hosts: all
gather_facts: no
tasks:
- block:
- import_tasks: tasks.yml
always:
#- debug: msg="First"
- block:
- debug: msg="Second"
```
File: tasks.yml
```yaml
- fail: msg="Fail as expected"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
First, the failure message is printed.
Then, the debug message "Second" is printed.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The failure message is printed.
The debug message "Second" is **not** printed.
<!--- Paste verbatim command output between quotes -->
```paste below
$ ansible-playbook -i localhost, -c local ../../play.yml
PLAY [all] ********************************************************************************************************************************************************************************************************
TASK [fail] *******************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fail as expected"}
NO MORE HOSTS LEFT ************************************************************************************************************************************************************************************************
PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
##### ADDITIONAL INFORMATION
It seems to appear only if a `block` is the first element in the `always` section. For example, in this case:
File: play.yml
```yaml
- hosts: all
gather_facts: no
tasks:
- block:
- import_tasks: tasks.yml
always:
- debug: msg="First"
- block:
- debug: msg="Second"
```
File: tasks.yml
```yaml
- fail: msg="Fail as expected"
```
Everything works as expected and both the outer and the inner debug messages get printed:
```
$ ansible-playbook -i localhost, -c local ../../play.yml
PLAY [all] ********************************************************************************************************************************************************************************************************
TASK [fail] *******************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fail as expected"}
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "First"
}
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "Second"
}
PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/73246
|
https://github.com/ansible/ansible/pull/78680
|
c827dc0dabff8850a73de9ca65148a74899767f2
|
fe94a99aa291d129aa6432e5d50e7117d9c6aae3
| 2021-01-15T13:05:10Z |
python
| 2023-10-25T07:42:13Z |
test/integration/targets/handlers/runme.sh
|
#!/usr/bin/env bash
set -eux
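# export the variable name once; subsequent bare assignments then propagate
# to every ansible-playbook child process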
export ANSIBLE_FORCE_HANDLERS
ANSIBLE_FORCE_HANDLERS=false
# simple handler test
ansible-playbook test_handlers.yml -i inventory.handlers -v "$@" --tags scenario1
# simple from_handlers test
ansible-playbook from_handlers.yml -i inventory.handlers -v "$@" --tags scenario1
ansible-playbook test_listening_handlers.yml -i inventory.handlers -v "$@"
[ "$(ansible-playbook test_handlers.yml -i inventory.handlers -v "$@" --tags scenario2 -l A \
| grep -E -o 'RUNNING HANDLER \[test_handlers : .*]')" = "RUNNING HANDLER [test_handlers : test handler]" ]
# Test forcing handlers using the linear and free strategy
for strategy in linear free; do
export ANSIBLE_STRATEGY=$strategy
# Not forcing, should only run on successful host
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_B" ]
# Forcing from command line
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing from command line, should only run later tasks on unfailed hosts
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers \
| grep -E -o CALLED_TASK_. | sort | uniq | xargs)" = "CALLED_TASK_B CALLED_TASK_D CALLED_TASK_E" ]
# Forcing from command line, should call handlers even if all hosts fail
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers -e fail_all=yes \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing from ansible.cfg
[ "$(ANSIBLE_FORCE_HANDLERS=true ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing true in play
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags force_true_in_play \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing false in play, which overrides command line
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags force_false_in_play --force-handlers \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_B" ]
# https://github.com/ansible/ansible/pull/80898
[ "$(ansible-playbook 80880.yml -i inventory.handlers -vv "$@" 2>&1)" ]
unset ANSIBLE_STRATEGY
done
[ "$(ansible-playbook test_handlers_include.yml -i ../../inventory -v "$@" --tags playbook_include_handlers \
| grep -E -o 'RUNNING HANDLER \[.*]')" = "RUNNING HANDLER [test handler]" ]
[ "$(ansible-playbook test_handlers_include.yml -i ../../inventory -v "$@" --tags role_include_handlers \
| grep -E -o 'RUNNING HANDLER \[test_handlers_include : .*]')" = "RUNNING HANDLER [test_handlers_include : test handler]" ]
[ "$(ansible-playbook test_handlers_include_role.yml -i ../../inventory -v "$@" \
| grep -E -o 'RUNNING HANDLER \[test_handlers_include_role : .*]')" = "RUNNING HANDLER [test_handlers_include_role : test handler]" ]
# Notify handler listen
ansible-playbook test_handlers_listen.yml -i inventory.handlers -v "$@"
# Notifying inexistent handlers results in an error
set +e
result="$(ansible-playbook test_handlers_inexistent_notify.yml -i inventory.handlers "$@" 2>&1)"
set -e
grep -q "ERROR! The requested handler 'notify_inexistent_handler' was not found in either the main handlers list nor in the listening handlers list" <<< "$result"
# Notifying inexistent handlers causes no error when ANSIBLE_ERROR_ON_MISSING_HANDLER=false
ANSIBLE_ERROR_ON_MISSING_HANDLER=false ansible-playbook test_handlers_inexistent_notify.yml -i inventory.handlers -v "$@"
ANSIBLE_ERROR_ON_MISSING_HANDLER=false ansible-playbook test_templating_in_handlers.yml -v "$@"
# https://github.com/ansible/ansible/issues/36649
output_dir=/tmp
set +e
result="$(ansible-playbook test_handlers_any_errors_fatal.yml -e output_dir=$output_dir -i inventory.handlers -v "$@" 2>&1)"
set -e
[ ! -f $output_dir/should_not_exist_B ] || (rm -f $output_dir/should_not_exist_B && exit 1)
# https://github.com/ansible/ansible/issues/47287
[ "$(ansible-playbook test_handlers_including_task.yml -i ../../inventory -v "$@" | grep -E -o 'failed=[0-9]+')" = "failed=0" ]
# https://github.com/ansible/ansible/issues/71222
ansible-playbook test_role_handlers_including_tasks.yml -i ../../inventory -v "$@"
# https://github.com/ansible/ansible/issues/27237
set +e
result="$(ansible-playbook test_handlers_template_run_once.yml -i inventory.handlers "$@" 2>&1)"
set -e
grep -q "handler A" <<< "$result"
grep -q "handler B" <<< "$result"
# Test an undefined variable in another handler name isn't a failure
ansible-playbook 58841.yml "$@" --tags lazy_evaluation 2>&1 | tee out.txt ; cat out.txt
grep out.txt -e "\[WARNING\]: Handler 'handler name with {{ test_var }}' is unusable"
[ "$(grep out.txt -ce 'handler ran')" = "1" ]
[ "$(grep out.txt -ce 'handler with var ran')" = "0" ]
# Test templating a handler name with a defined variable
ansible-playbook 58841.yml "$@" --tags evaluation_time -e test_var=myvar | tee out.txt ; cat out.txt
[ "$(grep out.txt -ce 'handler ran')" = "0" ]
[ "$(grep out.txt -ce 'handler with var ran')" = "1" ]
# Test the handler is not found when the variable is undefined
ansible-playbook 58841.yml "$@" --tags evaluation_time 2>&1 | tee out.txt ; cat out.txt
grep out.txt -e "ERROR! The requested handler 'handler name with myvar' was not found"
grep out.txt -e "\[WARNING\]: Handler 'handler name with {{ test_var }}' is unusable"
[ "$(grep out.txt -ce 'handler ran')" = "0" ]
[ "$(grep out.txt -ce 'handler with var ran')" = "0" ]
# Test include_role and import_role cannot be used as handlers
ansible-playbook test_role_as_handler.yml "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! Using 'include_role' as a handler is not supported."
# Test that notifying a handler from within include_tasks does not work anymore
ansible-playbook test_notify_included.yml "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'I was included')" = "1" ]
grep out.txt -e "ERROR! The requested handler 'handler_from_include' was not found in either the main handlers list nor in the listening handlers list"
ansible-playbook test_handlers_meta.yml -i inventory.handlers -vv "$@" | tee out.txt
[ "$(grep out.txt -ce 'RUNNING HANDLER \[noop_handler\]')" = "1" ]
[ "$(grep out.txt -ce 'META: noop')" = "1" ]
# https://github.com/ansible/ansible/issues/46447
set +e
test "$(ansible-playbook 46447.yml -i inventory.handlers -vv "$@" 2>&1 | grep -c 'SHOULD NOT GET HERE')"
set -e
# https://github.com/ansible/ansible/issues/52561
ansible-playbook 52561.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler1 ran')" = "1" ]
# Test flush_handlers meta task does not imply any_errors_fatal
ansible-playbook 54991.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran')" = "4" ]
ansible-playbook order.yml -i inventory.handlers "$@" 2>&1
set +e
ansible-playbook order.yml --force-handlers -e test_force_handlers=true -i inventory.handlers "$@" 2>&1
set -e
ansible-playbook include_handlers_fail_force.yml --force-handlers -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'included handler ran')" = "1" ]
ansible-playbook test_flush_handlers_as_handler.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! flush_handlers cannot be used as a handler"
ansible-playbook test_skip_flush.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran')" = "0" ]
ansible-playbook test_flush_in_rescue_always.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran in rescue')" = "1" ]
[ "$(grep out.txt -ce 'handler ran in always')" = "2" ]
[ "$(grep out.txt -ce 'lockstep works')" = "2" ]
ansible-playbook test_handlers_infinite_loop.yml -i inventory.handlers "$@" 2>&1
ansible-playbook test_flush_handlers_rescue_always.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'rescue ran')" = "1" ]
[ "$(grep out.txt -ce 'always ran')" = "2" ]
[ "$(grep out.txt -ce 'should run for both hosts')" = "2" ]
ansible-playbook test_fqcn_meta_flush_handlers.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
grep out.txt -e "handler ran"
grep out.txt -e "after flush"
ansible-playbook 79776.yml -i inventory.handlers "$@"
ansible-playbook test_block_as_handler.yml "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! Using a block as a handler is not supported."
ansible-playbook test_block_as_handler-include.yml "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! Using a block as a handler is not supported."
ansible-playbook test_block_as_handler-import.yml "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! Using a block as a handler is not supported."
ansible-playbook test_include_role_handler_once.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran')" = "1" ]
ansible-playbook test_listen_role_dedup.yml "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'a handler from a role')" = "1" ]
ansible localhost -m include_role -a "name=r1-dep_chain-vars" "$@"
ansible-playbook test_include_tasks_in_include_role.yml "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran')" = "1" ]
ansible-playbook test_run_once.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran once')" = "1" ]
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,213 |
System language dependent parsing error within git module
|
### Summary
I just updated to ansible-core `2.12.3` via pip from ansible `2.11.9` and was confronted with weird issues during one of my plays, which tries to ensure a proper local git branch. It complained about a parsing error while navigating a git submodule path. After looking and poking around a little, I found [this line of code](https://github.com/ansible/ansible/blob/0c4c18bc04c562755a368df67fce943ca15418ee/lib/ansible/modules/git.py#L546) being responsible for the parsing issue, as my machine uses de_DE as its locale. This changes the git output from the expected `Entering <path>` to `Betrete <path>` and makes the parser fail. I did not perform a system update alongside the Ansible upgrade; I am sure it was working with the older Ansible version (as I successfully rolled back), and the update caused the break.
I see two issues here. First, the parser expects English output, which might not always be provided. I tried to think about a solution, but the output of the actually invoked command, `git submodule foreach`, leaves no room for alternative methods of recognition. Second, Ansible seems to have used some sort of private environment where the locale must have been set to English somehow; otherwise my play should never have worked at all. Regarding the latter case, I really wonder what broke in the recent update.
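For illustration, a hypothetical shell session (placeholder submodule path and hash) showing how the locale changes the output the module has to parse:
```console
$ LC_ALL=de_DE.UTF-8 git submodule foreach 'git rev-parse HEAD'
Betrete 'roles/example-submodule'
4c020102a9cd6fe908c9a4a326a38f972f63a903
$ LC_ALL=C git submodule foreach 'git rev-parse HEAD'
Entering 'roles/example-submodule'
4c020102a9cd6fe908c9a4a326a38f972f63a903
```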
### Issue Type
Bug Report
### Component Name
git
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.3]
NOTE: More information is no longer available, as I had already downgraded to get back to a working state.
```
### Configuration
```console
no output
```
### OS / Environment
Ubuntu 20.04.4 LTS
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- hosts: all
gather_facts: false
any_errors_fatal: true
run_once: true # Required as this runs locally
tasks:
- name: Perform pre-run checks
connection: local
block:
# Check if local repository is on most recent commit
- name: Checkout devops branch '{{ git_config.branch }}', force restart (abort play) if branch had to be changed
ansible.builtin.git:
clone: false # Do never clone, this should be run from within collection
dest: "{{ git_config.path }}"
repo: "{{ git_config.repo }}"
version: "{{ git_config.branch }}"
register: git_status
failed_when: git_status.failed or git_status.changed # Module failure (e.g. dirty) or branch was changed/pulled
```
As long as an actual repository containing at least one submodule is given, the error will be triggered if the machine is set to a language other than English.
### Expected Results
I expected the module to ensure I am on the proper branch.
### Actual Results
```console
The module outputs the following error (abbreviated to prevent cluttering):
Unable to parse submodule hash line: Betrete 'provisioning/roles/ansible-redis'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77213
|
https://github.com/ansible/ansible/pull/81931
|
fe94a99aa291d129aa6432e5d50e7117d9c6aae3
|
b4920c83adf959c5bd1b6b9157ce67a858c9d4db
| 2022-03-05T16:47:49Z |
python
| 2023-10-25T13:47:55Z |
changelogs/fragments/81931-locale-related-parsing-error-git.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,213 |
System language dependent parsing error within git module
|
### Summary
I just updated to ansible-core `2.12.3` via pip from ansible `2.11.9` and was confronted with weird issues during one of my plays, which tries to ensure a proper local git branch. It complained about a parsing error while navigating a git submodule path. After looking and poking around a little, I found [this line of code](https://github.com/ansible/ansible/blob/0c4c18bc04c562755a368df67fce943ca15418ee/lib/ansible/modules/git.py#L546) being responsible for the parsing issue, as my machine uses de_DE as its locale. This changes the git output from the expected `Entering <path>` to `Betrete <path>` and makes the parser fail. I did not perform a system update alongside the Ansible upgrade; I am sure it was working with the older Ansible version (as I successfully rolled back), and the update caused the break.
I see two issues here. First, the parser expects English output, which might not always be provided. I tried to think about a solution, but the output of the actually invoked command, `git submodule foreach`, leaves no room for alternative methods of recognition. Second, Ansible seems to have used some sort of private environment where the locale must have been set to English somehow; otherwise my play should never have worked at all. Regarding the latter case, I really wonder what broke in the recent update.
### Issue Type
Bug Report
### Component Name
git
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.3]
NOTE: More information is no longer available, as I had already downgraded to get back to a working state.
```
### Configuration
```console
no output
```
### OS / Environment
Ubuntu 20.04.4 LTS
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- hosts: all
gather_facts: false
any_errors_fatal: true
run_once: true # Required as this runs locally
tasks:
- name: Perform pre-run checks
connection: local
block:
# Check if local repository is on most recent commit
- name: Checkout devops branch '{{ git_config.branch }}', force restart (abort play) if branch had to be changed
ansible.builtin.git:
clone: false # Do never clone, this should be run from within collection
dest: "{{ git_config.path }}"
repo: "{{ git_config.repo }}"
version: "{{ git_config.branch }}"
register: git_status
failed_when: git_status.failed or git_status.changed # Module failure (e.g. dirty) or branch was changed/pulled
```
As long as an actual repository containing at least one submodule is given, the error will be triggered if the machine is set to a language other than English.
### Expected Results
I expected the module to ensure I am on the proper branch.
### Actual Results
```console
The module outputs the following error (abbreviated to prevent cluttering):
Unable to parse submodule hash line: Betrete 'provisioning/roles/ansible-redis'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
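A minimal sketch of the locale-forcing approach behind the linked fix; `run_git` is a hypothetical helper for illustration, not the module's actual code:
```python
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.locale import get_best_parsable_locale
def run_git(module, cmd, cwd):
    # Pick the best locale available on the host under which git can be
    # expected to emit parsable (English) output, e.g. C or C.UTF-8.
    locale = get_best_parsable_locale(module)
    # environment_update only affects this single command invocation.
    env = {'LANGUAGE': locale, 'LC_ALL': locale, 'LC_MESSAGES': locale}
    return module.run_command(cmd, cwd=cwd, environment_update=env)
```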
|
https://github.com/ansible/ansible/issues/77213
|
https://github.com/ansible/ansible/pull/81931
|
fe94a99aa291d129aa6432e5d50e7117d9c6aae3
|
b4920c83adf959c5bd1b6b9157ce67a858c9d4db
| 2022-03-05T16:47:49Z |
python
| 2023-10-25T13:47:55Z |
lib/ansible/modules/git.py
|
# -*- coding: utf-8 -*-
# (c) 2012, Michael DeHaan <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import annotations
DOCUMENTATION = '''
---
module: git
author:
- "Ansible Core Team"
- "Michael DeHaan"
version_added: "0.0.1"
short_description: Deploy software (or files) from git checkouts
description:
- Manage I(git) checkouts of repositories to deploy files or software.
extends_documentation_fragment: action_common_attributes
options:
repo:
description:
- git, SSH, or HTTP(S) protocol address of the git repository.
type: str
required: true
aliases: [ name ]
dest:
description:
      - The path where the repository should be checked out. This
is equivalent to C(git clone [repo_url] [directory]). The repository
named in O(repo) is not appended to this path and the destination directory must be empty. This
parameter is required, unless O(clone) is set to V(false).
type: path
required: true
version:
description:
- What version of the repository to check out. This can be
        the literal string V(HEAD), a branch name, or a tag name.
It can also be a I(SHA-1) hash, in which case O(refspec) needs
to be specified if the given revision is not already available.
type: str
default: "HEAD"
accept_hostkey:
description:
      - Whether to ensure that "-o StrictHostKeyChecking=no" is present as an ssh option.
- Be aware that this disables a protection against MITM attacks.
- Those using OpenSSH >= 7.5 might want to set O(ssh_opts) to V(StrictHostKeyChecking=accept-new)
instead, it does not remove the MITM issue but it does restrict it to the first attempt.
type: bool
default: 'no'
version_added: "1.5"
accept_newhostkey:
description:
- As of OpenSSH 7.5, "-o StrictHostKeyChecking=accept-new" can be
        used, which is safer and will only accept host keys that are
        not present or are the same. If V(true), ensure that
"-o StrictHostKeyChecking=accept-new" is present as an ssh option.
type: bool
default: 'no'
version_added: "2.12"
ssh_opts:
description:
- Options git will pass to ssh when used as protocol, it works via C(git)'s
E(GIT_SSH)/E(GIT_SSH_COMMAND) environment variables.
- For older versions it appends E(GIT_SSH_OPTS) (specific to this module) to the
variables above or via a wrapper script.
- Other options can add to this list, like O(key_file) and O(accept_hostkey).
- An example value could be "-o StrictHostKeyChecking=no" (although this particular
option is better set by O(accept_hostkey)).
- The module ensures that 'BatchMode=yes' is always present to avoid prompts.
type: str
version_added: "1.5"
key_file:
description:
- Specify an optional private key file path, on the target host, to use for the checkout.
- This ensures 'IdentitiesOnly=yes' is present in O(ssh_opts).
type: path
version_added: "1.5"
reference:
description:
- Reference repository (see "git clone --reference ...").
type: str
version_added: "1.4"
remote:
description:
- Name of the remote.
type: str
default: "origin"
refspec:
description:
- Add an additional refspec to be fetched.
If version is set to a I(SHA-1) not reachable from any branch
or tag, this option may be necessary to specify the ref containing
the I(SHA-1).
Uses the same syntax as the C(git fetch) command.
An example value could be "refs/meta/config".
type: str
version_added: "1.9"
force:
description:
- If V(true), any modified files in the working
repository will be discarded. Prior to 0.7, this was always
V(true) and could not be disabled. Prior to 1.9, the default was
V(true).
type: bool
default: 'no'
version_added: "0.7"
depth:
description:
- Create a shallow clone with a history truncated to the specified
        number of revisions. The minimum possible value is V(1), otherwise
ignored. Needs I(git>=1.9.1) to work correctly.
type: int
version_added: "1.2"
clone:
description:
- If V(false), do not clone the repository even if it does not exist locally.
type: bool
default: 'yes'
version_added: "1.9"
update:
description:
- If V(false), do not retrieve new revisions from the origin repository.
- Operations like archive will work on the existing (old) repository and might
not respond to changes to the options version or remote.
type: bool
default: 'yes'
version_added: "1.2"
executable:
description:
- Path to git executable to use. If not supplied,
the normal mechanism for resolving binary paths will be used.
type: path
version_added: "1.4"
bare:
description:
      - If V(true), the repository will be created as a bare repo; otherwise
it will be a standard repo with a workspace.
type: bool
default: 'no'
version_added: "1.4"
umask:
description:
- The umask to set before doing any checkouts, or any other
repository maintenance.
type: raw
version_added: "2.2"
recursive:
description:
- If V(false), repository will be cloned without the C(--recursive)
option, skipping sub-modules.
type: bool
default: 'yes'
version_added: "1.6"
single_branch:
description:
- Clone only the history leading to the tip of the specified revision.
type: bool
default: 'no'
version_added: '2.11'
track_submodules:
description:
- If V(true), submodules will track the latest commit on their
master branch (or other branch specified in .gitmodules). If
V(false), submodules will be kept at the revision specified by the
main project. This is equivalent to specifying the C(--remote) flag
to git submodule update.
type: bool
default: 'no'
version_added: "1.8"
verify_commit:
description:
- If V(true), when cloning or checking out a O(version) verify the
signature of a GPG signed commit. This requires git version>=2.1.0
to be installed. The commit MUST be signed and the public key MUST
be present in the GPG keyring.
type: bool
default: 'no'
version_added: "2.0"
archive:
description:
- Specify archive file path with extension. If specified, creates an
archive file of the specified format containing the tree structure
for the source tree.
Allowed archive formats ["zip", "tar.gz", "tar", "tgz"].
- This will clone and perform git archive from local directory as not
all git servers support git archive.
type: path
version_added: "2.4"
archive_prefix:
description:
- Specify a prefix to add to each file path in archive. Requires O(archive) to be specified.
version_added: "2.10"
type: str
separate_git_dir:
description:
- The path to place the cloned repository. If specified, Git repository
can be separated from working tree.
type: path
version_added: "2.7"
gpg_whitelist:
description:
- A list of trusted GPG fingerprints to compare to the fingerprint of the
GPG-signed commit.
- Only used when O(verify_commit=yes).
- Use of this feature requires Git 2.6+ due to its reliance on git's C(--raw) flag to C(verify-commit) and C(verify-tag).
type: list
elements: str
default: []
version_added: "2.9"
requirements:
- git>=1.7.1 (the command line tool)
attributes:
check_mode:
support: full
diff_mode:
support: full
platform:
platforms: posix
notes:
- "If the task seems to be hanging, first verify remote host is in C(known_hosts).
SSH will prompt user to authorize the first contact with a remote host. To avoid this prompt,
one solution is to use the option accept_hostkey. Another solution is to
add the remote host public key in C(/etc/ssh/ssh_known_hosts) before calling
the git module, with the following command: ssh-keyscan -H remote_host.com >> /etc/ssh/ssh_known_hosts."
'''
EXAMPLES = '''
- name: Git checkout
ansible.builtin.git:
repo: 'https://foosball.example.org/path/to/repo.git'
dest: /srv/checkout
version: release-0.22
- name: Read-write git checkout from github
ansible.builtin.git:
repo: [email protected]:mylogin/hello.git
dest: /home/mylogin/hello
- name: Just ensuring the repo checkout exists
ansible.builtin.git:
repo: 'https://foosball.example.org/path/to/repo.git'
dest: /srv/checkout
update: no
- name: Just get information about the repository whether or not it has already been cloned locally
ansible.builtin.git:
repo: 'https://foosball.example.org/path/to/repo.git'
dest: /srv/checkout
clone: no
update: no
- name: Checkout a github repo and use refspec to fetch all pull requests
ansible.builtin.git:
repo: https://github.com/ansible/ansible-examples.git
dest: /src/ansible-examples
refspec: '+refs/pull/*:refs/heads/*'
- name: Create git archive from repo
ansible.builtin.git:
repo: https://github.com/ansible/ansible-examples.git
dest: /src/ansible-examples
archive: /tmp/ansible-examples.zip
- name: Clone a repo with separate git directory
ansible.builtin.git:
repo: https://github.com/ansible/ansible-examples.git
dest: /src/ansible-examples
separate_git_dir: /src/ansible-examples.git
- name: Example clone of a single branch
ansible.builtin.git:
repo: https://github.com/ansible/ansible-examples.git
dest: /src/ansible-examples
single_branch: yes
version: master
- name: Avoid hanging when http(s) password is missing
ansible.builtin.git:
repo: https://github.com/ansible/could-be-a-private-repo
dest: /src/from-private-repo
environment:
GIT_TERMINAL_PROMPT: 0 # reports "terminal prompts disabled" on missing password
# or GIT_ASKPASS: /bin/true # for git before version 2.3.0, reports "Authentication failed" on missing password
'''
RETURN = '''
after:
description: Last commit revision of the repository retrieved during the update.
returned: success
type: str
sample: 4c020102a9cd6fe908c9a4a326a38f972f63a903
before:
description: Commit revision before the repository was updated, "null" for new repository.
returned: success
type: str
sample: 67c04ebe40a003bda0efb34eacfb93b0cafdf628
remote_url_changed:
  description: True or False, depending on whether the remote URL was changed.
returned: success
type: bool
sample: True
warnings:
  description: List of warnings if requested features were not available due to the git version being too old.
returned: error
type: str
sample: git version is too old to fully support the depth argument. Falling back to full checkouts.
git_dir_now:
description: Contains the new path of .git directory if it is changed.
returned: success
type: str
sample: /path/to/new/git/dir
git_dir_before:
description: Contains the original path of .git directory if it is changed.
returned: success
type: str
sample: /path/to/old/git/dir
'''
import filecmp
import os
import re
import shlex
import stat
import sys
import shutil
import tempfile
from ansible.module_utils.compat.version import LooseVersion
from ansible.module_utils.common.text.converters import to_native, to_text
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.six import b, string_types
def relocate_repo(module, result, repo_dir, old_repo_dir, worktree_dir):
if os.path.exists(repo_dir):
module.fail_json(msg='Separate-git-dir path %s already exists.' % repo_dir)
if worktree_dir:
dot_git_file_path = os.path.join(worktree_dir, '.git')
try:
shutil.move(old_repo_dir, repo_dir)
with open(dot_git_file_path, 'w') as dot_git_file:
dot_git_file.write('gitdir: %s' % repo_dir)
result['git_dir_before'] = old_repo_dir
result['git_dir_now'] = repo_dir
except (IOError, OSError) as err:
# if we already moved the .git dir, roll it back
if os.path.exists(repo_dir):
shutil.move(repo_dir, old_repo_dir)
module.fail_json(msg=u'Unable to move git dir. %s' % to_text(err))
def head_splitter(headfile, remote, module=None, fail_on_error=False):
'''Extract the head reference'''
# https://github.com/ansible/ansible-modules-core/pull/907
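    # Example: with remote 'origin', a headfile line of
    #   'ref: refs/remotes/origin/master'
    # reduces to 'master'.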
res = None
if os.path.exists(headfile):
rawdata = None
try:
f = open(headfile, 'r')
rawdata = f.readline()
f.close()
except Exception:
if fail_on_error and module:
module.fail_json(msg="Unable to read %s" % headfile)
if rawdata:
try:
rawdata = rawdata.replace('refs/remotes/%s' % remote, '', 1)
refparts = rawdata.split(' ')
newref = refparts[-1]
nrefparts = newref.split('/', 2)
res = nrefparts[-1].rstrip('\n')
except Exception:
if fail_on_error and module:
module.fail_json(msg="Unable to split head from '%s'" % rawdata)
return res
def unfrackgitpath(path):
if path is None:
return None
# copied from ansible.utils.path
return os.path.normpath(os.path.realpath(os.path.expanduser(os.path.expandvars(path))))
def get_submodule_update_params(module, git_path, cwd):
# or: git submodule [--quiet] update [--init] [-N|--no-fetch]
# [-f|--force] [--rebase] [--reference <repository>] [--merge]
# [--recursive] [--] [<path>...]
params = []
    # run 'git submodule update --help' and parse its usage line to discover the valid params
cmd = "%s submodule update --help" % (git_path)
rc, stdout, stderr = module.run_command(cmd, cwd=cwd)
lines = stderr.split('\n')
update_line = None
for line in lines:
if 'git submodule [--quiet] update ' in line:
update_line = line
if update_line:
update_line = update_line.replace('[', '')
update_line = update_line.replace(']', '')
update_line = update_line.replace('|', ' ')
parts = shlex.split(update_line)
for part in parts:
if part.startswith('--'):
part = part.replace('--', '')
params.append(part)
return params
def write_ssh_wrapper(module):
'''
    This writes a shell wrapper for ssh options to be used with git.
    This is only relevant for older versions of git that cannot
    handle the options themselves. Returns the path to the script.
'''
try:
# make sure we have full permission to the module_dir, which
# may not be the case if we're sudo'ing to a non-root user
if os.access(module.tmpdir, os.W_OK | os.R_OK | os.X_OK):
fd, wrapper_path = tempfile.mkstemp(prefix=module.tmpdir + '/')
else:
raise OSError
except (IOError, OSError):
fd, wrapper_path = tempfile.mkstemp()
# use existing git_ssh/ssh_command, fallback to 'ssh'
template = b("""#!/bin/sh
%s $GIT_SSH_OPTS "$@"
""" % os.environ.get('GIT_SSH', os.environ.get('GIT_SSH_COMMAND', 'ssh')))
# write it
with os.fdopen(fd, 'w+b') as fh:
fh.write(template)
# set execute
st = os.stat(wrapper_path)
os.chmod(wrapper_path, st.st_mode | stat.S_IEXEC)
module.debug('Wrote temp git ssh wrapper (%s): %s' % (wrapper_path, template))
# ensure we cleanup after ourselves
module.add_cleanup_file(path=wrapper_path)
return wrapper_path
def set_git_ssh_env(key_file, ssh_opts, git_version, module):
'''
    Use environment variables to configure git's ssh execution,
    which varies by version; this function should handle all of them.
'''
# initialise to existing ssh opts and/or append user provided
if ssh_opts is None:
ssh_opts = os.environ.get('GIT_SSH_OPTS', '')
else:
ssh_opts = os.environ.get('GIT_SSH_OPTS', '') + ' ' + ssh_opts
# hostkey acceptance
accept_key = "StrictHostKeyChecking=no"
if module.params['accept_hostkey'] and accept_key not in ssh_opts:
ssh_opts += " -o %s" % accept_key
# avoid prompts
force_batch = 'BatchMode=yes'
if force_batch not in ssh_opts:
ssh_opts += ' -o %s' % (force_batch)
# deal with key file
if key_file:
key_opt = '-i %s' % key_file
if key_opt not in ssh_opts:
ssh_opts += ' %s' % key_opt
ikey = 'IdentitiesOnly=yes'
if ikey not in ssh_opts:
ssh_opts += ' -o %s' % ikey
    # git older than 2.3 does not know how to use GIT_SSH_COMMAND,
    # so we force it into the GIT_SSH var
# https://github.com/gitster/git/commit/09d60d785c68c8fa65094ecbe46fbc2a38d0fc1f
if git_version < LooseVersion('2.3.0'):
# for use in wrapper
os.environ["GIT_SSH_OPTS"] = ssh_opts
# these versions don't support GIT_SSH_OPTS so have to write wrapper
wrapper = write_ssh_wrapper(module)
        # force use of GIT_SSH_OPTS via the wrapper; GIT_SSH cannot handle arguments
os.environ['GIT_SSH'] = wrapper
else:
# we construct full finalized command string here
full_cmd = os.environ.get('GIT_SSH', os.environ.get('GIT_SSH_COMMAND', 'ssh'))
if ssh_opts:
full_cmd += ' ' + ssh_opts
# git_ssh_command can handle arguments to ssh
os.environ["GIT_SSH_COMMAND"] = full_cmd
def get_version(module, git_path, dest, ref="HEAD"):
''' samples the version of the git repo '''
cmd = "%s rev-parse %s" % (git_path, ref)
rc, stdout, stderr = module.run_command(cmd, cwd=dest)
sha = to_native(stdout).rstrip('\n')
return sha
def ssh_supports_acceptnewhostkey(module):
try:
ssh_path = get_bin_path('ssh')
except ValueError as err:
module.fail_json(
msg='Remote host is missing ssh command, so you cannot '
'use acceptnewhostkey option.', details=to_text(err))
supports_acceptnewhostkey = True
cmd = [ssh_path, '-o', 'StrictHostKeyChecking=accept-new', '-V']
rc, stdout, stderr = module.run_command(cmd)
if rc != 0:
supports_acceptnewhostkey = False
return supports_acceptnewhostkey
def get_submodule_versions(git_path, module, dest, version='HEAD'):
cmd = [git_path, 'submodule', 'foreach', git_path, 'rev-parse', version]
(rc, out, err) = module.run_command(cmd, cwd=dest)
if rc != 0:
module.fail_json(
msg='Unable to determine hashes of submodules',
stdout=out,
stderr=err,
rc=rc)
submodules = {}
subm_name = None
for line in out.splitlines():
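        # 'git submodule foreach' prefixes each submodule's output with an
        # "Entering '<path>'" line; note that this prefix is locale-dependent
        # (https://github.com/ansible/ansible/issues/77213), so git must run
        # under an English/parsable locale for this parsing to work.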
if line.startswith("Entering '"):
subm_name = line[10:-1]
elif len(line.strip()) == 40:
if subm_name is None:
module.fail_json()
submodules[subm_name] = line.strip()
subm_name = None
else:
module.fail_json(msg='Unable to parse submodule hash line: %s' % line.strip())
if subm_name is not None:
module.fail_json(msg='Unable to find hash for submodule: %s' % subm_name)
return submodules
def clone(git_path, module, repo, dest, remote, depth, version, bare,
reference, refspec, git_version_used, verify_commit, separate_git_dir, result, gpg_whitelist, single_branch):
''' makes a new git repo if it does not already exist '''
dest_dirname = os.path.dirname(dest)
try:
os.makedirs(dest_dirname)
except Exception:
pass
cmd = [git_path, 'clone']
if bare:
cmd.append('--bare')
else:
cmd.extend(['--origin', remote])
is_branch_or_tag = is_remote_branch(git_path, module, dest, repo, version) or is_remote_tag(git_path, module, dest, repo, version)
if depth:
if version == 'HEAD' or refspec:
cmd.extend(['--depth', str(depth)])
elif is_branch_or_tag:
cmd.extend(['--depth', str(depth)])
cmd.extend(['--branch', version])
else:
# only use depth if the remote object is branch or tag (i.e. fetchable)
module.warn("Ignoring depth argument. "
"Shallow clones are only available for "
"HEAD, branches, tags or in combination with refspec.")
if reference:
cmd.extend(['--reference', str(reference)])
if single_branch:
if git_version_used is None:
module.fail_json(msg='Cannot find git executable at %s' % git_path)
if git_version_used < LooseVersion('1.7.10'):
module.warn("git version '%s' is too old to use 'single-branch'. Ignoring." % git_version_used)
else:
cmd.append("--single-branch")
if is_branch_or_tag:
cmd.extend(['--branch', version])
needs_separate_git_dir_fallback = False
if separate_git_dir:
if git_version_used is None:
module.fail_json(msg='Cannot find git executable at %s' % git_path)
if git_version_used < LooseVersion('1.7.5'):
# git before 1.7.5 doesn't have separate-git-dir argument, do fallback
needs_separate_git_dir_fallback = True
else:
cmd.append('--separate-git-dir=%s' % separate_git_dir)
cmd.extend([repo, dest])
module.run_command(cmd, check_rc=True, cwd=dest_dirname)
if needs_separate_git_dir_fallback:
relocate_repo(module, result, separate_git_dir, os.path.join(dest, ".git"), dest)
if bare and remote != 'origin':
module.run_command([git_path, 'remote', 'add', remote, repo], check_rc=True, cwd=dest)
if refspec:
cmd = [git_path, 'fetch']
if depth:
cmd.extend(['--depth', str(depth)])
cmd.extend([remote, refspec])
module.run_command(cmd, check_rc=True, cwd=dest)
if verify_commit:
verify_commit_sign(git_path, module, dest, version, gpg_whitelist)
def has_local_mods(module, git_path, dest, bare):
if bare:
return False
cmd = "%s status --porcelain" % (git_path)
rc, stdout, stderr = module.run_command(cmd, cwd=dest)
lines = stdout.splitlines()
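    # ignore untracked files ('??' in porcelain status) when checking for local mods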
lines = list(filter(lambda c: not re.search('^\\?\\?.*$', c), lines))
return len(lines) > 0
def reset(git_path, module, dest):
'''
Resets the index and working tree to HEAD.
Discards any changes to tracked files in working
tree since that commit.
'''
cmd = "%s reset --hard HEAD" % (git_path,)
return module.run_command(cmd, check_rc=True, cwd=dest)
def get_diff(module, git_path, dest, repo, remote, depth, bare, before, after):
''' Return the difference between 2 versions '''
if before is None:
return {'prepared': '>> Newly checked out %s' % after}
elif before != after:
# Ensure we have the object we are referring to during git diff !
git_version_used = git_version(git_path, module)
fetch(git_path, module, repo, dest, after, remote, depth, bare, '', git_version_used)
cmd = '%s diff %s %s' % (git_path, before, after)
(rc, out, err) = module.run_command(cmd, cwd=dest)
if rc == 0 and out:
return {'prepared': out}
elif rc == 0:
return {'prepared': '>> No visual differences between %s and %s' % (before, after)}
elif err:
return {'prepared': '>> Failed to get proper diff between %s and %s:\n>> %s' % (before, after, err)}
else:
return {'prepared': '>> Failed to get proper diff between %s and %s' % (before, after)}
return {}
def get_remote_head(git_path, module, dest, version, remote, bare):
cloning = False
cwd = None
tag = False
if remote == module.params['repo']:
cloning = True
elif remote == 'file://' + os.path.expanduser(module.params['repo']):
cloning = True
else:
cwd = dest
if version == 'HEAD':
if cloning:
# cloning the repo, just get the remote's HEAD version
cmd = '%s ls-remote %s -h HEAD' % (git_path, remote)
else:
head_branch = get_head_branch(git_path, module, dest, remote, bare)
cmd = '%s ls-remote %s -h refs/heads/%s' % (git_path, remote, head_branch)
elif is_remote_branch(git_path, module, dest, remote, version):
cmd = '%s ls-remote %s -h refs/heads/%s' % (git_path, remote, version)
elif is_remote_tag(git_path, module, dest, remote, version):
tag = True
cmd = '%s ls-remote %s -t refs/tags/%s*' % (git_path, remote, version)
else:
        # appears to be a sha1; return as-is since we apparently
        # cannot check for a specific sha1 on the remote
return version
(rc, out, err) = module.run_command(cmd, check_rc=True, cwd=cwd)
if len(out) < 1:
module.fail_json(msg="Could not determine remote revision for %s" % version, stdout=out, stderr=err, rc=rc)
out = to_native(out)
if tag:
# Find the dereferenced tag if this is an annotated tag.
for tag in out.split('\n'):
if tag.endswith(version + '^{}'):
out = tag
break
elif tag.endswith(version):
out = tag
rev = out.split()[0]
return rev
def is_remote_tag(git_path, module, dest, remote, version):
cmd = '%s ls-remote %s -t refs/tags/%s' % (git_path, remote, version)
(rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest)
if to_native(version, errors='surrogate_or_strict') in out:
return True
else:
return False
def get_branches(git_path, module, dest):
branches = []
cmd = '%s branch --no-color -a' % (git_path,)
(rc, out, err) = module.run_command(cmd, cwd=dest)
if rc != 0:
module.fail_json(msg="Could not determine branch data - received %s" % out, stdout=out, stderr=err)
for line in out.split('\n'):
if line.strip():
branches.append(line.strip())
return branches
def get_annotated_tags(git_path, module, dest):
tags = []
cmd = [git_path, 'for-each-ref', 'refs/tags/', '--format', '%(objecttype):%(refname:short)']
(rc, out, err) = module.run_command(cmd, cwd=dest)
if rc != 0:
module.fail_json(msg="Could not determine tag data - received %s" % out, stdout=out, stderr=err)
for line in to_native(out).split('\n'):
if line.strip():
tagtype, tagname = line.strip().split(':')
if tagtype == 'tag':
tags.append(tagname)
return tags
def is_remote_branch(git_path, module, dest, remote, version):
cmd = '%s ls-remote %s -h refs/heads/%s' % (git_path, remote, version)
(rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest)
if to_native(version, errors='surrogate_or_strict') in out:
return True
else:
return False
def is_local_branch(git_path, module, dest, branch):
branches = get_branches(git_path, module, dest)
lbranch = '%s' % branch
if lbranch in branches:
return True
elif '* %s' % branch in branches:
return True
else:
return False
def is_not_a_branch(git_path, module, dest):
branches = get_branches(git_path, module, dest)
for branch in branches:
if branch.startswith('* ') and ('no branch' in branch or 'detached from' in branch or 'detached at' in branch):
return True
return False
def get_repo_path(dest, bare):
if bare:
repo_path = dest
else:
repo_path = os.path.join(dest, '.git')
    # Check if .git is a file. If it is, the repository's git directory is external to the working copy (e.g. we are in a
    # submodule structure).
if os.path.isfile(repo_path):
with open(repo_path, 'r') as gitfile:
data = gitfile.read()
ref_prefix, gitdir = data.rstrip().split('gitdir: ', 1)
if ref_prefix:
raise ValueError('.git file has invalid git dir reference format')
# There is a possibility the .git file to have an absolute path.
if os.path.isabs(gitdir):
repo_path = gitdir
else:
# Use original destination directory with data from .git file.
repo_path = os.path.join(dest, gitdir)
if not os.path.isdir(repo_path):
raise ValueError('%s is not a directory' % repo_path)
return repo_path
def get_head_branch(git_path, module, dest, remote, bare=False):
'''
Determine what branch HEAD is associated with. This is partly
taken from lib/ansible/utils/__init__.py. It finds the correct
path to .git/HEAD and reads from that file the branch that HEAD is
associated with. In the case of a detached HEAD, this will look
up the branch in .git/refs/remotes/<remote>/HEAD.
'''
try:
repo_path = get_repo_path(dest, bare)
except (IOError, ValueError) as err:
# No repo path found
# ``.git`` file does not have a valid format for detached Git dir.
module.fail_json(
msg='Current repo does not have a valid reference to a '
                'separate Git dir or it refers to an invalid path',
details=to_text(err),
)
# Read .git/HEAD for the name of the branch.
# If we're in a detached HEAD state, look up the branch associated with
# the remote HEAD in .git/refs/remotes/<remote>/HEAD
headfile = os.path.join(repo_path, "HEAD")
if is_not_a_branch(git_path, module, dest):
headfile = os.path.join(repo_path, 'refs', 'remotes', remote, 'HEAD')
branch = head_splitter(headfile, remote, module=module, fail_on_error=True)
return branch
def get_remote_url(git_path, module, dest, remote):
'''Return URL of remote source for repo.'''
command = [git_path, 'ls-remote', '--get-url', remote]
(rc, out, err) = module.run_command(command, cwd=dest)
if rc != 0:
# There was an issue getting remote URL, most likely
# command is not available in this version of Git.
return None
return to_native(out).rstrip('\n')
def set_remote_url(git_path, module, repo, dest, remote):
''' updates repo from remote sources '''
# Return if remote URL isn't changing.
remote_url = get_remote_url(git_path, module, dest, remote)
if remote_url == repo or unfrackgitpath(remote_url) == unfrackgitpath(repo):
return False
command = [git_path, 'remote', 'set-url', remote, repo]
(rc, out, err) = module.run_command(command, cwd=dest)
if rc != 0:
label = "set a new url %s for %s" % (repo, remote)
module.fail_json(msg="Failed to %s: %s %s" % (label, out, err))
# Return False if remote_url is None to maintain previous behavior
# for Git versions prior to 1.7.5 that lack required functionality.
return remote_url is not None
def fetch(git_path, module, repo, dest, version, remote, depth, bare, refspec, git_version_used, force=False):
''' updates repo from remote sources '''
set_remote_url(git_path, module, repo, dest, remote)
commands = []
fetch_str = 'download remote objects and refs'
fetch_cmd = [git_path, 'fetch']
refspecs = []
if depth:
# try to find the minimal set of refs we need to fetch to get a
# successful checkout
currenthead = get_head_branch(git_path, module, dest, remote)
if refspec:
refspecs.append(refspec)
elif version == 'HEAD':
refspecs.append(currenthead)
elif is_remote_branch(git_path, module, dest, repo, version):
if currenthead != version:
# this workaround is only needed for older git versions
# 1.8.3 is broken, 1.9.x works
# ensure that remote branch is available as both local and remote ref
refspecs.append('+refs/heads/%s:refs/heads/%s' % (version, version))
refspecs.append('+refs/heads/%s:refs/remotes/%s/%s' % (version, remote, version))
elif is_remote_tag(git_path, module, dest, repo, version):
refspecs.append('+refs/tags/' + version + ':refs/tags/' + version)
if refspecs:
# if refspecs is empty, i.e. version is neither heads nor tags
# assume it is a version hash
# fall back to a full clone, otherwise we might not be able to checkout
# version
fetch_cmd.extend(['--depth', str(depth)])
if not depth or not refspecs:
# don't try to be minimalistic but do a full clone
# also do this if depth is given, but version is something that can't be fetched directly
if bare:
refspecs = ['+refs/heads/*:refs/heads/*', '+refs/tags/*:refs/tags/*']
else:
# ensure all tags are fetched
if git_version_used >= LooseVersion('1.9'):
fetch_cmd.append('--tags')
else:
# old git versions have a bug in --tags that prevents updating existing tags
commands.append((fetch_str, fetch_cmd + [remote]))
refspecs = ['+refs/tags/*:refs/tags/*']
if refspec:
refspecs.append(refspec)
if force:
fetch_cmd.append('--force')
fetch_cmd.extend([remote])
commands.append((fetch_str, fetch_cmd + refspecs))
for (label, command) in commands:
(rc, out, err) = module.run_command(command, cwd=dest)
if rc != 0:
module.fail_json(msg="Failed to %s: %s %s" % (label, out, err), cmd=command)
def submodules_fetch(git_path, module, remote, track_submodules, dest):
changed = False
if not os.path.exists(os.path.join(dest, '.gitmodules')):
# no submodules
return changed
gitmodules_file = open(os.path.join(dest, '.gitmodules'), 'r')
for line in gitmodules_file:
# Check for new submodules
if not changed and line.strip().startswith('path'):
path = line.split('=', 1)[1].strip()
# Check that dest/path/.git exists
if not os.path.exists(os.path.join(dest, path, '.git')):
changed = True
# Check for updates to existing modules
if not changed:
# Fetch updates
begin = get_submodule_versions(git_path, module, dest)
cmd = [git_path, 'submodule', 'foreach', git_path, 'fetch']
(rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest)
if rc != 0:
module.fail_json(msg="Failed to fetch submodules: %s" % out + err)
if track_submodules:
# Compare against submodule HEAD
# FIXME: determine this from .gitmodules
version = 'master'
after = get_submodule_versions(git_path, module, dest, '%s/%s' % (remote, version))
if begin != after:
changed = True
else:
# Compare against the superproject's expectation
cmd = [git_path, 'submodule', 'status']
(rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest)
if rc != 0:
module.fail_json(msg='Failed to retrieve submodule status: %s' % out + err)
for line in out.splitlines():
if line[0] != ' ':
changed = True
break
return changed
def submodule_update(git_path, module, dest, track_submodules, force=False):
''' init and update any submodules '''
# get the valid submodule params
params = get_submodule_update_params(module, git_path, dest)
# skip submodule commands if .gitmodules is not present
if not os.path.exists(os.path.join(dest, '.gitmodules')):
return (0, '', '')
cmd = [git_path, 'submodule', 'sync']
(rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest)
if 'remote' in params and track_submodules:
cmd = [git_path, 'submodule', 'update', '--init', '--recursive', '--remote']
else:
cmd = [git_path, 'submodule', 'update', '--init', '--recursive']
if force:
cmd.append('--force')
(rc, out, err) = module.run_command(cmd, cwd=dest)
if rc != 0:
module.fail_json(msg="Failed to init/update submodules: %s" % out + err)
return (rc, out, err)
def set_remote_branch(git_path, module, dest, remote, version, depth):
"""set refs for the remote branch version
This assumes the branch does not yet exist locally and is therefore also not checked out.
Can't use git remote set-branches, as it is not available in git 1.7.1 (centos6)
"""
branchref = "+refs/heads/%s:refs/heads/%s" % (version, version)
branchref += ' +refs/heads/%s:refs/remotes/%s/%s' % (version, remote, version)
cmd = "%s fetch --depth=%s %s %s" % (git_path, depth, remote, branchref)
(rc, out, err) = module.run_command(cmd, cwd=dest)
if rc != 0:
module.fail_json(msg="Failed to fetch branch from remote: %s" % version, stdout=out, stderr=err, rc=rc)
def switch_version(git_path, module, dest, remote, version, verify_commit, depth, gpg_whitelist):
cmd = ''
if version == 'HEAD':
branch = get_head_branch(git_path, module, dest, remote)
(rc, out, err) = module.run_command("%s checkout --force %s" % (git_path, branch), cwd=dest)
if rc != 0:
module.fail_json(msg="Failed to checkout branch %s" % branch,
stdout=out, stderr=err, rc=rc)
cmd = "%s reset --hard %s/%s --" % (git_path, remote, branch)
else:
# FIXME check for local_branch first, should have been fetched already
if is_remote_branch(git_path, module, dest, remote, version):
if depth and not is_local_branch(git_path, module, dest, version):
# git clone --depth implies --single-branch, which makes
# the checkout fail if the version changes
# fetch the remote branch, to be able to check it out next
set_remote_branch(git_path, module, dest, remote, version, depth)
if not is_local_branch(git_path, module, dest, version):
cmd = "%s checkout --track -b %s %s/%s" % (git_path, version, remote, version)
else:
(rc, out, err) = module.run_command("%s checkout --force %s" % (git_path, version), cwd=dest)
if rc != 0:
module.fail_json(msg="Failed to checkout branch %s" % version, stdout=out, stderr=err, rc=rc)
cmd = "%s reset --hard %s/%s" % (git_path, remote, version)
else:
cmd = "%s checkout --force %s" % (git_path, version)
(rc, out1, err1) = module.run_command(cmd, cwd=dest)
if rc != 0:
if version != 'HEAD':
module.fail_json(msg="Failed to checkout %s" % (version),
stdout=out1, stderr=err1, rc=rc, cmd=cmd)
else:
module.fail_json(msg="Failed to checkout branch %s" % (branch),
stdout=out1, stderr=err1, rc=rc, cmd=cmd)
if verify_commit:
verify_commit_sign(git_path, module, dest, version, gpg_whitelist)
return (rc, out1, err1)
def verify_commit_sign(git_path, module, dest, version, gpg_whitelist):
if version in get_annotated_tags(git_path, module, dest):
git_sub = "verify-tag"
else:
git_sub = "verify-commit"
cmd = "%s %s %s" % (git_path, git_sub, version)
if gpg_whitelist:
cmd += " --raw"
(rc, out, err) = module.run_command(cmd, cwd=dest)
if rc != 0:
module.fail_json(msg='Failed to verify GPG signature of commit/tag "%s"' % version, stdout=out, stderr=err, rc=rc)
if gpg_whitelist:
fingerprint = get_gpg_fingerprint(err)
if fingerprint not in gpg_whitelist:
module.fail_json(msg='The gpg_whitelist does not include the public key "%s" for this commit' % fingerprint, stdout=out, stderr=err, rc=rc)
return (rc, out, err)
def get_gpg_fingerprint(output):
"""Return a fingerprint of the primary key.
Ref:
https://git.gnupg.org/cgi-bin/gitweb.cgi?p=gnupg.git;a=blob;f=doc/DETAILS;hb=HEAD#l482
"""
    for line in output.splitlines():
        data = line.split()
        if len(data) < 2 or data[1] != 'VALIDSIG':
            continue
        # if signed with a subkey, the line carries an extra trailing field
        # with the primary key fingerprint; otherwise use the signing key
        # fingerprint itself (index 2)
        data_id = 11 if len(data) >= 12 else 2
        return data[data_id]
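# Illustrative example (field layout per the GnuPG DETAILS document linked
# above):
#   [GNUPG:] VALIDSIG <sig-fpr> <sig-date> <sig-ts> <expire-ts> <version>
#       <reserved> <pubkey-algo> <hash-algo> <sig-class> [<primary-key-fpr>]
# data[2] is the signing-key fingerprint; the optional final field is the
# primary key fingerprint when the signature was made with a subkey.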
def git_version(git_path, module):
"""return the installed version of git"""
cmd = "%s --version" % git_path
(rc, out, err) = module.run_command(cmd)
if rc != 0:
# one could fail_json here, but the version info is not that important,
# so let's try to fail only on actual git commands
return None
rematch = re.search('git version (.*)$', to_native(out))
if not rematch:
return None
return LooseVersion(rematch.groups()[0])
def git_archive(git_path, module, dest, archive, archive_fmt, archive_prefix, version):
""" Create git archive in given source directory """
cmd = [git_path, 'archive', '--format', archive_fmt, '--output', archive, version]
if archive_prefix is not None:
cmd.insert(-1, '--prefix')
cmd.insert(-1, archive_prefix)
(rc, out, err) = module.run_command(cmd, cwd=dest)
if rc != 0:
module.fail_json(msg="Failed to perform archive operation",
details="Git archive command failed to create "
"archive %s using %s directory."
"Error: %s" % (archive, dest, err))
return rc, out, err
def create_archive(git_path, module, dest, archive, archive_prefix, version, repo, result):
""" Helper function for creating archive using git_archive """
all_archive_fmt = {'.zip': 'zip', '.gz': 'tar.gz', '.tar': 'tar',
'.tgz': 'tgz'}
dummy, archive_ext = os.path.splitext(archive)
archive_fmt = all_archive_fmt.get(archive_ext, None)
if archive_fmt is None:
module.fail_json(msg="Unable to get file extension from "
"archive file name : %s" % archive,
details="Please specify archive as filename with "
"extension. File extension can be one "
"of ['tar', 'tar.gz', 'zip', 'tgz']")
repo_name = repo.split("/")[-1].replace(".git", "")
if os.path.exists(archive):
# If git archive file exists, then compare it with new git archive file.
# if match, do nothing
# if does not match, then replace existing with temp archive file.
tempdir = tempfile.mkdtemp()
new_archive_dest = os.path.join(tempdir, repo_name)
new_archive = new_archive_dest + '.' + archive_fmt
git_archive(git_path, module, dest, new_archive, archive_fmt, archive_prefix, version)
# filecmp is supposed to be more efficient than an md5sum checksum
if filecmp.cmp(new_archive, archive):
result.update(changed=False)
# Cleanup before exiting
try:
shutil.rmtree(tempdir)
except OSError:
pass
else:
try:
shutil.move(new_archive, archive)
shutil.rmtree(tempdir)
result.update(changed=True)
except OSError as e:
module.fail_json(msg="Failed to move %s to %s" %
(new_archive, archive),
details=u"Error occurred while moving : %s"
% to_text(e))
else:
# Perform archive from local directory
git_archive(git_path, module, dest, archive, archive_fmt, archive_prefix, version)
result.update(changed=True)
# ===========================================
def main():
module = AnsibleModule(
argument_spec=dict(
dest=dict(type='path'),
repo=dict(required=True, aliases=['name']),
version=dict(default='HEAD'),
remote=dict(default='origin'),
refspec=dict(default=None),
reference=dict(default=None),
force=dict(default='no', type='bool'),
depth=dict(default=None, type='int'),
clone=dict(default='yes', type='bool'),
update=dict(default='yes', type='bool'),
verify_commit=dict(default='no', type='bool'),
gpg_whitelist=dict(default=[], type='list', elements='str'),
accept_hostkey=dict(default='no', type='bool'),
accept_newhostkey=dict(default='no', type='bool'),
key_file=dict(default=None, type='path', required=False),
ssh_opts=dict(default=None, required=False),
executable=dict(default=None, type='path'),
bare=dict(default='no', type='bool'),
recursive=dict(default='yes', type='bool'),
single_branch=dict(default=False, type='bool'),
track_submodules=dict(default='no', type='bool'),
umask=dict(default=None, type='raw'),
archive=dict(type='path'),
archive_prefix=dict(),
separate_git_dir=dict(type='path'),
),
mutually_exclusive=[('separate_git_dir', 'bare'), ('accept_hostkey', 'accept_newhostkey')],
required_by={'archive_prefix': ['archive']},
supports_check_mode=True
)
dest = module.params['dest']
repo = module.params['repo']
version = module.params['version']
remote = module.params['remote']
refspec = module.params['refspec']
force = module.params['force']
depth = module.params['depth']
update = module.params['update']
allow_clone = module.params['clone']
bare = module.params['bare']
verify_commit = module.params['verify_commit']
gpg_whitelist = module.params['gpg_whitelist']
reference = module.params['reference']
single_branch = module.params['single_branch']
git_path = module.params['executable'] or module.get_bin_path('git', True)
key_file = module.params['key_file']
ssh_opts = module.params['ssh_opts']
umask = module.params['umask']
archive = module.params['archive']
archive_prefix = module.params['archive_prefix']
separate_git_dir = module.params['separate_git_dir']
result = dict(changed=False, warnings=list())
if module.params['accept_hostkey']:
if ssh_opts is not None:
if ("-o StrictHostKeyChecking=no" not in ssh_opts) and ("-o StrictHostKeyChecking=accept-new" not in ssh_opts):
ssh_opts += " -o StrictHostKeyChecking=no"
else:
ssh_opts = "-o StrictHostKeyChecking=no"
if module.params['accept_newhostkey']:
if not ssh_supports_acceptnewhostkey(module):
module.warn("Your ssh client does not support accept_newhostkey option, therefore it cannot be used.")
else:
if ssh_opts is not None:
if ("-o StrictHostKeyChecking=no" not in ssh_opts) and ("-o StrictHostKeyChecking=accept-new" not in ssh_opts):
ssh_opts += " -o StrictHostKeyChecking=accept-new"
else:
ssh_opts = "-o StrictHostKeyChecking=accept-new"
# evaluate and set the umask before doing anything else
if umask is not None:
if not isinstance(umask, string_types):
module.fail_json(msg="umask must be defined as a quoted octal integer")
try:
umask = int(umask, 8)
except Exception:
module.fail_json(msg="umask must be an octal integer",
details=to_text(sys.exc_info()[1]))
os.umask(umask)
# Certain features such as depth require a file:/// protocol for path based urls
# so force a protocol here ...
if os.path.expanduser(repo).startswith('/'):
repo = 'file://' + os.path.expanduser(repo)
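    # For example, a hypothetical repo value of '/srv/git/project.git'
    # becomes 'file:///srv/git/project.git' here, so options such as depth
    # keep working for path-based URLs.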
# We screenscrape a huge amount of git commands so use C locale anytime we
# call run_command()
locale = get_best_parsable_locale(module)
module.run_command_environ_update = dict(LANG=locale, LC_ALL=locale, LC_MESSAGES=locale, LC_CTYPE=locale)
if separate_git_dir:
separate_git_dir = os.path.realpath(separate_git_dir)
gitconfig = None
if not dest and allow_clone:
module.fail_json(msg="the destination directory must be specified unless clone=no")
elif dest:
dest = os.path.abspath(dest)
try:
repo_path = get_repo_path(dest, bare)
if separate_git_dir and os.path.exists(repo_path) and separate_git_dir != repo_path:
result['changed'] = True
if not module.check_mode:
relocate_repo(module, result, separate_git_dir, repo_path, dest)
repo_path = separate_git_dir
except (IOError, ValueError) as err:
# No repo path found
# ``.git`` file does not have a valid format for detached Git dir.
module.fail_json(
msg='Current repo does not have a valid reference to a '
'separate Git dir, or it refers to an invalid path',
details=to_text(err),
)
gitconfig = os.path.join(repo_path, 'config')
# the git interface changes between versions, so we need the version in use to make decisions
git_version_used = git_version(git_path, module)
# GIT_SSH=<path> as an environment variable, might create sh wrapper script for older versions.
set_git_ssh_env(key_file, ssh_opts, git_version_used, module)
if depth is not None and git_version_used < LooseVersion('1.9.1'):
module.warn("git version is too old to fully support the depth argument. Falling back to full checkouts.")
depth = None
recursive = module.params['recursive']
track_submodules = module.params['track_submodules']
result.update(before=None)
local_mods = False
if (dest and not os.path.exists(gitconfig)) or (not dest and not allow_clone):
# if there is no git configuration, do a clone operation unless:
# * the user requested no clone (they just want info)
# * we're doing a check mode test
# In those cases we do an ls-remote
if module.check_mode or not allow_clone:
remote_head = get_remote_head(git_path, module, dest, version, repo, bare)
result.update(changed=True, after=remote_head)
if module._diff:
diff = get_diff(module, git_path, dest, repo, remote, depth, bare, result['before'], result['after'])
if diff:
result['diff'] = diff
module.exit_json(**result)
# there's no git config, so clone
clone(git_path, module, repo, dest, remote, depth, version, bare, reference,
refspec, git_version_used, verify_commit, separate_git_dir, result, gpg_whitelist, single_branch)
elif not update:
# Just return having found a repo already in the dest path
# this does no checking that the repo is the actual repo
# requested.
result['before'] = get_version(module, git_path, dest)
result.update(after=result['before'])
if archive:
# Git archive is not supported by all git servers, so
# we will first clone and perform git archive from local directory
if module.check_mode:
result.update(changed=True)
module.exit_json(**result)
create_archive(git_path, module, dest, archive, archive_prefix, version, repo, result)
module.exit_json(**result)
else:
# else do a pull
local_mods = has_local_mods(module, git_path, dest, bare)
result['before'] = get_version(module, git_path, dest)
if local_mods:
# failure should happen regardless of check mode
if not force:
module.fail_json(msg="Local modifications exist in the destination: " + dest + " (force=no).", **result)
# if force and in non-check mode, do a reset
if not module.check_mode:
reset(git_path, module, dest)
result.update(changed=True, msg='Local modifications exist in the destination: ' + dest)
# exit if already at desired sha version
if module.check_mode:
remote_url = get_remote_url(git_path, module, dest, remote)
remote_url_changed = remote_url and remote_url != repo and unfrackgitpath(remote_url) != unfrackgitpath(repo)
else:
remote_url_changed = set_remote_url(git_path, module, repo, dest, remote)
result.update(remote_url_changed=remote_url_changed)
if module.check_mode:
remote_head = get_remote_head(git_path, module, dest, version, remote, bare)
result.update(changed=(result['before'] != remote_head or remote_url_changed), after=remote_head)
# FIXME: This diff should fail since the new remote_head is not fetched yet?!
if module._diff:
diff = get_diff(module, git_path, dest, repo, remote, depth, bare, result['before'], result['after'])
if diff:
result['diff'] = diff
module.exit_json(**result)
else:
fetch(git_path, module, repo, dest, version, remote, depth, bare, refspec, git_version_used, force=force)
result['after'] = get_version(module, git_path, dest)
# switch to version specified regardless of whether
# we got new revisions from the repository
if not bare:
switch_version(git_path, module, dest, remote, version, verify_commit, depth, gpg_whitelist)
# Deal with submodules
submodules_updated = False
if recursive and not bare:
submodules_updated = submodules_fetch(git_path, module, remote, track_submodules, dest)
if submodules_updated:
result.update(submodules_changed=submodules_updated)
if module.check_mode:
result.update(changed=True, after=remote_head)
module.exit_json(**result)
# Switch to version specified
submodule_update(git_path, module, dest, track_submodules, force=force)
# determine if we changed anything
result['after'] = get_version(module, git_path, dest)
if result['before'] != result['after'] or local_mods or submodules_updated or remote_url_changed:
result.update(changed=True)
if module._diff:
diff = get_diff(module, git_path, dest, repo, remote, depth, bare, result['before'], result['after'])
if diff:
result['diff'] = diff
if archive:
# Git archive is not supported by all git servers, so
# we will first clone and perform git archive from local directory
if module.check_mode:
result.update(changed=True)
module.exit_json(**result)
create_archive(git_path, module, dest, archive, archive_prefix, version, repo, result)
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,975 |
inconsistency between systemd checking with service_facts and service modules
|
### Summary
There is some inconsistent code for checking if systemd exists between the service module and the service_facts module:
https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/service.py#L480
and
https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/service_facts.py#L246
This causes some odd behavior when using the docker systemctl replacement script: https://github.com/gdraheim/docker-systemctl-replacement
The advice when using this script is to create the canary directory /run/systemd/system. The service_facts module still collects no service facts, however, because the canary-folder check is not present there; it goes on to inspect the contents of /proc/1/comm, which in this situation is "systemctl" instead of "systemd".
Ideally both modules would use exactly the same code to check for systemd. To make this work with the docker systemctl replacement script, it would be great if the canary-folder check could be added to the service_facts module.
Note: docker-systemctl-replacement is a recommended way to run systemctl commands inside containers, such as for Molecule testing, since systemd inside containers is difficult to configure correctly (I can attest to this!). We use Molecule to validate our AWS AMI Packer builds, which use Ansible; since we are creating machine images, our unit tests have to interact with systemctl to verify that a service was installed and running, etc.
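A minimal sketch of what a shared helper could look like (illustrative only; the function name and placement are assumptions, not the actual patch):
```python
import os

def is_systemd_managed():
    # Mirror systemd's sd_booted() test: if any canary directory
    # exists, systemd is the running service manager.
    for canary in ("/run/systemd/system/", "/dev/.run/systemd/", "/dev/.systemd/"):
        if os.path.exists(canary):
            return True
    # Fall back to inspecting PID 1; comm is used because cmdline
    # could be a symlink.
    try:
        with open('/proc/1/comm') as f:
            return 'systemd' in f.read()
    except IOError:
        # No /proc/1/comm means an old kernel, hence no systemd.
        return False
```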
### Issue Type
Bug Report
### Component Name
lib/ansible/modules/service_facts.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /root/.local/lib/python3.9/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.9.16 (main, Dec 21 2022, 10:57:18) [GCC 8.5.0 20210514 (Red Hat 8.5.0-17)] (/usr/bin/python3.9)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /etc/ansible/ansible.cfg
DEFAULT_VAULT_PASSWORD_FILE(env: ANSIBLE_VAULT_PASSWORD_FILE) = /root/.ansible/.pvault
```
### OS / Environment
# cat /etc/os-release
NAME="Red Hat Enterprise Linux"
VERSION="8.8 (Ootpa)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="8.8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Red Hat Enterprise Linux 8.8 (Ootpa)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:8::baseos"
HOME_URL="https://www.redhat.com/"
DOCUMENTATION_URL="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_BUGZILLA_PRODUCT_VERSION=8.8
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="8.8"
### Steps to Reproduce
create a container that uses the docker-systemctl-replacement script:
```
# wget https://raw.githubusercontent.com/gdraheim/docker-systemctl-replacement/master/files/docker/systemctl.py
# then add this to Dockerfile
COPY systemctl.py /usr/bin/systemctl
RUN chmod 755 /usr/bin/systemctl
RUN mkdir /run/systemd/system/
CMD ["/usr/bin/systemctl"]
```
Build the container and run it; /proc/1/comm is now "systemctl" instead of "systemd", but the canary dir /run/systemd/system is present per https://github.com/gdraheim/docker-systemctl-replacement/blob/master/SERVICE-MANAGER.md
From within the container, try to use the service_facts module:
```
- name: "Collect facts about system services."
service_facts:
register: services_state
- debug:
msg:
"service_facts: ": "{{ services_state }}"
```
output:
```
TASK [Collect facts about system services.] ************************************
skipping: [aws-amzn2-gold-ami]
TASK [debug] *******************************************************************
ok: [aws-amzn2-gold-ami] => {
"msg": {
"service_facts: ": {
"changed": false,
"failed": false,
"msg": "Failed to find any services. This can be due to privileges or some other configuration issue.",
"skipped": true
}
}
}
```
### Expected Results
If I add the check for the canary folders to service_facts.py I get the expected results:
```
TASK [Collect facts about system services.] ************************************
ok: [aws-amzn2-gold-ami]
TASK [debug] *******************************************************************
ok: [aws-amzn2-gold-ami] => {
"msg": {
"service_facts: ": {
"ansible_facts": {
"services": {
"README.service": {
"name": "README.service",
"source": "systemd",
"state": "stopped",
"status": "disabled"
},
"amazon-cloudwatch-agent.service": {
"name": "amazon-cloudwatch-agent.service",
"source": "systemd",
"state": "stopped",
"status": "enabled"
},
<snip>
```
### Actual Results
```console
TASK [Collect facts about system services.] ************************************
skipping: [aws-amzn2-gold-ami]
TASK [debug] *******************************************************************
ok: [aws-amzn2-gold-ami] => {
"msg": {
"service_facts: ": {
"changed": false,
"failed": false,
"msg": "Failed to find any services. This can be due to privileges or some other configuration issue.",
"skipped": true
}
}
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80975
|
https://github.com/ansible/ansible/pull/81809
|
bf29458726496ee759f515cefe9e91fc26a533bd
|
e8ef6b7d7c6fb0ee2b08107f2a79ed747c56b86b
| 2023-06-05T21:03:11Z |
python
| 2023-10-26T02:09:46Z |
changelogs/fragments/80975-systemd-detect.yml
| |
lib/ansible/module_utils/service.py
|
# This code is part of Ansible, but is an independent component.
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by Ansible
# still belong to the author of the module, and may assign their own license
# to the complete work.
#
# Copyright (c) Ansible Inc, 2016
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from __future__ import annotations
import glob
import os
import pickle
import platform
import select
import shlex
import subprocess
import traceback
from ansible.module_utils.six import PY2, b
from ansible.module_utils.common.text.converters import to_bytes, to_text
def sysv_is_enabled(name, runlevel=None):
'''
This function will check if the service name supplied
is enabled in any of the sysv runlevels
:arg name: name of the service to test for
:kw runlevel: runlevel to check (default: None)
'''
if runlevel:
if not os.path.isdir('/etc/rc0.d/'):
return bool(glob.glob('/etc/init.d/rc%s.d/S??%s' % (runlevel, name)))
return bool(glob.glob('/etc/rc%s.d/S??%s' % (runlevel, name)))
else:
if not os.path.isdir('/etc/rc0.d/'):
return bool(glob.glob('/etc/init.d/rc?.d/S??%s' % name))
return bool(glob.glob('/etc/rc?.d/S??%s' % name))
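# Illustrative example (SysV convention, not part of the original source): a
# service enabled in runlevel 3 has a start symlink such as
# /etc/rc3.d/S85httpd, which the globs above match.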
def get_sysv_script(name):
'''
This function will return the expected path for an init script
corresponding to the service name supplied.
:arg name: name or path of the service to test for
'''
if name.startswith('/'):
result = name
else:
result = '/etc/init.d/%s' % name
return result
def sysv_exists(name):
'''
This function will return True or False depending on
the existence of an init script corresponding to the service name supplied.
:arg name: name of the service to test for
'''
return os.path.exists(get_sysv_script(name))
def get_ps(module, pattern):
'''
Last resort to find a service by trying to match pattern to programs in memory
'''
found = False
if platform.system() == 'SunOS':
flags = '-ef'
else:
flags = 'auxww'
psbin = module.get_bin_path('ps', True)
(rc, psout, pserr) = module.run_command([psbin, flags])
if rc == 0:
for line in psout.splitlines():
if pattern in line:
# FIXME: should add logic to prevent matching 'self', though that should be extremely rare
found = True
break
return found
def fail_if_missing(module, found, service, msg=''):
'''
This function will return an error or exit gracefully depending on check mode status
and if the service is missing or not.
:arg module: is an AnsibleModule object, used for its utility methods
:arg found: boolean indicating if services was found or not
:arg service: name of service
:kw msg: extra info to append to error/success msg when missing
'''
if not found:
module.fail_json(msg='Could not find the requested service %s: %s' % (service, msg))
def fork_process():
'''
This function performs the double fork process to detach from the
parent process and execute.
'''
pid = os.fork()
if pid == 0:
# Set stdin/stdout/stderr to /dev/null
fd = os.open(os.devnull, os.O_RDWR)
# clone stdin/out/err
for num in range(3):
if fd != num:
os.dup2(fd, num)
# close otherwise
if fd not in range(3):
os.close(fd)
# Make us a daemon
pid = os.fork()
# end if not in child
if pid > 0:
os._exit(0)
# get new process session and detach
sid = os.setsid()
if sid == -1:
raise Exception("Unable to detach session while daemonizing")
# avoid possible problems with cwd being removed
os.chdir("/")
pid = os.fork()
if pid > 0:
os._exit(0)
return pid
def daemonize(module, cmd):
'''
Execute a command while detaching as a daemon, returns rc, stdout, and stderr.
:arg module: is an AnsibleModule object, used for its utility methods
:arg cmd: is a list or string representing the command and options to run
This is complex because daemonization is hard for people.
What we do is daemonize a part of this module, the daemon runs the command,
picks up the return code and output, and returns it to the main process.
'''
# init some vars
chunk = 4096 # FIXME: pass in as arg?
errors = 'surrogate_or_strict'
# start it!
try:
pipe = os.pipe()
pid = fork_process()
except OSError:
module.fail_json(msg="Error while attempting to fork: %s", exception=traceback.format_exc())
except Exception as exc:
module.fail_json(msg=to_text(exc), exception=traceback.format_exc())
# we don't do any locking as this should be a unique module/process
if pid == 0:
os.close(pipe[0])
# if command is string deal with py2 vs py3 conversions for shlex
if not isinstance(cmd, list):
if PY2:
cmd = shlex.split(to_bytes(cmd, errors=errors))
else:
cmd = shlex.split(to_text(cmd, errors=errors))
# make sure we always use byte strings
run_cmd = []
for c in cmd:
run_cmd.append(to_bytes(c, errors=errors))
# execute the command in forked process
p = subprocess.Popen(run_cmd, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE, preexec_fn=lambda: os.close(pipe[1]))
fds = [p.stdout, p.stderr]
# loop reading output till it is done
output = {p.stdout: b(""), p.stderr: b("")}
while fds:
rfd, wfd, efd = select.select(fds, [], fds, 1)
if (rfd + wfd + efd) or p.poll() is None:
for out in list(fds):
if out in rfd:
data = os.read(out.fileno(), chunk)
if data:
output[out] += to_bytes(data, errors=errors)
else:
fds.remove(out)
else:
break
# even after fds close, we might want to wait for pid to die
p.wait()
# Return a pickled data of parent
return_data = pickle.dumps([p.returncode, to_text(output[p.stdout]), to_text(output[p.stderr])], protocol=pickle.HIGHEST_PROTOCOL)
os.write(pipe[1], to_bytes(return_data, errors=errors))
# clean up
os.close(pipe[1])
os._exit(0)
elif pid == -1:
module.fail_json(msg="Unable to fork, no exception thrown, probably due to lack of resources, check logs.")
else:
# in parent
os.close(pipe[1])
os.waitpid(pid, 0)
# Grab response data after child finishes
return_data = b("")
while True:
rfd, wfd, efd = select.select([pipe[0]], [], [pipe[0]])
if pipe[0] in rfd:
data = os.read(pipe[0], chunk)
if not data:
break
return_data += to_bytes(data, errors=errors)
# Note: no need to specify encoding on py3 as this module sends the
# pickle to itself (thus same python interpreter so we aren't mixing
# py2 and py3)
return pickle.loads(to_bytes(return_data, errors=errors))
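# Hypothetical usage sketch (not from the original source):
#   rc, out, err = daemonize(module, '/usr/sbin/myservice --start')
# The forked child runs the command and pickles [rc, stdout, stderr] back
# through the pipe; the parent unpickles and returns that list.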
def check_ps(module, pattern):
# Set ps flags
if platform.system() == 'SunOS':
psflags = '-ef'
else:
psflags = 'auxww'
# Find ps binary
psbin = module.get_bin_path('ps', True)
(rc, out, err) = module.run_command('%s %s' % (psbin, psflags))
# If rc is 0, set running as appropriate
if rc == 0:
for line in out.split('\n'):
if pattern in line:
return True
return False
|
lib/ansible/modules/service.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Michael DeHaan <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import annotations
DOCUMENTATION = r'''
---
module: service
version_added: "0.1"
short_description: Manage services
description:
- Controls services on remote hosts. Supported init systems include BSD init,
OpenRC, SysV, Solaris SMF, systemd, upstart.
- This module acts as a proxy to the underlying service manager module. While all arguments will be passed to the
underlying module, not all modules support the same arguments. This documentation only covers the minimum intersection
of module arguments that all service manager modules support.
- This module is a proxy for multiple more specific service manager modules
(such as M(ansible.builtin.systemd) and M(ansible.builtin.sysvinit)).
This allows management of a heterogeneous environment of machines without creating a specific task for
each service manager. The module to be executed is determined by the O(use) option, which defaults to the
service manager discovered by M(ansible.builtin.setup). If M(ansible.builtin.setup) was not yet run, this module may run it.
- For Windows targets, use the M(ansible.windows.win_service) module instead.
options:
name:
description:
- Name of the service.
type: str
required: true
state:
description:
- V(started)/V(stopped) are idempotent actions that will not run
commands unless necessary.
- V(restarted) will always bounce the service.
- V(reloaded) will always reload.
- B(At least one of state and enabled are required.)
- Note that reloaded will start the service if it is not already started,
even if your chosen init system wouldn't normally.
type: str
choices: [ reloaded, restarted, started, stopped ]
sleep:
description:
- If the service is being V(restarted) then sleep this many seconds
between the stop and start command.
- This helps to work around badly-behaving init scripts that exit immediately
after signaling a process to stop.
- Not all service managers support sleep, i.e. when using systemd this setting will be ignored.
type: int
version_added: "1.3"
pattern:
description:
- If the service does not respond to the status command, name a
substring to look for as would be found in the output of the I(ps)
command as a stand-in for a status result.
- If the string is found, the service will be assumed to be started.
- While using remote hosts with systemd this setting will be ignored.
type: str
version_added: "0.7"
enabled:
description:
- Whether the service should start on boot.
- B(At least one of state and enabled are required.)
type: bool
runlevel:
description:
- For OpenRC init scripts (e.g. Gentoo) only.
- The runlevel that this service belongs to.
- While using remote hosts with systemd this setting will be ignored.
type: str
default: default
arguments:
description:
- Additional arguments provided on the command line.
- While using remote hosts with systemd this setting will be ignored.
type: str
default: ''
aliases: [ args ]
use:
description:
- The service module actually uses system specific modules, normally through auto detection; this setting can force a specific module.
- Normally it uses the value of the 'ansible_service_mgr' fact and falls back to the old 'service' module when none matching is found.
- The 'old service module' still uses autodetection and in no way does it correspond to the C(service) command.
type: str
default: auto
version_added: 2.2
extends_documentation_fragment:
- action_common_attributes
- action_common_attributes.flow
attributes:
action:
support: full
async:
support: full
bypass_host_loop:
support: none
check_mode:
details: support depends on the underlying plugin invoked
support: N/A
diff_mode:
details: support depends on the underlying plugin invoked
support: N/A
platform:
details: The support depends on the availability for the specific plugin for each platform and if fact gathering is able to detect it
platforms: all
notes:
- For AIX, group subsystem names can be used.
- The C(service) command line utility is not part of any service manager system but a convenience.
It does not have a standard implementation across systems, and this action cannot use it directly.
Though it might be used if found in certain circumstances, the detected system service manager is normally preferred.
seealso:
- module: ansible.windows.win_service
author:
- Ansible Core Team
- Michael DeHaan
'''
EXAMPLES = r'''
- name: Start service httpd, if not started
ansible.builtin.service:
name: httpd
state: started
- name: Stop service httpd, if started
ansible.builtin.service:
name: httpd
state: stopped
- name: Restart service httpd, in all cases
ansible.builtin.service:
name: httpd
state: restarted
- name: Reload service httpd, in all cases
ansible.builtin.service:
name: httpd
state: reloaded
- name: Enable service httpd, and not touch the state
ansible.builtin.service:
name: httpd
enabled: yes
- name: Start service foo, based on running process /usr/bin/foo
ansible.builtin.service:
name: foo
pattern: /usr/bin/foo
state: started
- name: Restart network service for interface eth0
ansible.builtin.service:
name: network
state: restarted
args: eth0
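# Illustrative addition (not in the original examples): the 'use' option can
# force a specific backend module.
- name: Restart service httpd, forcing the systemd backend
  ansible.builtin.service:
    name: httpd
    state: restarted
    use: systemd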
'''
RETURN = r'''#'''
import glob
import json
import os
import platform
import re
import select
import shlex
import subprocess
import tempfile
import time
# The distutils module is not shipped with SUNWPython on Solaris.
# It's in the SUNWPython-devel package which also contains development files
# that don't belong on production boxes. Since our Solaris code doesn't
# depend on LooseVersion, do not import it on Solaris.
if platform.system() != 'SunOS':
from ansible.module_utils.compat.version import LooseVersion
from ansible.module_utils.common.text.converters import to_bytes, to_text
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.common.sys_info import get_platform_subclass
from ansible.module_utils.service import fail_if_missing
from ansible.module_utils.six import PY2, b
class Service(object):
"""
This is the generic Service manipulation class that is subclassed
based on platform.
A subclass should override the following action methods:-
- get_service_tools
- service_enable
- get_service_status
- service_control
All subclasses MUST define platform and distribution (which may be None).
"""
platform = 'Generic'
distribution = None
def __new__(cls, *args, **kwargs):
new_cls = get_platform_subclass(Service)
return super(cls, new_cls).__new__(new_cls)
def __init__(self, module):
self.module = module
self.name = module.params['name']
self.state = module.params['state']
self.sleep = module.params['sleep']
self.pattern = module.params['pattern']
self.enable = module.params['enabled']
self.runlevel = module.params['runlevel']
self.changed = False
self.running = None
self.crashed = None
self.action = None
self.svc_cmd = None
self.svc_initscript = None
self.svc_initctl = None
self.enable_cmd = None
self.arguments = module.params.get('arguments', '')
self.rcconf_file = None
self.rcconf_key = None
self.rcconf_value = None
self.svc_change = False
# ===========================================
# Platform specific methods (must be replaced by subclass).
def get_service_tools(self):
self.module.fail_json(msg="get_service_tools not implemented on target platform")
def service_enable(self):
self.module.fail_json(msg="service_enable not implemented on target platform")
def get_service_status(self):
self.module.fail_json(msg="get_service_status not implemented on target platform")
def service_control(self):
self.module.fail_json(msg="service_control not implemented on target platform")
# ===========================================
# Generic methods that should be used on all platforms.
def execute_command(self, cmd, daemonize=False):
locale = get_best_parsable_locale(self.module)
lang_env = dict(LANG=locale, LC_ALL=locale, LC_MESSAGES=locale)
# Most things don't need to be daemonized
if not daemonize:
# chkconfig localizes messages and we're screen scraping so make
# sure we use the C locale
return self.module.run_command(cmd, environ_update=lang_env)
# This is complex because daemonization is hard for people.
# What we do is daemonize a part of this module, the daemon runs the
# command, picks up the return code and output, and returns it to the
# main process.
pipe = os.pipe()
pid = os.fork()
if pid == 0:
os.close(pipe[0])
# Set stdin/stdout/stderr to /dev/null
fd = os.open(os.devnull, os.O_RDWR)
if fd != 0:
os.dup2(fd, 0)
if fd != 1:
os.dup2(fd, 1)
if fd != 2:
os.dup2(fd, 2)
if fd not in (0, 1, 2):
os.close(fd)
# Make us a daemon. Yes, that's all it takes.
pid = os.fork()
if pid > 0:
os._exit(0)
os.setsid()
os.chdir("/")
pid = os.fork()
if pid > 0:
os._exit(0)
# Start the command
if PY2:
# Python 2.6's shlex.split can't handle text strings correctly
cmd = to_bytes(cmd, errors='surrogate_or_strict')
cmd = shlex.split(cmd)
else:
# Python 3.x shlex.split handles text strings natively.
cmd = to_text(cmd, errors='surrogate_or_strict')
cmd = [to_bytes(c, errors='surrogate_or_strict') for c in shlex.split(cmd)]
# In either of the above cases, pass a list of byte strings to Popen
# chkconfig localizes messages and we're screen scraping so make
# sure we use the C locale
p = subprocess.Popen(cmd, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=lang_env, preexec_fn=lambda: os.close(pipe[1]))
stdout = b("")
stderr = b("")
fds = [p.stdout, p.stderr]
# Wait for all output, or until the main process is dead and its output is done.
while fds:
rfd, wfd, efd = select.select(fds, [], fds, 1)
if not (rfd + wfd + efd) and p.poll() is not None:
break
if p.stdout in rfd:
dat = os.read(p.stdout.fileno(), 4096)
if not dat:
fds.remove(p.stdout)
stdout += dat
if p.stderr in rfd:
dat = os.read(p.stderr.fileno(), 4096)
if not dat:
fds.remove(p.stderr)
stderr += dat
p.wait()
# Return a JSON blob to parent
blob = json.dumps([p.returncode, to_text(stdout), to_text(stderr)])
os.write(pipe[1], to_bytes(blob, errors='surrogate_or_strict'))
os.close(pipe[1])
os._exit(0)
elif pid == -1:
self.module.fail_json(msg="unable to fork")
else:
os.close(pipe[1])
os.waitpid(pid, 0)
# Wait for data from daemon process and process it.
data = b("")
while True:
rfd, wfd, efd = select.select([pipe[0]], [], [pipe[0]])
if pipe[0] in rfd:
dat = os.read(pipe[0], 4096)
if not dat:
break
data += dat
return json.loads(to_text(data, errors='surrogate_or_strict'))
def check_ps(self):
# Set ps flags
if platform.system() == 'SunOS':
psflags = '-ef'
else:
psflags = 'auxww'
# Find ps binary
psbin = self.module.get_bin_path('ps', True)
(rc, psout, pserr) = self.execute_command('%s %s' % (psbin, psflags))
# If rc is 0, set running as appropriate
if rc == 0:
self.running = False
lines = psout.split("\n")
for line in lines:
if self.pattern in line and "pattern=" not in line:
# so as to not confuse ./hacking/test-module.py
self.running = True
break
def check_service_changed(self):
if self.state and self.running is None:
self.module.fail_json(msg="failed determining service state, possible typo of service name?")
# Find out if state has changed
if not self.running and self.state in ["reloaded", "started"]:
self.svc_change = True
elif self.running and self.state in ["reloaded", "stopped"]:
self.svc_change = True
elif self.state == "restarted":
self.svc_change = True
if self.module.check_mode and self.svc_change:
self.module.exit_json(changed=True, msg='service state changed')
def modify_service_state(self):
# Only do something if state will change
if self.svc_change:
# Control service
if self.state in ['started']:
self.action = "start"
elif not self.running and self.state == 'reloaded':
self.action = "start"
elif self.state == 'stopped':
self.action = "stop"
elif self.state == 'reloaded':
self.action = "reload"
elif self.state == 'restarted':
self.action = "restart"
if self.module.check_mode:
self.module.exit_json(changed=True, msg='changing service state')
return self.service_control()
else:
# If nothing needs to change just say all is well
rc = 0
err = ''
out = ''
return rc, out, err
def service_enable_rcconf(self):
if self.rcconf_file is None or self.rcconf_key is None or self.rcconf_value is None:
self.module.fail_json(msg="service_enable_rcconf() requires rcconf_file, rcconf_key and rcconf_value")
self.changed = None
entry = '%s="%s"\n' % (self.rcconf_key, self.rcconf_value)
with open(self.rcconf_file, "r") as RCFILE:
new_rc_conf = []
# Build a list containing the possibly modified file.
for rcline in RCFILE:
# Parse line removing whitespaces, quotes, etc.
rcarray = shlex.split(rcline, comments=True)
if len(rcarray) >= 1 and '=' in rcarray[0]:
(key, value) = rcarray[0].split("=", 1)
if key == self.rcconf_key:
if value.upper() == self.rcconf_value:
# Since the proper entry already exists we can stop iterating.
self.changed = False
break
else:
# We found the key but the value is wrong, replace with new entry.
rcline = entry
self.changed = True
# Add line to the list.
new_rc_conf.append(rcline.strip() + '\n')
# If we did not see any trace of our entry we need to add it.
if self.changed is None:
new_rc_conf.append(entry)
self.changed = True
if self.changed is True:
if self.module.check_mode:
self.module.exit_json(changed=True, msg="changing service enablement")
# Create a temporary file next to the current rc.conf (so we stay on the same filesystem).
# This way the replacement operation is atomic.
rcconf_dir = os.path.dirname(self.rcconf_file)
rcconf_base = os.path.basename(self.rcconf_file)
(TMP_RCCONF, tmp_rcconf_file) = tempfile.mkstemp(dir=rcconf_dir, prefix="%s-" % rcconf_base)
# Write out the contents of the list into our temporary file.
for rcline in new_rc_conf:
os.write(TMP_RCCONF, rcline.encode())
# Close temporary file.
os.close(TMP_RCCONF)
# Replace previous rc.conf.
self.module.atomic_move(tmp_rcconf_file, self.rcconf_file)
class LinuxService(Service):
"""
This is the Linux Service manipulation class - it is currently supporting
a mixture of binaries and init scripts for controlling services started at
boot, as well as for controlling the current state.
"""
platform = 'Linux'
distribution = None
def get_service_tools(self):
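        # Probe for the known service-management binaries and init scripts,
        # then select enable/control tools in decreasing order of
        # specificity: systemd, upstart, OpenRC, then SysV init scripts
        # (update-rc.d/insserv/chkconfig).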
paths = ['/sbin', '/usr/sbin', '/bin', '/usr/bin']
binaries = ['service', 'chkconfig', 'update-rc.d', 'rc-service', 'rc-update', 'initctl', 'systemctl', 'start', 'stop', 'restart', 'insserv']
initpaths = ['/etc/init.d']
location = dict()
for binary in binaries:
location[binary] = self.module.get_bin_path(binary, opt_dirs=paths)
for initdir in initpaths:
initscript = "%s/%s" % (initdir, self.name)
if os.path.isfile(initscript):
self.svc_initscript = initscript
def check_systemd():
# tools must be installed
if location.get('systemctl', False):
# this should show if systemd is the boot init system
# these mirror systemd's own sd_boot test http://www.freedesktop.org/software/systemd/man/sd_booted.html
for canary in ["/run/systemd/system/", "/dev/.run/systemd/", "/dev/.systemd/"]:
if os.path.exists(canary):
return True
# If all else fails, check if init is the systemd command, using comm as cmdline could be symlink
try:
f = open('/proc/1/comm', 'r')
except IOError:
# If comm doesn't exist, old kernel, no systemd
return False
for line in f:
if 'systemd' in line:
return True
return False
# Locate a tool to enable/disable a service
if check_systemd():
# service is managed by systemd
self.__systemd_unit = self.name
self.svc_cmd = location['systemctl']
self.enable_cmd = location['systemctl']
elif location.get('initctl', False) and os.path.exists("/etc/init/%s.conf" % self.name):
# service is managed by upstart
self.enable_cmd = location['initctl']
# set the upstart version based on the output of 'initctl version'
self.upstart_version = LooseVersion('0.0.0')
try:
version_re = re.compile(r'\(upstart (.*)\)')
rc, stdout, stderr = self.module.run_command('%s version' % location['initctl'])
if rc == 0:
res = version_re.search(stdout)
if res:
self.upstart_version = LooseVersion(res.groups()[0])
except Exception:
pass # we'll use the default of 0.0.0
self.svc_cmd = location['initctl']
elif location.get('rc-service', False):
# service is managed by OpenRC
self.svc_cmd = location['rc-service']
self.enable_cmd = location['rc-update']
return # already have service start/stop tool too!
elif self.svc_initscript:
            # service is managed with SysV init scripts
if location.get('update-rc.d', False):
# and uses update-rc.d
self.enable_cmd = location['update-rc.d']
elif location.get('insserv', None):
# and uses insserv
self.enable_cmd = location['insserv']
elif location.get('chkconfig', False):
# and uses chkconfig
self.enable_cmd = location['chkconfig']
if self.enable_cmd is None:
fail_if_missing(self.module, False, self.name, msg='host')
# If no service control tool selected yet, try to see if 'service' is available
if self.svc_cmd is None and location.get('service', False):
self.svc_cmd = location['service']
# couldn't find anything yet
if self.svc_cmd is None and not self.svc_initscript:
            self.module.fail_json(msg='cannot find \'service\' binary or init script for service; possible typo in service name? Aborting.')
if location.get('initctl', False):
self.svc_initctl = location['initctl']
def get_systemd_service_enabled(self):
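        # 'systemctl is-enabled' exits 0 when the unit reports as enabled;
        # for SysV scripts wrapped by systemd, fall back to checking the
        # rc?.d start symlinks directly.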
def sysv_exists(name):
script = '/etc/init.d/' + name
return os.access(script, os.X_OK)
def sysv_is_enabled(name):
return bool(glob.glob('/etc/rc?.d/S??' + name))
service_name = self.__systemd_unit
(rc, out, err) = self.execute_command("%s is-enabled %s" % (self.enable_cmd, service_name,))
if rc == 0:
return True
elif out.startswith('disabled'):
return False
elif sysv_exists(service_name):
return sysv_is_enabled(service_name)
else:
return False
def get_systemd_status_dict(self):
# Check status first as show will not fail if service does not exist
(rc, out, err) = self.execute_command("%s show '%s'" % (self.enable_cmd, self.__systemd_unit,))
if rc != 0:
self.module.fail_json(msg='failure %d running systemctl show for %r: %s' % (rc, self.__systemd_unit, err))
elif 'LoadState=not-found' in out:
self.module.fail_json(msg='systemd could not find the requested service "%r": %s' % (self.__systemd_unit, err))
key = None
value_buffer = []
status_dict = {}
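        # 'systemctl show' prints shell-command values (e.g. ExecStart) as
        # '{ path=... ; argv[]=... ; ... }', possibly spanning several lines;
        # those continuation lines are accumulated in value_buffer below.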
for line in out.splitlines():
if '=' in line:
if not key:
key, value = line.split('=', 1)
# systemd fields that are shell commands can be multi-line
# We take a value that begins with a "{" as the start of
# a shell command and a line that ends with "}" as the end of
# the command
if value.lstrip().startswith('{'):
if value.rstrip().endswith('}'):
status_dict[key] = value
key = None
else:
value_buffer.append(value)
else:
status_dict[key] = value
key = None
else:
if line.rstrip().endswith('}'):
status_dict[key] = '\n'.join(value_buffer)
key = None
else:
                        value_buffer.append(line)  # accumulate the continuation line, not the stale first value
else:
                value_buffer.append(line)  # continuation line of a multi-line value
return status_dict
def get_systemd_service_status(self):
d = self.get_systemd_status_dict()
if d.get('ActiveState') == 'active':
# run-once services (for which a single successful exit indicates
# that they are running as designed) should not be restarted here.
# Thus, we are not checking d['SubState'].
self.running = True
self.crashed = False
elif d.get('ActiveState') == 'failed':
self.running = False
self.crashed = True
elif d.get('ActiveState') is None:
self.module.fail_json(msg='No ActiveState value in systemctl show output for %r' % (self.__systemd_unit,))
else:
self.running = False
self.crashed = False
return self.running
def get_service_status(self):
if self.svc_cmd and self.svc_cmd.endswith('systemctl'):
return self.get_systemd_service_status()
self.action = "status"
rc, status_stdout, status_stderr = self.service_control()
# if we have decided the service is managed by upstart, we check for some additional output...
if self.svc_initctl and self.running is None:
# check the job status by upstart response
initctl_rc, initctl_status_stdout, initctl_status_stderr = self.execute_command("%s status %s %s" % (self.svc_initctl, self.name, self.arguments))
if "stop/waiting" in initctl_status_stdout:
self.running = False
elif "start/running" in initctl_status_stdout:
self.running = True
if self.svc_cmd and self.svc_cmd.endswith("rc-service") and self.running is None:
openrc_rc, openrc_status_stdout, openrc_status_stderr = self.execute_command("%s %s status" % (self.svc_cmd, self.name))
self.running = "started" in openrc_status_stdout
self.crashed = "crashed" in openrc_status_stderr
# Prefer a non-zero return code. For reference, see:
# http://refspecs.linuxbase.org/LSB_4.1.0/LSB-Core-generic/LSB-Core-generic/iniscrptact.html
if self.running is None and rc in [1, 2, 3, 4, 69]:
self.running = False
# if the job status is still not known check it by status output keywords
        # Only check keywords if there's only one line of output (some init
        # scripts will output verbosely in case of error and those can emit
        # keywords that are picked up as false positives)
if self.running is None and status_stdout.count('\n') <= 1:
# first transform the status output that could irritate keyword matching
cleanout = status_stdout.lower().replace(self.name.lower(), '')
if "stop" in cleanout:
self.running = False
elif "run" in cleanout:
self.running = not ("not " in cleanout)
elif "start" in cleanout and "not " not in cleanout:
self.running = True
elif 'could not access pid file' in cleanout:
self.running = False
elif 'is dead and pid file exists' in cleanout:
self.running = False
elif 'dead but subsys locked' in cleanout:
self.running = False
elif 'dead but pid file exists' in cleanout:
self.running = False
# if the job status is still not known and we got a zero for the
# return code, assume here that the service is running
if self.running is None and rc == 0:
self.running = True
# if the job status is still not known check it by special conditions
if self.running is None:
if self.name == 'iptables' and "ACCEPT" in status_stdout:
# iptables status command output is lame
# TODO: lookup if we can use a return code for this instead?
self.running = True
return self.running
def service_enable(self):
if self.enable_cmd is None:
self.module.fail_json(msg='cannot detect command to enable service %s, typo or init system potentially unknown' % self.name)
self.changed = True
action = None
#
# Upstart's initctl
#
if self.enable_cmd.endswith("initctl"):
def write_to_override_file(file_name, file_contents, ):
override_file = open(file_name, 'w')
override_file.write(file_contents)
override_file.close()
initpath = '/etc/init'
if self.upstart_version >= LooseVersion('0.6.7'):
manreg = re.compile(r'^manual\s*$', re.M | re.I)
config_line = 'manual\n'
else:
manreg = re.compile(r'^start on manual\s*$', re.M | re.I)
config_line = 'start on manual\n'
conf_file_name = "%s/%s.conf" % (initpath, self.name)
override_file_name = "%s/%s.override" % (initpath, self.name)
# Check to see if files contain the manual line in .conf and fail if True
with open(conf_file_name) as conf_file_fh:
conf_file_content = conf_file_fh.read()
if manreg.search(conf_file_content):
self.module.fail_json(msg="manual stanza not supported in a .conf file")
self.changed = False
if os.path.exists(override_file_name):
with open(override_file_name) as override_fh:
override_file_contents = override_fh.read()
# Remove manual stanza if present and service enabled
if self.enable and manreg.search(override_file_contents):
self.changed = True
override_state = manreg.sub('', override_file_contents)
# Add manual stanza if not present and service disabled
elif not (self.enable) and not (manreg.search(override_file_contents)):
self.changed = True
override_state = '\n'.join((override_file_contents, config_line))
# service already in desired state
else:
pass
# Add file with manual stanza if service disabled
elif not (self.enable):
self.changed = True
override_state = config_line
else:
# service already in desired state
pass
if self.module.check_mode:
self.module.exit_json(changed=self.changed)
# The initctl method of enabling and disabling services is much
# different than for the other service methods. So actually
# committing the change is done in this conditional and then we
# skip the boilerplate at the bottom of the method
if self.changed:
try:
write_to_override_file(override_file_name, override_state)
except Exception:
self.module.fail_json(msg='Could not modify override file')
return
#
# SysV's chkconfig
#
if self.enable_cmd.endswith("chkconfig"):
if self.enable:
action = 'on'
else:
action = 'off'
(rc, out, err) = self.execute_command("%s --list %s" % (self.enable_cmd, self.name))
if 'chkconfig --add %s' % self.name in err:
self.execute_command("%s --add %s" % (self.enable_cmd, self.name))
(rc, out, err) = self.execute_command("%s --list %s" % (self.enable_cmd, self.name))
if self.name not in out:
self.module.fail_json(msg="service %s does not support chkconfig" % self.name)
# TODO: look back on why this is here
# state = out.split()[-1]
# Check if we're already in the correct state
if "3:%s" % action in out and "5:%s" % action in out:
self.changed = False
return
#
# Systemd's systemctl
#
if self.enable_cmd.endswith("systemctl"):
if self.enable:
action = 'enable'
else:
action = 'disable'
# Check if we're already in the correct state
service_enabled = self.get_systemd_service_enabled()
# self.changed should already be true
if self.enable == service_enabled:
self.changed = False
return
#
# OpenRC's rc-update
#
if self.enable_cmd.endswith("rc-update"):
if self.enable:
action = 'add'
else:
action = 'delete'
(rc, out, err) = self.execute_command("%s show" % self.enable_cmd)
for line in out.splitlines():
service_name, runlevels = line.split('|')
service_name = service_name.strip()
if service_name != self.name:
continue
runlevels = re.split(r'\s+', runlevels)
# service already enabled for the runlevel
if self.enable and self.runlevel in runlevels:
self.changed = False
# service already disabled for the runlevel
elif not self.enable and self.runlevel not in runlevels:
self.changed = False
break
else:
# service already disabled altogether
if not self.enable:
self.changed = False
if not self.changed:
return
#
# update-rc.d style
#
if self.enable_cmd.endswith("update-rc.d"):
enabled = False
slinks = glob.glob('/etc/rc?.d/S??' + self.name)
if slinks:
enabled = True
if self.enable != enabled:
self.changed = True
if self.enable:
action = 'enable'
klinks = glob.glob('/etc/rc?.d/K??' + self.name)
if not klinks:
if not self.module.check_mode:
(rc, out, err) = self.execute_command("%s %s defaults" % (self.enable_cmd, self.name))
if rc != 0:
if err:
self.module.fail_json(msg=err)
else:
                                self.module.fail_json(msg=out)
else:
action = 'disable'
if not self.module.check_mode:
(rc, out, err) = self.execute_command("%s %s %s" % (self.enable_cmd, self.name, action))
if rc != 0:
if err:
self.module.fail_json(msg=err)
else:
                            self.module.fail_json(msg=out)
else:
self.changed = False
return
#
# insserv (Debian <=7, SLES, others)
#
if self.enable_cmd.endswith("insserv"):
if self.enable:
(rc, out, err) = self.execute_command("%s -n -v %s" % (self.enable_cmd, self.name))
else:
(rc, out, err) = self.execute_command("%s -n -r -v %s" % (self.enable_cmd, self.name))
self.changed = False
for line in err.splitlines():
if self.enable and line.find('enable service') != -1:
self.changed = True
break
if not self.enable and line.find('remove service') != -1:
self.changed = True
break
if self.module.check_mode:
self.module.exit_json(changed=self.changed)
if not self.changed:
return
if self.enable:
(rc, out, err) = self.execute_command("%s %s" % (self.enable_cmd, self.name))
if (rc != 0) or (err != ''):
self.module.fail_json(msg=("Failed to install service. rc: %s, out: %s, err: %s" % (rc, out, err)))
return (rc, out, err)
else:
(rc, out, err) = self.execute_command("%s -r %s" % (self.enable_cmd, self.name))
if (rc != 0) or (err != ''):
self.module.fail_json(msg=("Failed to remove service. rc: %s, out: %s, err: %s" % (rc, out, err)))
return (rc, out, err)
#
# If we've gotten to the end, the service needs to be updated
#
self.changed = True
# we change argument order depending on real binary used:
# rc-update and systemctl need the argument order reversed
if self.enable_cmd.endswith("rc-update"):
args = (self.enable_cmd, action, self.name + " " + self.runlevel)
elif self.enable_cmd.endswith("systemctl"):
args = (self.enable_cmd, action, self.__systemd_unit)
else:
args = (self.enable_cmd, self.name, action)
if self.module.check_mode:
self.module.exit_json(changed=self.changed)
(rc, out, err) = self.execute_command("%s %s %s" % args)
if rc != 0:
if err:
self.module.fail_json(msg="Error when trying to %s %s: rc=%s %s" % (action, self.name, rc, err))
else:
self.module.fail_json(msg="Failure for %s %s: rc=%s %s" % (action, self.name, rc, out))
return (rc, out, err)
def service_control(self):
# Decide what command to run
svc_cmd = ''
arguments = self.arguments
if self.svc_cmd:
if not self.svc_cmd.endswith("systemctl"):
if self.svc_cmd.endswith("initctl"):
# initctl commands take the form <cmd> <action> <name>
svc_cmd = self.svc_cmd
arguments = "%s %s" % (self.name, arguments)
else:
# SysV and OpenRC take the form <cmd> <name> <action>
svc_cmd = "%s %s" % (self.svc_cmd, self.name)
else:
# systemd commands take the form <cmd> <action> <name>
svc_cmd = self.svc_cmd
arguments = "%s %s" % (self.__systemd_unit, arguments)
elif self.svc_cmd is None and self.svc_initscript:
# upstart
svc_cmd = "%s" % self.svc_initscript
# In OpenRC, if a service crashed, we need to reset its status to
# stopped with the zap command, before we can start it back.
if self.svc_cmd and self.svc_cmd.endswith('rc-service') and self.action == 'start' and self.crashed:
self.execute_command("%s zap" % svc_cmd, daemonize=True)
if self.action != "restart":
if svc_cmd != '':
# upstart or systemd or OpenRC
rc_state, stdout, stderr = self.execute_command("%s %s %s" % (svc_cmd, self.action, arguments), daemonize=True)
else:
# SysV
rc_state, stdout, stderr = self.execute_command("%s %s %s" % (self.action, self.name, arguments), daemonize=True)
elif self.svc_cmd and self.svc_cmd.endswith('rc-service'):
# All services in OpenRC support restart.
rc_state, stdout, stderr = self.execute_command("%s %s %s" % (svc_cmd, self.action, arguments), daemonize=True)
else:
# In other systems, not all services support restart. Do it the hard way.
if svc_cmd != '':
# upstart or systemd
rc1, stdout1, stderr1 = self.execute_command("%s %s %s" % (svc_cmd, 'stop', arguments), daemonize=True)
else:
# SysV
rc1, stdout1, stderr1 = self.execute_command("%s %s %s" % ('stop', self.name, arguments), daemonize=True)
if self.sleep:
time.sleep(self.sleep)
if svc_cmd != '':
# upstart or systemd
rc2, stdout2, stderr2 = self.execute_command("%s %s %s" % (svc_cmd, 'start', arguments), daemonize=True)
else:
# SysV
rc2, stdout2, stderr2 = self.execute_command("%s %s %s" % ('start', self.name, arguments), daemonize=True)
# merge return information
if rc1 != 0 and rc2 == 0:
rc_state = rc2
stdout = stdout2
stderr = stderr2
else:
rc_state = rc1 + rc2
stdout = stdout1 + stdout2
stderr = stderr1 + stderr2
return (rc_state, stdout, stderr)
class FreeBsdService(Service):
"""
This is the FreeBSD Service manipulation class - it uses the /etc/rc.conf
file for controlling services started at boot and the 'service' binary to
check status and perform direct service manipulation.
"""
platform = 'FreeBSD'
distribution = None
def get_service_tools(self):
self.svc_cmd = self.module.get_bin_path('service', True)
if not self.svc_cmd:
self.module.fail_json(msg='unable to find service binary')
self.sysrc_cmd = self.module.get_bin_path('sysrc')
def get_service_status(self):
rc, stdout, stderr = self.execute_command("%s %s %s %s" % (self.svc_cmd, self.name, 'onestatus', self.arguments))
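        # pf does not follow the usual rc status conventions; its status
        # output reports 'Enabled'/'Disabled' rather than using the exit code.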
if self.name == "pf":
self.running = "Enabled" in stdout
else:
if rc == 1:
self.running = False
elif rc == 0:
self.running = True
def service_enable(self):
if self.enable:
self.rcconf_value = "YES"
else:
self.rcconf_value = "NO"
rcfiles = ['/etc/rc.conf', '/etc/rc.conf.local', '/usr/local/etc/rc.conf']
for rcfile in rcfiles:
if os.path.isfile(rcfile):
self.rcconf_file = rcfile
rc, stdout, stderr = self.execute_command("%s %s %s %s" % (self.svc_cmd, self.name, 'rcvar', self.arguments))
        rcvars = []
        try:
            rcvars = shlex.split(stdout, comments=True)
        except Exception:
            # TODO: add a warning to the output with the failure
            pass
if not rcvars:
self.module.fail_json(msg="unable to determine rcvar", stdout=stdout, stderr=stderr)
# In rare cases, i.e. sendmail, rcvar can return several key=value pairs
# Usually there is just one, however. In other rare cases, i.e. uwsgi,
# rcvar can return extra uncommented data that is not at all related to
# the rcvar. We will just take the first key=value pair we come across
# and hope for the best.
for rcvar in rcvars:
if '=' in rcvar:
self.rcconf_key, default_rcconf_value = rcvar.split('=', 1)
break
if self.rcconf_key is None:
self.module.fail_json(msg="unable to determine rcvar", stdout=stdout, stderr=stderr)
if self.sysrc_cmd: # FreeBSD >= 9.2
rc, current_rcconf_value, stderr = self.execute_command("%s -n %s" % (self.sysrc_cmd, self.rcconf_key))
# it can happen that rcvar is not set (case of a system coming from the ports collection)
# so we will fallback on the default
if rc != 0:
current_rcconf_value = default_rcconf_value
if current_rcconf_value.strip().upper() != self.rcconf_value:
self.changed = True
if self.module.check_mode:
self.module.exit_json(changed=True, msg="changing service enablement")
rc, change_stdout, change_stderr = self.execute_command("%s %s=\"%s\"" % (self.sysrc_cmd, self.rcconf_key, self.rcconf_value))
if rc != 0:
self.module.fail_json(msg="unable to set rcvar using sysrc", stdout=change_stdout, stderr=change_stderr)
# sysrc does not exit with code 1 on permission error => validate successful change using service(8)
rc, check_stdout, check_stderr = self.execute_command("%s %s %s" % (self.svc_cmd, self.name, "enabled"))
if self.enable != (rc == 0): # rc = 0 indicates enabled service, rc = 1 indicates disabled service
self.module.fail_json(msg="unable to set rcvar: sysrc did not change value", stdout=change_stdout, stderr=change_stderr)
else:
self.changed = False
else: # Legacy (FreeBSD < 9.2)
try:
return self.service_enable_rcconf()
except Exception:
self.module.fail_json(msg='unable to set rcvar')
def service_control(self):
if self.action == "start":
self.action = "onestart"
if self.action == "stop":
self.action = "onestop"
if self.action == "reload":
self.action = "onereload"
ret = self.execute_command("%s %s %s %s" % (self.svc_cmd, self.name, self.action, self.arguments))
if self.sleep:
time.sleep(self.sleep)
return ret
class DragonFlyBsdService(FreeBsdService):
"""
This is the DragonFly BSD Service manipulation class - it uses the /etc/rc.conf
file for controlling services started at boot and the 'service' binary to
check status and perform direct service manipulation.
"""
platform = 'DragonFly'
distribution = None
def service_enable(self):
if self.enable:
self.rcconf_value = "YES"
else:
self.rcconf_value = "NO"
rcfiles = ['/etc/rc.conf'] # Overkill?
for rcfile in rcfiles:
if os.path.isfile(rcfile):
self.rcconf_file = rcfile
self.rcconf_key = "%s" % self.name.replace("-", "_")
return self.service_enable_rcconf()
class OpenBsdService(Service):
"""
This is the OpenBSD Service manipulation class - it uses rcctl(8) or
/etc/rc.d scripts for service control. Enabling a service is
only supported if rcctl is present.
"""
platform = 'OpenBSD'
distribution = None
def get_service_tools(self):
self.enable_cmd = self.module.get_bin_path('rcctl')
if self.enable_cmd:
self.svc_cmd = self.enable_cmd
else:
rcdir = '/etc/rc.d'
rc_script = "%s/%s" % (rcdir, self.name)
if os.path.isfile(rc_script):
self.svc_cmd = rc_script
if not self.svc_cmd:
self.module.fail_json(msg='unable to find svc_cmd')
def get_service_status(self):
if self.enable_cmd:
rc, stdout, stderr = self.execute_command("%s %s %s" % (self.svc_cmd, 'check', self.name))
else:
rc, stdout, stderr = self.execute_command("%s %s" % (self.svc_cmd, 'check'))
if stderr:
self.module.fail_json(msg=stderr)
if rc == 1:
self.running = False
elif rc == 0:
self.running = True
def service_control(self):
if self.enable_cmd:
return self.execute_command("%s -f %s %s" % (self.svc_cmd, self.action, self.name), daemonize=True)
else:
return self.execute_command("%s -f %s" % (self.svc_cmd, self.action))
def service_enable(self):
if not self.enable_cmd:
return super(OpenBsdService, self).service_enable()
rc, stdout, stderr = self.execute_command("%s %s %s %s" % (self.enable_cmd, 'get', self.name, 'status'))
status_action = None
if self.enable:
if rc != 0:
status_action = "on"
elif self.enable is not None:
# should be explicit False at this point
if rc != 1:
status_action = "off"
if status_action is not None:
self.changed = True
if not self.module.check_mode:
rc, stdout, stderr = self.execute_command("%s set %s status %s" % (self.enable_cmd, self.name, status_action))
if rc != 0:
if stderr:
self.module.fail_json(msg=stderr)
else:
self.module.fail_json(msg="rcctl failed to modify service status")
class NetBsdService(Service):
"""
This is the NetBSD Service manipulation class - it uses the /etc/rc.conf
file for controlling services started at boot, check status and perform
direct service manipulation. Init scripts in /etc/rc.d are used for
controlling services (start/stop) as well as for controlling the current
state.
"""
platform = 'NetBSD'
distribution = None
def get_service_tools(self):
initpaths = ['/etc/rc.d'] # better: $rc_directories - how to get in here? Run: sh -c '. /etc/rc.conf ; echo $rc_directories'
for initdir in initpaths:
initscript = "%s/%s" % (initdir, self.name)
if os.path.isfile(initscript):
self.svc_initscript = initscript
if not self.svc_initscript:
self.module.fail_json(msg='unable to find rc.d script')
def service_enable(self):
if self.enable:
self.rcconf_value = "YES"
else:
self.rcconf_value = "NO"
rcfiles = ['/etc/rc.conf'] # Overkill?
for rcfile in rcfiles:
if os.path.isfile(rcfile):
self.rcconf_file = rcfile
self.rcconf_key = "%s" % self.name.replace("-", "_")
return self.service_enable_rcconf()
def get_service_status(self):
self.svc_cmd = "%s" % self.svc_initscript
rc, stdout, stderr = self.execute_command("%s %s" % (self.svc_cmd, 'onestatus'))
if rc == 1:
self.running = False
elif rc == 0:
self.running = True
def service_control(self):
if self.action == "start":
self.action = "onestart"
if self.action == "stop":
self.action = "onestop"
self.svc_cmd = "%s" % self.svc_initscript
return self.execute_command("%s %s" % (self.svc_cmd, self.action), daemonize=True)
class SunOSService(Service):
"""
This is the SunOS Service manipulation class - it uses the svcadm
command for controlling services, and svcs command for checking status.
It also tries to be smart about taking the service out of maintenance
state if necessary.
"""
platform = 'SunOS'
distribution = None
def get_service_tools(self):
self.svcs_cmd = self.module.get_bin_path('svcs', True)
if not self.svcs_cmd:
self.module.fail_json(msg='unable to find svcs binary')
self.svcadm_cmd = self.module.get_bin_path('svcadm', True)
if not self.svcadm_cmd:
self.module.fail_json(msg='unable to find svcadm binary')
if self.svcadm_supports_sync():
self.svcadm_sync = '-s'
else:
self.svcadm_sync = ''
def svcadm_supports_sync(self):
# Support for synchronous restart/refresh is only supported on
# Oracle Solaris >= 11.2
for line in open('/etc/release', 'r').readlines():
m = re.match(r'\s+Oracle Solaris (\d+)\.(\d+).*', line.rstrip())
            if m and tuple(int(v) for v in m.groups()) >= (11, 2):  # compare numerically; string tuples would misorder versions such as '9' vs '11'
return True
def get_service_status(self):
status = self.get_sunos_svcs_status()
# Only 'online' is considered properly running. Everything else is off
# or has some sort of problem.
if status == 'online':
self.running = True
else:
self.running = False
def get_sunos_svcs_status(self):
rc, stdout, stderr = self.execute_command("%s %s" % (self.svcs_cmd, self.name))
if rc == 1:
if stderr:
self.module.fail_json(msg=stderr)
else:
self.module.fail_json(msg=stdout)
lines = stdout.rstrip("\n").split("\n")
status = lines[-1].split(" ")[0]
# status is one of: online, offline, degraded, disabled, maintenance, uninitialized
# see man svcs(1)
return status
def service_enable(self):
# Get current service enablement status
rc, stdout, stderr = self.execute_command("%s -l %s" % (self.svcs_cmd, self.name))
if rc != 0:
if stderr:
self.module.fail_json(msg=stderr)
else:
self.module.fail_json(msg=stdout)
enabled = False
temporary = False
# look for enabled line, which could be one of:
# enabled true (temporary)
# enabled false (temporary)
# enabled true
# enabled false
for line in stdout.split("\n"):
if line.startswith("enabled"):
if "true" in line:
enabled = True
if "temporary" in line:
temporary = True
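        # A 'temporary' flag means the current enablement was toggled
        # non-persistently, so the boot-time setting is the opposite of the
        # currently reported one.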
startup_enabled = (enabled and not temporary) or (not enabled and temporary)
if self.enable and startup_enabled:
return
elif (not self.enable) and (not startup_enabled):
return
if not self.module.check_mode:
# Mark service as started or stopped (this will have the side effect of
# actually stopping or starting the service)
if self.enable:
subcmd = "enable -rs"
else:
subcmd = "disable -s"
rc, stdout, stderr = self.execute_command("%s %s %s" % (self.svcadm_cmd, subcmd, self.name))
if rc != 0:
if stderr:
self.module.fail_json(msg=stderr)
else:
self.module.fail_json(msg=stdout)
self.changed = True
def service_control(self):
status = self.get_sunos_svcs_status()
# if starting or reloading, clear maintenance states
if self.action in ['start', 'reload', 'restart'] and status in ['maintenance', 'degraded']:
rc, stdout, stderr = self.execute_command("%s clear %s" % (self.svcadm_cmd, self.name))
if rc != 0:
return rc, stdout, stderr
status = self.get_sunos_svcs_status()
if status in ['maintenance', 'degraded']:
self.module.fail_json(msg="Failed to bring service out of %s status." % status)
if self.action == 'start':
subcmd = "enable -rst"
elif self.action == 'stop':
subcmd = "disable -st"
elif self.action == 'reload':
subcmd = "refresh %s" % (self.svcadm_sync)
elif self.action == 'restart' and status == 'online':
subcmd = "restart %s" % (self.svcadm_sync)
elif self.action == 'restart' and status != 'online':
subcmd = "enable -rst"
return self.execute_command("%s %s %s" % (self.svcadm_cmd, subcmd, self.name))
class AIX(Service):
"""
This is the AIX Service (SRC) manipulation class - it uses lssrc, startsrc, stopsrc
and refresh for service control. Enabling a service is currently not supported.
Would require to add an entry in the /etc/inittab file (mkitab, chitab and rmitab
commands)
"""
platform = 'AIX'
distribution = None
def get_service_tools(self):
self.lssrc_cmd = self.module.get_bin_path('lssrc', True)
if not self.lssrc_cmd:
self.module.fail_json(msg='unable to find lssrc binary')
self.startsrc_cmd = self.module.get_bin_path('startsrc', True)
if not self.startsrc_cmd:
self.module.fail_json(msg='unable to find startsrc binary')
self.stopsrc_cmd = self.module.get_bin_path('stopsrc', True)
if not self.stopsrc_cmd:
self.module.fail_json(msg='unable to find stopsrc binary')
self.refresh_cmd = self.module.get_bin_path('refresh', True)
if not self.refresh_cmd:
self.module.fail_json(msg='unable to find refresh binary')
def get_service_status(self):
status = self.get_aix_src_status()
# Only 'active' is considered properly running. Everything else is off
# or has some sort of problem.
if status == 'active':
self.running = True
else:
self.running = False
def get_aix_src_status(self):
# Check subsystem status
rc, stdout, stderr = self.execute_command("%s -s %s" % (self.lssrc_cmd, self.name))
if rc == 1:
# If check for subsystem is not ok, check if service name is a
# group subsystem
rc, stdout, stderr = self.execute_command("%s -g %s" % (self.lssrc_cmd, self.name))
if rc == 1:
if stderr:
self.module.fail_json(msg=stderr)
else:
self.module.fail_json(msg=stdout)
else:
# Check all subsystem status, if one subsystem is not active
# the group is considered not active.
lines = stdout.splitlines()
for state in lines[1:]:
if state.split()[-1].strip() != "active":
status = state.split()[-1].strip()
break
else:
status = "active"
# status is one of: active, inoperative
return status
else:
lines = stdout.rstrip("\n").split("\n")
status = lines[-1].split(" ")[-1]
# status is one of: active, inoperative
return status
def service_control(self):
# Check if service name is a subsystem of a group subsystem
rc, stdout, stderr = self.execute_command("%s -a" % (self.lssrc_cmd))
if rc == 1:
if stderr:
self.module.fail_json(msg=stderr)
else:
self.module.fail_json(msg=stdout)
else:
lines = stdout.splitlines()
subsystems = []
groups = []
for line in lines[1:]:
subsystem = line.split()[0].strip()
group = line.split()[1].strip()
subsystems.append(subsystem)
if group:
groups.append(group)
# Define if service name parameter:
# -s subsystem or -g group subsystem
if self.name in subsystems:
srccmd_parameter = "-s"
elif self.name in groups:
srccmd_parameter = "-g"
if self.action == 'start':
srccmd = self.startsrc_cmd
elif self.action == 'stop':
srccmd = self.stopsrc_cmd
elif self.action == 'reload':
srccmd = self.refresh_cmd
elif self.action == 'restart':
self.execute_command("%s %s %s" % (self.stopsrc_cmd, srccmd_parameter, self.name))
if self.sleep:
time.sleep(self.sleep)
srccmd = self.startsrc_cmd
if self.arguments and self.action in ('start', 'restart'):
return self.execute_command("%s -a \"%s\" %s %s" % (srccmd, self.arguments, srccmd_parameter, self.name))
else:
return self.execute_command("%s %s %s" % (srccmd, srccmd_parameter, self.name))
# ===========================================
# Main control flow
def main():
module = AnsibleModule(
argument_spec=dict(
name=dict(type='str', required=True),
state=dict(type='str', choices=['started', 'stopped', 'reloaded', 'restarted']),
sleep=dict(type='int'),
pattern=dict(type='str'),
enabled=dict(type='bool'),
runlevel=dict(type='str', default='default'),
arguments=dict(type='str', default='', aliases=['args']),
),
supports_check_mode=True,
required_one_of=[['state', 'enabled']],
)
service = Service(module)
module.debug('Service instantiated - platform %s' % service.platform)
if service.distribution:
module.debug('Service instantiated - distribution %s' % service.distribution)
rc = 0
out = ''
err = ''
result = {}
result['name'] = service.name
# Find service management tools
service.get_service_tools()
# Enable/disable service startup at boot if requested
if service.module.params['enabled'] is not None:
# FIXME: ideally this should detect if we need to toggle the enablement state, though
# it's unlikely the changed handler would need to fire in this case so it's a minor thing.
service.service_enable()
result['enabled'] = service.enable
if module.params['state'] is None:
# Not changing the running state, so bail out now.
result['changed'] = service.changed
module.exit_json(**result)
result['state'] = service.state
# Collect service status
if service.pattern:
service.check_ps()
else:
service.get_service_status()
# Calculate if request will change service state
service.check_service_changed()
# Modify service state if necessary
(rc, out, err) = service.modify_service_state()
if rc != 0:
if err and "Job is already running" in err:
# upstart got confused, one such possibility is MySQL on Ubuntu 12.04
# where status may report it has no start/stop links and we could
# not get accurate status
pass
else:
if err:
module.fail_json(msg=err)
else:
module.fail_json(msg=out)
result['changed'] = service.changed | service.svc_change
if service.module.params['enabled'] is not None:
result['enabled'] = service.module.params['enabled']
if not service.module.params['state']:
status = service.get_service_status()
if status is None:
result['state'] = 'absent'
elif status is False:
result['state'] = 'started'
else:
result['state'] = 'stopped'
else:
# as we may have just bounced the service the service command may not
# report accurate state at this moment so just show what we ran
if service.module.params['state'] in ['reloaded', 'restarted', 'started']:
result['state'] = 'started'
else:
result['state'] = 'stopped'
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,975 |
inconsistency between systemd checking with service_facts and service modules
|
### Summary
There is some inconsistent code for checking if systemd exists between the service module and the service_facts module:
https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/service.py#L480
and
https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/service_facts.py#L246
This causes some odd behavior when using the docker systemctl replacement script: https://github.com/gdraheim/docker-systemctl-replacement
The advice when using this script is to create the canary directory /run/systemd/system, but when using the service_facts module no service facts are collected: the canary-folder check is not present there, so the module falls through to inspecting the contents of /proc/1/comm, which in this situation is "systemctl" instead of "systemd".
Ideally both modules would use the exact same code to check for systemd, and to make this work with the docker systemctl replacement script it would be great if the canary folder check could be added to the service_facts module. The sketch below illustrates the idea.
Note, docker-systemctl-replacement is a recommended way of using systemctl commands inside containers, such as for molecule testing, since systemd inside containers is difficult to configure correctly (I can attest to this!). We use molecule to validate our AWS AMI packer builds, which use ansible; since we are creating machine images, we have to interact with systemctl for our unit tests to verify a service was installed and running, etc.
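As a rough illustration (not the actual patch), both modules could share one helper; the sketch below assumes the canary list from service.py's check_systemd, and the helper name is hypothetical:
```python
import os

def is_systemd_managed():
    # Canary paths mirroring systemd's own sd_booted() test; these exist even
    # when PID 1 is a systemd replacement such as docker-systemctl-replacement.
    for canary in ("/run/systemd/system/", "/dev/.run/systemd/", "/dev/.systemd/"):
        if os.path.exists(canary):
            return True
    # Fall back to inspecting PID 1; comm is used as cmdline could be a symlink.
    try:
        with open('/proc/1/comm') as f:
            return 'systemd' in f.read()
    except IOError:
        # No /proc/1/comm: old kernel, no systemd.
        return False
```
Both service.py and service_facts.py could then call this single helper instead of carrying diverging copies.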
### Issue Type
Bug Report
### Component Name
lib/ansible/modules/service_facts.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /root/.local/lib/python3.9/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.9.16 (main, Dec 21 2022, 10:57:18) [GCC 8.5.0 20210514 (Red Hat 8.5.0-17)] (/usr/bin/python3.9)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /etc/ansible/ansible.cfg
DEFAULT_VAULT_PASSWORD_FILE(env: ANSIBLE_VAULT_PASSWORD_FILE) = /root/.ansible/.pvault
```
### OS / Environment
# cat /etc/os-release
NAME="Red Hat Enterprise Linux"
VERSION="8.8 (Ootpa)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="8.8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Red Hat Enterprise Linux 8.8 (Ootpa)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:8::baseos"
HOME_URL="https://www.redhat.com/"
DOCUMENTATION_URL="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_BUGZILLA_PRODUCT_VERSION=8.8
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="8.8"
### Steps to Reproduce
create a container that uses the docker-systemctl-replacement script:
```
# wget https://raw.githubusercontent.com/gdraheim/docker-systemctl-replacement/master/files/docker/systemctl.py
# then add this to Dockerfile
COPY systemctl.py /usr/bin/systemctl
RUN chmod 755 /usr/bin/systemctl
RUN mkdir /run/systemd/system/
CMD ["/usr/bin/systemctl"]
```
Build the container and run it: /proc/1/comm is now "systemctl" instead of "systemd", but the canary dir /run/systemd/system is present per https://github.com/gdraheim/docker-systemctl-replacement/blob/master/SERVICE-MANAGER.md
From within the container, try to use the service_facts module:
```
- name: "Collect facts about system services."
service_facts:
register: services_state
- debug:
msg:
"service_facts: ": "{{ services_state }}"
```
output:
```
TASK [Collect facts about system services.] ************************************
skipping: [aws-amzn2-gold-ami]
TASK [debug] *******************************************************************
ok: [aws-amzn2-gold-ami] => {
"msg": {
"service_facts: ": {
"changed": false,
"failed": false,
"msg": "Failed to find any services. This can be due to privileges or some other configuration issue.",
"skipped": true
}
}
}
```
### Expected Results
If I add the check for the canary folders to service_facts.py I get the expected results:
```
TASK [Collect facts about system services.] ************************************
ok: [aws-amzn2-gold-ami]
TASK [debug] *******************************************************************
ok: [aws-amzn2-gold-ami] => {
"msg": {
"service_facts: ": {
"ansible_facts": {
"services": {
"README.service": {
"name": "README.service",
"source": "systemd",
"state": "stopped",
"status": "disabled"
},
"amazon-cloudwatch-agent.service": {
"name": "amazon-cloudwatch-agent.service",
"source": "systemd",
"state": "stopped",
"status": "enabled"
},
<snip>
```
### Actual Results
```console
TASK [Collect facts about system services.] ************************************
skipping: [aws-amzn2-gold-ami]
TASK [debug] *******************************************************************
ok: [aws-amzn2-gold-ami] => {
"msg": {
"service_facts: ": {
"changed": false,
"failed": false,
"msg": "Failed to find any services. This can be due to privileges or some other configuration issue.",
"skipped": true
}
}
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80975
|
https://github.com/ansible/ansible/pull/81809
|
bf29458726496ee759f515cefe9e91fc26a533bd
|
e8ef6b7d7c6fb0ee2b08107f2a79ed747c56b86b
| 2023-06-05T21:03:11Z |
python
| 2023-10-26T02:09:46Z |
lib/ansible/modules/service_facts.py
|
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# originally copied from AWX's scan_services module to bring this functionality
# into Core
from __future__ import annotations
DOCUMENTATION = r'''
---
module: service_facts
short_description: Return service state information as fact data
description:
- Return service state information as fact data for various service management utilities.
version_added: "2.5"
requirements: ["Any of the following supported init systems: systemd, sysv, upstart, openrc, AIX SRC"]
extends_documentation_fragment:
- action_common_attributes
- action_common_attributes.facts
attributes:
check_mode:
support: full
diff_mode:
support: none
facts:
support: full
platform:
platforms: posix
notes:
- When accessing the RV(ansible_facts.services) facts collected by this module,
it is recommended to not use "dot notation" because services can have a C(-)
character in their name which would result in invalid "dot notation", such as
    C(ansible_facts.services.zuul-gateway). It is instead recommended to
    use the string value of the service name as the key in order to obtain
    the fact data value, for example C(ansible_facts.services['zuul-gateway']).
- AIX SRC was added in version 2.11.
author:
- Adam Miller (@maxamillion)
'''
EXAMPLES = r'''
- name: Populate service facts
ansible.builtin.service_facts:
- name: Print service facts
ansible.builtin.debug:
var: ansible_facts.services
'''
RETURN = r'''
ansible_facts:
description: Facts to add to ansible_facts about the services on the system
returned: always
type: complex
contains:
services:
description: States of the services with service name as key.
returned: always
type: list
elements: dict
contains:
source:
description:
- Init system of the service.
- One of V(rcctl), V(systemd), V(sysv), V(upstart), V(src).
returned: always
type: str
sample: sysv
state:
description:
- State of the service.
- 'This commonly includes (but is not limited to) the following: V(failed), V(running), V(stopped) or V(unknown).'
- Depending on the used init system additional states might be returned.
returned: always
type: str
sample: running
status:
description:
- State of the service.
- Either V(enabled), V(disabled), V(static), V(indirect) or V(unknown).
returned: systemd systems or RedHat/SUSE flavored sysvinit/upstart or OpenBSD
type: str
sample: enabled
name:
description: Name of the service.
returned: always
type: str
sample: arp-ethers.service
'''
import os
import platform
import re
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.locale import get_best_parsable_locale
class BaseService(object):
def __init__(self, module):
self.module = module
class ServiceScanService(BaseService):
def _list_sysvinit(self, services):
rc, stdout, stderr = self.module.run_command("%s --status-all" % self.service_path)
if rc == 4 and not os.path.exists('/etc/init.d'):
# This function is not intended to run on Red Hat but it could happen
# if `chkconfig` is not installed. `service` on RHEL9 returns rc 4
# when /etc/init.d is missing, add the extra guard of checking /etc/init.d
# instead of solely relying on rc == 4
return
if rc != 0:
self.module.warn("Unable to query 'service' tool (%s): %s" % (rc, stderr))
p = re.compile(r'^\s*\[ (?P<state>\+|\-) \]\s+(?P<name>.+)$', flags=re.M)
for match in p.finditer(stdout):
service_name = match.group('name')
if match.group('state') == "+":
service_state = "running"
else:
service_state = "stopped"
services[service_name] = {"name": service_name, "state": service_state, "source": "sysv"}
def _list_upstart(self, services):
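        # 'initctl list' lines look like, for example:
        #   rsyslog start/running, process 482
        #   mountall-net stop/waiting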
p = re.compile(r'^\s?(?P<name>.*)\s(?P<goal>\w+)\/(?P<state>\w+)(\,\sprocess\s(?P<pid>[0-9]+))?\s*$')
rc, stdout, stderr = self.module.run_command("%s list" % self.initctl_path)
if rc != 0:
self.module.warn('Unable to query upstart for service data: %s' % stderr)
else:
real_stdout = stdout.replace("\r", "")
for line in real_stdout.split("\n"):
m = p.match(line)
if not m:
continue
service_name = m.group('name')
service_goal = m.group('goal')
service_state = m.group('state')
if m.group('pid'):
pid = m.group('pid')
else:
pid = None # NOQA
payload = {"name": service_name, "state": service_state, "goal": service_goal, "source": "upstart"}
services[service_name] = payload
def _list_rh(self, services):
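        # chkconfig output is one service per line with a state per runlevel,
        # for example:
        #   crond   0:off  1:off  2:on  3:on  4:on  5:on  6:off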
p = re.compile(
r'(?P<service>.*?)\s+[0-9]:(?P<rl0>on|off)\s+[0-9]:(?P<rl1>on|off)\s+[0-9]:(?P<rl2>on|off)\s+'
r'[0-9]:(?P<rl3>on|off)\s+[0-9]:(?P<rl4>on|off)\s+[0-9]:(?P<rl5>on|off)\s+[0-9]:(?P<rl6>on|off)')
rc, stdout, stderr = self.module.run_command('%s' % self.chkconfig_path, use_unsafe_shell=True)
# Check for special cases where stdout does not fit pattern
match_any = False
for line in stdout.split('\n'):
if p.match(line):
match_any = True
if not match_any:
p_simple = re.compile(r'(?P<service>.*?)\s+(?P<rl0>on|off)')
match_any = False
for line in stdout.split('\n'):
if p_simple.match(line):
match_any = True
if match_any:
# Try extra flags " -l --allservices" needed for SLES11
rc, stdout, stderr = self.module.run_command('%s -l --allservices' % self.chkconfig_path, use_unsafe_shell=True)
elif '--list' in stderr:
# Extra flag needed for RHEL5
rc, stdout, stderr = self.module.run_command('%s --list' % self.chkconfig_path, use_unsafe_shell=True)
for line in stdout.split('\n'):
m = p.match(line)
if m:
service_name = m.group('service')
service_state = 'stopped'
service_status = "disabled"
if m.group('rl3') == 'on':
service_status = "enabled"
rc, stdout, stderr = self.module.run_command('%s %s status' % (self.service_path, service_name), use_unsafe_shell=True)
service_state = rc
if rc in (0,):
service_state = 'running'
# elif rc in (1,3):
else:
output = stderr.lower()
for x in ('root', 'permission', 'not in sudoers'):
if x in output:
                            self.module.warn('Insufficient permissions to query SysV service "%s" and its state' % service_name)
break
else:
service_state = 'stopped'
service_data = {"name": service_name, "state": service_state, "status": service_status, "source": "sysv"}
services[service_name] = service_data
def _list_openrc(self, services):
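        # 'rc-update show -v' prints 'name | runlevels' rows and
        # 'rc-status -a -s -m' prints 'name state' rows (the brackets around
        # the state are stripped with tr below).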
all_services_runlevels = {}
rc, stdout, stderr = self.module.run_command("%s -a -s -m 2>&1 | grep '^ ' | tr -d '[]'" % self.rc_status_path, use_unsafe_shell=True)
rc_u, stdout_u, stderr_u = self.module.run_command("%s show -v 2>&1 | grep '|'" % self.rc_update_path, use_unsafe_shell=True)
for line in stdout_u.split('\n'):
line_data = line.split('|')
if len(line_data) < 2:
continue
service_name = line_data[0].strip()
runlevels = line_data[1].strip()
if not runlevels:
all_services_runlevels[service_name] = None
else:
all_services_runlevels[service_name] = runlevels.split()
for line in stdout.split('\n'):
line_data = line.split()
if len(line_data) < 2:
continue
service_name = line_data[0]
service_state = line_data[1]
service_runlevels = all_services_runlevels[service_name]
service_data = {"name": service_name, "runlevels": service_runlevels, "state": service_state, "source": "openrc"}
services[service_name] = service_data
def gather_services(self):
services = {}
# find cli tools if available
self.service_path = self.module.get_bin_path("service")
self.chkconfig_path = self.module.get_bin_path("chkconfig")
self.initctl_path = self.module.get_bin_path("initctl")
self.rc_status_path = self.module.get_bin_path("rc-status")
self.rc_update_path = self.module.get_bin_path("rc-update")
# TODO: review conditionals ... they should not be this 'exclusive'
if self.service_path and self.chkconfig_path is None and self.rc_status_path is None:
self._list_sysvinit(services)
if self.initctl_path and self.chkconfig_path is None:
self._list_upstart(services)
elif self.chkconfig_path:
self._list_rh(services)
elif self.rc_status_path is not None and self.rc_update_path is not None:
self._list_openrc(services)
return services
class SystemctlScanService(BaseService):
BAD_STATES = frozenset(['not-found', 'masked', 'failed'])
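    # Unit-file states that should win over the plain active/inactive column
    # whenever they appear in the systemctl output.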
    def systemd_enabled(self):
        # Mirror the detection used by the service module: look for the canary
        # directories that indicate systemd booted the system (these mirror
        # systemd's own sd_booted test) before falling back to inspecting
        # /proc/1/comm, using comm as cmdline could be a symlink.
        for canary in ("/run/systemd/system/", "/dev/.run/systemd/", "/dev/.systemd/"):
            if os.path.exists(canary):
                return True
        try:
            f = open('/proc/1/comm', 'r')
        except IOError:
            # If comm doesn't exist, old kernel, no systemd
            return False
        for line in f:
            if 'systemd' in line:
                return True
        return False
def _list_from_units(self, systemctl_path, services):
# list units as systemd sees them
rc, stdout, stderr = self.module.run_command("%s list-units --no-pager --type service --all" % systemctl_path, use_unsafe_shell=True)
if rc != 0:
self.module.warn("Could not list units from systemd: %s" % stderr)
else:
for line in [svc_line for svc_line in stdout.split('\n') if '.service' in svc_line]:
state_val = "stopped"
status_val = "unknown"
fields = line.split()
for bad in self.BAD_STATES:
if bad in fields: # dot is 0
status_val = bad
fields = fields[1:]
break
else:
# active/inactive
status_val = fields[2]
                # the fields list is now normalized, so indexing is predictable
service_name = fields[0]
if fields[3] == "running":
state_val = "running"
services[service_name] = {"name": service_name, "state": state_val, "status": status_val, "source": "systemd"}
def _list_from_unit_files(self, systemctl_path, services):
# now try unit files for complete picture and final 'status'
rc, stdout, stderr = self.module.run_command("%s list-unit-files --no-pager --type service --all" % systemctl_path, use_unsafe_shell=True)
if rc != 0:
self.module.warn("Could not get unit files data from systemd: %s" % stderr)
else:
for line in [svc_line for svc_line in stdout.split('\n') if '.service' in svc_line]:
# there is one more column (VENDOR PRESET) from `systemctl list-unit-files` for systemd >= 245
try:
service_name, status_val = line.split()[:2]
except IndexError:
self.module.fail_json(msg="Malformed output discovered from systemd list-unit-files: {0}".format(line))
if service_name not in services:
rc, stdout, stderr = self.module.run_command("%s show %s --property=ActiveState" % (systemctl_path, service_name), use_unsafe_shell=True)
state = 'unknown'
if not rc and stdout != '':
state = stdout.replace('ActiveState=', '').rstrip()
services[service_name] = {"name": service_name, "state": state, "status": status_val, "source": "systemd"}
elif services[service_name]["status"] not in self.BAD_STATES:
services[service_name]["status"] = status_val
def gather_services(self):
services = {}
if self.systemd_enabled():
systemctl_path = self.module.get_bin_path("systemctl", opt_dirs=["/usr/bin", "/usr/local/bin"])
if systemctl_path:
self._list_from_units(systemctl_path, services)
self._list_from_unit_files(systemctl_path, services)
return services
class AIXScanService(BaseService):
def gather_services(self):
services = {}
if platform.system() == 'AIX':
lssrc_path = self.module.get_bin_path("lssrc")
if lssrc_path:
rc, stdout, stderr = self.module.run_command("%s -a" % lssrc_path)
if rc != 0:
self.module.warn("lssrc could not retrieve service data (%s): %s" % (rc, stderr))
else:
for line in stdout.split('\n'):
line_data = line.split()
if len(line_data) < 2:
continue # Skipping because we expected more data
if line_data[0] == "Subsystem":
continue # Skip header
service_name = line_data[0]
if line_data[-1] == "active":
service_state = "running"
elif line_data[-1] == "inoperative":
service_state = "stopped"
else:
service_state = "unknown"
services[service_name] = {"name": service_name, "state": service_state, "source": "src"}
return services
class OpenBSDScanService(BaseService):
def query_rcctl(self, cmd):
svcs = []
rc, stdout, stderr = self.module.run_command("%s ls %s" % (self.rcctl_path, cmd))
if 'needs root privileges' in stderr.lower():
self.module.warn('rcctl requires root privileges')
else:
for svc in stdout.split('\n'):
if svc == '':
continue
else:
svcs.append(svc)
return svcs
def get_info(self, name):
info = {}
rc, stdout, stderr = self.module.run_command("%s get %s" % (self.rcctl_path, name))
if 'needs root privileges' in stderr.lower():
self.module.warn('rcctl requires root privileges')
else:
undy = '%s_' % name
for variable in stdout.split('\n'):
if variable == '' or '=' not in variable:
continue
else:
k, v = variable.replace(undy, '', 1).split('=')
info[k] = v
return info
def gather_services(self):
services = {}
self.rcctl_path = self.module.get_bin_path("rcctl")
if self.rcctl_path:
            # populate services with all known services first
for svc in self.query_rcctl('all'):
services[svc] = {'name': svc, 'source': 'rcctl', 'rogue': False}
services[svc].update(self.get_info(svc))
for svc in self.query_rcctl('on'):
services[svc].update({'status': 'enabled'})
for svc in self.query_rcctl('started'):
services[svc].update({'state': 'running'})
# Override the state for services which are marked as 'failed'
for svc in self.query_rcctl('failed'):
services[svc].update({'state': 'failed'})
for svc in services.keys():
# Based on the list of services that are enabled/failed, determine which are disabled
if services[svc].get('status') is None:
services[svc].update({'status': 'disabled'})
                # and do the same for those that aren't running
if services[svc].get('state') is None:
services[svc].update({'state': 'stopped'})
for svc in self.query_rcctl('rogue'):
services[svc]['rogue'] = True
return services
def main():
module = AnsibleModule(argument_spec=dict(), supports_check_mode=True)
locale = get_best_parsable_locale(module)
module.run_command_environ_update = dict(LANG=locale, LC_ALL=locale)
service_modules = (ServiceScanService, SystemctlScanService, AIXScanService, OpenBSDScanService)
all_services = {}
for svc_module in service_modules:
svcmod = svc_module(module)
svc = svcmod.gather_services()
if svc:
all_services.update(svc)
if len(all_services) == 0:
results = dict(skipped=True, msg="Failed to find any services. This can be due to privileges or some other configuration issue.")
else:
results = dict(ansible_facts=dict(services=all_services))
module.exit_json(**results)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,826 |
ansible_reboot_pending always reports null on Windows Server
|
### Summary
On Windows the setup module sets the variable ansible_reboot_pending using the function Get-PendingRebootStatus in module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1.
The $featureData variable is only populated on Windows Server, where the root\microsoft\windows\servermanager CIM class exists.
The expression if(($featureData -and $featureData.RequiresReboot) -or $regData -or $CBSRebootStatus) is then evaluated, but being an array, $featureData has no "RequiresReboot" property:
```
PS C:\Users\c.mammoli.adm> $featureData.GetType()
IsPublic IsSerial Name BaseType
-------- -------- ---- --------
True True Object[] System.Array
PS C:\Users\c.mammoli.adm> $featureData.RequiresReboot
The property 'RequiresReboot' cannot be found on this object. Verify that the property exists.
At line:1 char:1
+ $featureData.RequiresReboot
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [], PropertyNotFoundException
+ FullyQualifiedErrorId : PropertyNotFoundStrict
```
Because the function runs with "Set-StrictMode -Version 2.0", the property access throws, the function errors out, and the setup module returns $null.
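A strict-mode-safe guard could look roughly like the following (an illustrative sketch only, not the merged fix); it filters the result set instead of dereferencing a property on the whole array:
```powershell
$featureData = Invoke-CimMethod -EA Ignore -Name GetServerFeature -Namespace root\microsoft\windows\servermanager -Class MSFT_ServerManagerTasks
# Only keep results that actually expose RequiresReboot; indexing PSObject.Properties
# returns $null for a missing member instead of throwing under strict mode.
$featureReboot = $featureData | Where-Object {
    $_.PSObject.Properties['RequiresReboot'] -and $_.RequiresReboot
}
if ($featureReboot -or $regData -or $CBSRebootStatus) {
    return $True
}
else {
    return $False
}
```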
### Issue Type
Bug Report
### Component Name
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1
lib/ansible/modules/setup.py
### Ansible Version
```console
ansible 2.9.27
config file = /home/c.mammoli/.ansible.cfg
configured module search path = ['/home/c.mammoli/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/c.mammoli/.local/lib/python3.9/site-packages/ansible
executable location = /home/c.mammoli/.local/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
```
### Configuration
```console
ANSIBLE_PIPELINING(/home/c.mammoli/.ansible.cfg) = True
ANSIBLE_SSH_CONTROL_PATH_DIR(/home/c.mammoli/.ansible.cfg) = /dev/shm/ansible_control_path
CACHE_PLUGIN(/home/c.mammoli/.ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/home/c.mammoli/.ansible.cfg) = ~/.ansible_fact_cache.json
DEFAULT_GATHERING(/home/c.mammoli/.ansible.cfg) = implicit
DEFAULT_HOST_LIST(/home/c.mammoli/.ansible.cfg) = ['/home/c.mammoli/devel/ansible/inventory_netbox.yml']
DEFAULT_LOAD_CALLBACK_PLUGINS(/home/c.mammoli/.ansible.cfg) = True
DEFAULT_LOOKUP_PLUGIN_PATH(/home/c.mammoli/.ansible.cfg) = ['/home/c.mammoli/devel/ansible/lookup_plugins']
DEFAULT_ROLES_PATH(/home/c.mammoli/.ansible.cfg) = ['/home/c.mammoli/devel/ansible/roles']
DEFAULT_STDOUT_CALLBACK(/home/c.mammoli/.ansible.cfg) = yaml
HOST_KEY_CHECKING(/home/c.mammoli/.ansible.cfg) = False
INVENTORY_ENABLED(/home/c.mammoli/.ansible.cfg) = ['yaml', 'ini', 'tower', 'netbox.netbox.nb_inventory', 'script']
RETRY_FILES_ENABLED(/home/c.mammoli/.ansible.cfg) = True
RETRY_FILES_SAVE_PATH(/home/c.mammoli/.ansible.cfg) = /home/c.mammoli/.ansible-retry
c.mammoli ~/devel/ansible $
```
### OS / Environment
Rocky 8
### Steps to Reproduce
```yaml
---
- hosts: windows_server_host
gather_facts: true
tasks:
- name: run the Function Get-PendingRebootStatus function from module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1 without StrictMode
win_shell: |
Function Get-PendingRebootStatus
{
<#
.SYNOPSIS
Check if reboot is required, if so notify CA.
Function returns true if computer has a pending reboot
#>
$featureData = Invoke-CimMethod -EA Ignore -Name GetServerFeature -Namespace root\microsoft\windows\servermanager -Class MSFT_ServerManagerTasks
$regData = Get-ItemProperty "HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager" "PendingFileRenameOperations" -EA Ignore
$CBSRebootStatus = Get-ChildItem "HKLM:\\SOFTWARE\Microsoft\Windows\CurrentVersion\Component Based Servicing" -ErrorAction SilentlyContinue| Where-Object {$_.PSChildName -eq "RebootPending"}
if(($featureData -and $featureData.RequiresReboot) -or $regData -or $CBSRebootStatus)
{
return $True
} else {
return $False
}
}
Get-PendingRebootStatus
register: PendingRebootStatus
- name: reboot pending according to the function
debug:
var: PendingRebootStatus.stdout
- name: reboot pending according to the setup module
debug:
var: ansible_reboot_pending
```
### Expected Results
ansible_reboot_pending is set to either true or false
### Actual Results
```console
ansible_reboot_pending is set to null
```
### Code of Conduct
- [x] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76826
|
https://github.com/ansible/ansible/pull/82076
|
fb8ede22e1641c0df37a31cba569841fdcc529c3
|
f5d7dc1a97642e26dcc5873388642d84340b642e
| 2022-01-24T11:06:24Z |
python
| 2023-10-26T19:16:05Z |
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1
|
# Copyright (c), Michael DeHaan <[email protected]>, 2014, and others
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
Set-StrictMode -Version 2.0
$ErrorActionPreference = "Stop"
Function Set-Attr($obj, $name, $value) {
<#
.SYNOPSIS
Helper function to set an "attribute" on a psobject instance in PowerShell.
This is a convenience to make adding Members to the object easier and
slightly more pythonic
.EXAMPLE
Set-Attr $result "changed" $true
#>
# If the provided $obj is undefined, define one to be nice
If (-not $obj.GetType) {
$obj = @{ }
}
Try {
$obj.$name = $value
}
Catch {
$obj | Add-Member -Force -MemberType NoteProperty -Name $name -Value $value
}
}
Function Exit-Json($obj) {
<#
.SYNOPSIS
Helper function to convert a PowerShell object to JSON and output it, exiting
the script
.EXAMPLE
Exit-Json $result
#>
# If the provided $obj is undefined, define one to be nice
If (-not $obj.GetType) {
$obj = @{ }
}
if (-not $obj.ContainsKey('changed')) {
Set-Attr -obj $obj -name "changed" -value $false
}
Write-Output $obj | ConvertTo-Json -Compress -Depth 99
Exit
}
Function Fail-Json($obj, $message = $null) {
<#
.SYNOPSIS
Helper function to add the "msg" property and "failed" property, convert the
PowerShell Hashtable to JSON and output it, exiting the script
.EXAMPLE
Fail-Json $result "This is the failure message"
#>
if ($obj -is [hashtable] -or $obj -is [psobject]) {
# Nothing to do
}
elseif ($obj -is [string] -and $null -eq $message) {
# If we weren't given 2 args, and the only arg was a string,
# create a new Hashtable and use the arg as the failure message
$message = $obj
$obj = @{ }
}
else {
# If the first argument is undefined or a different type,
# make it a Hashtable
$obj = @{ }
}
# Still using Set-Attr for PSObject compatibility
Set-Attr -obj $obj -name "msg" -value $message
Set-Attr -obj $obj -name "failed" -value $true
if (-not $obj.ContainsKey('changed')) {
Set-Attr -obj $obj -name "changed" -value $false
}
Write-Output $obj | ConvertTo-Json -Compress -Depth 99
Exit 1
}
Function Add-Warning($obj, $message) {
<#
.SYNOPSIS
Helper function to add warnings, even if the warnings attribute was
not already set up. This is a convenience for the module developer
so they do not have to check for the attribute prior to adding.
#>
if (-not $obj.ContainsKey("warnings")) {
$obj.warnings = @()
}
elseif ($obj.warnings -isnot [array]) {
throw "Add-Warning: warnings attribute is not an array"
}
$obj.warnings += $message
}
Function Add-DeprecationWarning($obj, $message, $version = $null) {
<#
.SYNOPSIS
Helper function to add deprecations, even if the deprecations attribute was
not already set up. This is a convenience for the module developer
so they do not have to check for the attribute prior to adding.
#>
if (-not $obj.ContainsKey("deprecations")) {
$obj.deprecations = @()
}
elseif ($obj.deprecations -isnot [array]) {
throw "Add-DeprecationWarning: deprecations attribute is not a list"
}
$obj.deprecations += @{
msg = $message
version = $version
}
}
Function Expand-Environment($value) {
<#
.SYNOPSIS
Helper function to expand environment variables in values. By default
it turns any type to a string, but we ensure $null remains $null.
#>
if ($null -ne $value) {
[System.Environment]::ExpandEnvironmentVariables($value)
}
else {
$value
}
}
Function Get-AnsibleParam {
<#
.SYNOPSIS
Helper function to get an "attribute" from a psobject instance in PowerShell.
This is a convenience to make getting Members from an object easier and
slightly more pythonic
.EXAMPLE
$attr = Get-AnsibleParam $response "code" -default "1"
.EXAMPLE
Get-AnsibleParam -obj $params -name "State" -default "Present" -ValidateSet "Present","Absent" -resultobj $resultobj -failifempty $true
Get-AnsibleParam also supports Parameter validation to save you from coding that manually
Note that if you use the failifempty option, you do need to specify resultobject as well.
#>
param (
$obj,
$name,
$default = $null,
$resultobj = @{},
$failifempty = $false,
$emptyattributefailmessage,
$ValidateSet,
$ValidateSetErrorMessage,
$type = $null,
$aliases = @()
)
# Check if the provided Member $name or aliases exist in $obj and return it or the default.
try {
$found = $null
# First try to find preferred parameter $name
$aliases = @($name) + $aliases
# Iterate over aliases to find acceptable Member $name
foreach ($alias in $aliases) {
if ($obj.ContainsKey($alias)) {
$found = $alias
break
}
}
if ($null -eq $found) {
throw
}
$name = $found
if ($ValidateSet) {
if ($ValidateSet -contains ($obj.$name)) {
$value = $obj.$name
}
else {
if ($null -eq $ValidateSetErrorMessage) {
#Auto-generated error should be sufficient in most use cases
$ValidateSetErrorMessage = "Get-AnsibleParam: Argument $name needs to be one of $($ValidateSet -join ",") but was $($obj.$name)."
}
Fail-Json -obj $resultobj -message $ValidateSetErrorMessage
}
}
else {
$value = $obj.$name
}
}
catch {
if ($failifempty -eq $false) {
$value = $default
}
else {
if (-not $emptyattributefailmessage) {
$emptyattributefailmessage = "Get-AnsibleParam: Missing required argument: $name"
}
Fail-Json -obj $resultobj -message $emptyattributefailmessage
}
}
# If $null -eq $value, the parameter was unspecified by the user (deliberately or not)
# Please leave $null-values intact, modules need to know if a parameter was specified
if ($null -eq $value) {
return $null
}
if ($type -eq "path") {
# Expand environment variables on path-type
$value = Expand-Environment($value)
# Test if a valid path is provided
if (-not (Test-Path -IsValid $value)) {
$path_invalid = $true
# could still be a valid-shaped path with a nonexistent drive letter
if ($value -match "^\w:") {
# rewrite path with a valid drive letter and recheck the shape- this might still fail, eg, a nonexistent non-filesystem PS path
if (Test-Path -IsValid $(@(Get-PSDrive -PSProvider Filesystem)[0].Name + $value.Substring(1))) {
$path_invalid = $false
}
}
if ($path_invalid) {
Fail-Json -obj $resultobj -message "Get-AnsibleParam: Parameter '$name' has an invalid path '$value' specified."
}
}
}
elseif ($type -eq "str") {
# Convert str types to real Powershell strings
$value = $value.ToString()
}
elseif ($type -eq "bool") {
# Convert boolean types to real Powershell booleans
$value = $value | ConvertTo-Bool
}
elseif ($type -eq "int") {
# Convert int types to real Powershell integers
$value = $value -as [int]
}
elseif ($type -eq "float") {
# Convert float types to real Powershell floats
$value = $value -as [float]
}
elseif ($type -eq "list") {
if ($value -is [array]) {
# Nothing to do
}
elseif ($value -is [string]) {
# Convert string type to real Powershell array
$value = $value.Split(",").Trim()
}
elseif ($value -is [int]) {
$value = @($value)
}
else {
Fail-Json -obj $resultobj -message "Get-AnsibleParam: Parameter '$name' is not a YAML list."
}
# , is not a typo, forces it to return as a list when it is empty or only has 1 entry
return , $value
}
return $value
}
#Alias Get-attr-->Get-AnsibleParam for backwards compat. Only add when needed to ease debugging of scripts
If (-not(Get-Alias -Name "Get-attr" -ErrorAction SilentlyContinue)) {
New-Alias -Name Get-attr -Value Get-AnsibleParam
}
Function ConvertTo-Bool {
<#
.SYNOPSIS
Helper filter/pipeline function to convert a value to boolean following current
Ansible practices
.EXAMPLE
$is_true = "true" | ConvertTo-Bool
#>
param(
[parameter(valuefrompipeline = $true)]
$obj
)
process {
$boolean_strings = "yes", "on", "1", "true", 1
$obj_string = [string]$obj
if (($obj -is [boolean] -and $obj) -or $boolean_strings -contains $obj_string.ToLower()) {
return $true
}
else {
return $false
}
}
}
Function Parse-Args {
<#
.SYNOPSIS
Helper function to parse Ansible JSON arguments from a "file" passed as
the single argument to the module.
.EXAMPLE
$params = Parse-Args $args
#>
[Diagnostics.CodeAnalysis.SuppressMessageAttribute("PSUseSingularNouns", "", Justification = "Cannot change the name now")]
param ($arguments, $supports_check_mode = $false)
$params = New-Object psobject
If ($arguments.Length -gt 0) {
$params = Get-Content $arguments[0] | ConvertFrom-Json
}
Else {
$params = $complex_args
}
$check_mode = Get-AnsibleParam -obj $params -name "_ansible_check_mode" -type "bool" -default $false
If ($check_mode -and -not $supports_check_mode) {
Exit-Json @{
skipped = $true
changed = $false
msg = "remote module does not support check mode"
}
}
return $params
}
Function Get-FileChecksum($path, $algorithm = 'sha1') {
<#
.SYNOPSIS
Helper function to calculate a hash of a file in a way which PowerShell 3
and above can handle
#>
If (Test-Path -LiteralPath $path -PathType Leaf) {
switch ($algorithm) {
'md5' { $sp = New-Object -TypeName System.Security.Cryptography.MD5CryptoServiceProvider }
'sha1' { $sp = New-Object -TypeName System.Security.Cryptography.SHA1CryptoServiceProvider }
'sha256' { $sp = New-Object -TypeName System.Security.Cryptography.SHA256CryptoServiceProvider }
'sha384' { $sp = New-Object -TypeName System.Security.Cryptography.SHA384CryptoServiceProvider }
'sha512' { $sp = New-Object -TypeName System.Security.Cryptography.SHA512CryptoServiceProvider }
default { Fail-Json @{} "Unsupported hash algorithm supplied '$algorithm'" }
}
If ($PSVersionTable.PSVersion.Major -ge 4) {
$raw_hash = Get-FileHash -LiteralPath $path -Algorithm $algorithm
$hash = $raw_hash.Hash.ToLower()
}
Else {
$fp = [System.IO.File]::Open($path, [System.IO.Filemode]::Open, [System.IO.FileAccess]::Read, [System.IO.FileShare]::ReadWrite)
$hash = [System.BitConverter]::ToString($sp.ComputeHash($fp)).Replace("-", "").ToLower()
$fp.Dispose()
}
}
ElseIf (Test-Path -LiteralPath $path -PathType Container) {
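# "3" is the sentinel checksum value Ansible uses for a directory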
$hash = "3"
}
Else {
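# "1" is the sentinel checksum value Ansible uses for a missing file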
$hash = "1"
}
return $hash
}
Function Get-PendingRebootStatus {
<#
.SYNOPSIS
Check if reboot is required, if so notify CA.
Function returns true if computer has a pending reboot
#>
$featureData = Invoke-CimMethod -EA Ignore -Name GetServerFeature -Namespace root\microsoft\windows\servermanager -Class MSFT_ServerManagerTasks
$regData = Get-ItemProperty "HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager" "PendingFileRenameOperations" -EA Ignore
$CBSRebootStatus = Get-ChildItem "HKLM:\\SOFTWARE\Microsoft\Windows\CurrentVersion\Component Based Servicing" -ErrorAction SilentlyContinue |
Where-Object { $_.PSChildName -eq "RebootPending" }
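# NOTE: on Windows Server $featureData comes back as an object[], and under
# Set-StrictMode -Version 2.0 accessing .RequiresReboot on that array throws
# PropertyNotFoundException, which is why ansible_reboot_pending ends up $null
# (see issue #76826 above)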
if (($featureData -and $featureData.RequiresReboot) -or $regData -or $CBSRebootStatus) {
return $True
}
else {
return $False
}
}
# this line must stay at the bottom to ensure all defined module parts are exported
Export-ModuleMember -Alias * -Function * -Cmdlet *
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,782 |
get_url with client_key/client_cert fails with 403 forbidden on centos stream 8
|
### Summary
We have a web server that requires a client cert for access. We use get_url to retrieve a file with client_key/client_cert. This appears to be working everywhere except on my CentOS Stream 8 machine.
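Note: the changelog fragment attached to the eventual fix is named urls-tls13-post-handshake-auth.yml, which points at TLS 1.3 post-handshake client authentication, a feature Python's ssl module leaves disabled by default while curl negotiates it automatically. A minimal sketch of that hypothesis outside Ansible (the URL and certificate paths below are the ones from this report; post_handshake_auth requires Python 3.8+):

```python
# Hypothetical reproduction: when a TLS 1.3 server only requests the client
# certificate after the handshake, urllib sends no cert unless
# post_handshake_auth is enabled on the SSLContext, and the server answers 403.
import ssl
import urllib.request

ctx = ssl.create_default_context()
ctx.load_cert_chain(
    "/etc/pki/tls/certs/rufous.cora.nwra.com.crt",
    keyfile="/etc/pki/tls/private/rufous.cora.nwra.com.key",
)
# Comment the next line out to reproduce the 403 against such a server:
ctx.post_handshake_auth = True

with urllib.request.urlopen(
    "https://microsoft.cora.nwra.com/keys/microsoft.asc", context=ctx
) as resp:
    print(resp.status, resp.read()[:60])
```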
### Issue Type
Bug Report
### Component Name
get_url
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.3]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.11/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.4 (main, Aug 11 2023, 13:46:19) [GCC 8.5.0 20210514 (Red Hat 8.5.0-20)] (/usr/bin/python3.11)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /etc/ansible/ansible.cfg
```
### OS / Environment
CentOS Stream 8
### Steps to Reproduce
```console
ansible -m ansible.builtin.get_url -a 'url=https://microsoft.cora.nwra.com/keys/microsoft.asc dest=/etc/pki/rpm-gpg/microsoft.asc client_key=/etc/pki/tls/private/rufous.cora.nwra.com.key client_cert=/etc/pki/tls/certs/rufous.cora.nwra.com.crt mode="0644"'
```
### Expected Results
File is successfully downloaded. It works fine with curl:
```
# curl --cert /etc/pki/tls/certs/rufous.cora.nwra.com.crt --key /etc/pki/tls/private/rufous.cora.nwra.com.key https://microsoft.cora.nwra.com/keys/microsoft.asc
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1.4.7 (GNU/Linux)
mQENBFYxWIwBCADAKoZhZlJxGNGWzqV+1OG1xiQeoowKhssGAKvd+buXCGISZJwT
...
```
### Actual Results
```console
ansible [core 2.15.3]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.11/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.4 (main, Aug 11 2023, 13:46:19) [GCC 8.5.0 20210514 (Red Hat 8.5.0-20)] (/usr/bin/python3.11)
jinja version = 3.1.2
libyaml = True
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
Loading collection ansible.builtin from
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
Loading callback plugin minimal of type stdout, v2.0 from /usr/lib/python3.11/site-packages/ansible/plugins/callback/minimal.py
Attempting to use 'default' callback.
Skipping callback 'default', as we already have a stdout callback.
Attempting to use 'junit' callback.
Attempting to use 'minimal' callback.
Skipping callback 'minimal', as we already have a stdout callback.
Attempting to use 'oneline' callback.
Skipping callback 'oneline', as we already have a stdout callback.
Attempting to use 'tree' callback.
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1695744556.844725-24991-30365752363720 `" && echo ansible-tmp-1695744556.844725-24991-30365752363720="` echo /root/.ansible/tmp/ansible-tmp-1695744556.844725-24991-30365752363720 `" ) && sleep 0'
Including module_utils file ansible/__init__.py
Including module_utils file ansible/module_utils/__init__.py
Including module_utils file ansible/module_utils/_text.py
Including module_utils file ansible/module_utils/basic.py
Including module_utils file ansible/module_utils/common/_json_compat.py
Including module_utils file ansible/module_utils/common/__init__.py
Including module_utils file ansible/module_utils/common/_utils.py
Including module_utils file ansible/module_utils/common/arg_spec.py
Including module_utils file ansible/module_utils/common/file.py
Including module_utils file ansible/module_utils/common/locale.py
Including module_utils file ansible/module_utils/common/parameters.py
Including module_utils file ansible/module_utils/common/collections.py
Including module_utils file ansible/module_utils/common/process.py
Including module_utils file ansible/module_utils/common/sys_info.py
Including module_utils file ansible/module_utils/common/text/converters.py
Including module_utils file ansible/module_utils/common/text/__init__.py
Including module_utils file ansible/module_utils/common/text/formatters.py
Including module_utils file ansible/module_utils/common/validation.py
Including module_utils file ansible/module_utils/common/warnings.py
Including module_utils file ansible/module_utils/compat/selectors.py
Including module_utils file ansible/module_utils/compat/__init__.py
Including module_utils file ansible/module_utils/compat/_selectors2.py
Including module_utils file ansible/module_utils/compat/selinux.py
Including module_utils file ansible/module_utils/distro/__init__.py
Including module_utils file ansible/module_utils/distro/_distro.py
Including module_utils file ansible/module_utils/errors.py
Including module_utils file ansible/module_utils/parsing/convert_bool.py
Including module_utils file ansible/module_utils/parsing/__init__.py
Including module_utils file ansible/module_utils/pycompat24.py
Including module_utils file ansible/module_utils/six/__init__.py
Including module_utils file ansible/module_utils/urls.py
Including module_utils file ansible/module_utils/compat/typing.py
Using module file /usr/lib/python3.11/site-packages/ansible/modules/get_url.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-249871ktvk910/tmp73jre98z TO /root/.ansible/tmp/ansible-tmp-1695744556.844725-24991-30365752363720/AnsiballZ_get_url.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1695744556.844725-24991-30365752363720/ /root/.ansible/tmp/ansible-tmp-1695744556.844725-24991-30365752363720/AnsiballZ_get_url.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python3.11 /root/.ansible/tmp/ansible-tmp-1695744556.844725-24991-30365752363720/AnsiballZ_get_url.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1695744556.844725-24991-30365752363720/ > /dev/null 2>&1 && sleep 0'
localhost | FAILED! => {
"changed": false,
"dest": "/etc/pki/rpm-gpg/microsoft.asc",
"elapsed": 0,
"gid": 0,
"group": "root",
"invocation": {
"module_args": {
"attributes": null,
"backup": false,
"checksum": "",
"ciphers": null,
"client_cert": "/etc/pki/tls/certs/rufous.cora.nwra.com.crt",
"client_key": "/etc/pki/tls/private/rufous.cora.nwra.com.key",
"decompress": true,
"dest": "/etc/pki/rpm-gpg/microsoft.asc",
"force": false,
"force_basic_auth": false,
"group": null,
"headers": null,
"http_agent": "ansible-httpget",
"mode": "0644",
"owner": null,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"timeout": 10,
"tmp_dest": null,
"unredirected_headers": [],
"unsafe_writes": false,
"url": "https://microsoft.cora.nwra.com/keys/microsoft.asc",
"url_password": null,
"url_username": null,
"use_gssapi": false,
"use_netrc": true,
"use_proxy": true,
"validate_certs": true
}
},
"mode": "0644",
"msg": "Request failed",
"owner": "root",
"response": "HTTP Error 403: Forbidden",
"secontext": "system_u:object_r:cert_t:s0",
"size": 983,
"state": "file",
"status_code": 403,
"uid": 0,
"url": "https://microsoft.cora.nwra.com/keys/microsoft.asc"
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81782
|
https://github.com/ansible/ansible/pull/82063
|
f5a0c0dfc8b1aa885536cc59d848698d28042ca3
|
b34f4a559ff3b4521313f5832f93806d1db853c8
| 2023-09-26T16:28:42Z |
python
| 2023-10-27T02:00:34Z |
changelogs/fragments/urls-tls13-post-handshake-auth.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,782 |
get_url with client_key/client_cert fails with 403 forbidden on centos stream 8
|
### Summary
We have a web server that requires a client cert for access. We use get_url to retrieve a file with client_key/client_cert. This appears to be working everywhere except on my CentOS Stream 8 machine.
### Issue Type
Bug Report
### Component Name
get_url
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.3]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.11/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.4 (main, Aug 11 2023, 13:46:19) [GCC 8.5.0 20210514 (Red Hat 8.5.0-20)] (/usr/bin/python3.11)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /etc/ansible/ansible.cfg
```
### OS / Environment
CentOS Stream 8
### Steps to Reproduce
```console
ansible -m ansible.builtin.get_url -a 'url=https://microsoft.cora.nwra.com/keys/microsoft.asc dest=/etc/pki/rpm-gpg/microsoft.asc client_key=/etc/pki/tls/private/rufous.cora.nwra.com.key client_cert=/etc/pki/tls/certs/rufous.cora.nwra.com.crt mode="0644"'
```
### Expected Results
File is successfully downloaded. It works fine with curl:
```
# curl --cert /etc/pki/tls/certs/rufous.cora.nwra.com.crt --key /etc/pki/tls/private/rufous.cora.nwra.com.key https://microsoft.cora.nwra.com/keys/microsoft.asc
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1.4.7 (GNU/Linux)
mQENBFYxWIwBCADAKoZhZlJxGNGWzqV+1OG1xiQeoowKhssGAKvd+buXCGISZJwT
...
```
### Actual Results
```console
ansible [core 2.15.3]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.11/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.4 (main, Aug 11 2023, 13:46:19) [GCC 8.5.0 20210514 (Red Hat 8.5.0-20)] (/usr/bin/python3.11)
jinja version = 3.1.2
libyaml = True
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
Loading collection ansible.builtin from
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
Loading callback plugin minimal of type stdout, v2.0 from /usr/lib/python3.11/site-packages/ansible/plugins/callback/minimal.py
Attempting to use 'default' callback.
Skipping callback 'default', as we already have a stdout callback.
Attempting to use 'junit' callback.
Attempting to use 'minimal' callback.
Skipping callback 'minimal', as we already have a stdout callback.
Attempting to use 'oneline' callback.
Skipping callback 'oneline', as we already have a stdout callback.
Attempting to use 'tree' callback.
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1695744556.844725-24991-30365752363720 `" && echo ansible-tmp-1695744556.844725-24991-30365752363720="` echo /root/.ansible/tmp/ansible-tmp-1695744556.844725-24991-30365752363720 `" ) && sleep 0'
Including module_utils file ansible/__init__.py
Including module_utils file ansible/module_utils/__init__.py
Including module_utils file ansible/module_utils/_text.py
Including module_utils file ansible/module_utils/basic.py
Including module_utils file ansible/module_utils/common/_json_compat.py
Including module_utils file ansible/module_utils/common/__init__.py
Including module_utils file ansible/module_utils/common/_utils.py
Including module_utils file ansible/module_utils/common/arg_spec.py
Including module_utils file ansible/module_utils/common/file.py
Including module_utils file ansible/module_utils/common/locale.py
Including module_utils file ansible/module_utils/common/parameters.py
Including module_utils file ansible/module_utils/common/collections.py
Including module_utils file ansible/module_utils/common/process.py
Including module_utils file ansible/module_utils/common/sys_info.py
Including module_utils file ansible/module_utils/common/text/converters.py
Including module_utils file ansible/module_utils/common/text/__init__.py
Including module_utils file ansible/module_utils/common/text/formatters.py
Including module_utils file ansible/module_utils/common/validation.py
Including module_utils file ansible/module_utils/common/warnings.py
Including module_utils file ansible/module_utils/compat/selectors.py
Including module_utils file ansible/module_utils/compat/__init__.py
Including module_utils file ansible/module_utils/compat/_selectors2.py
Including module_utils file ansible/module_utils/compat/selinux.py
Including module_utils file ansible/module_utils/distro/__init__.py
Including module_utils file ansible/module_utils/distro/_distro.py
Including module_utils file ansible/module_utils/errors.py
Including module_utils file ansible/module_utils/parsing/convert_bool.py
Including module_utils file ansible/module_utils/parsing/__init__.py
Including module_utils file ansible/module_utils/pycompat24.py
Including module_utils file ansible/module_utils/six/__init__.py
Including module_utils file ansible/module_utils/urls.py
Including module_utils file ansible/module_utils/compat/typing.py
Using module file /usr/lib/python3.11/site-packages/ansible/modules/get_url.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-249871ktvk910/tmp73jre98z TO /root/.ansible/tmp/ansible-tmp-1695744556.844725-24991-30365752363720/AnsiballZ_get_url.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1695744556.844725-24991-30365752363720/ /root/.ansible/tmp/ansible-tmp-1695744556.844725-24991-30365752363720/AnsiballZ_get_url.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python3.11 /root/.ansible/tmp/ansible-tmp-1695744556.844725-24991-30365752363720/AnsiballZ_get_url.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1695744556.844725-24991-30365752363720/ > /dev/null 2>&1 && sleep 0'
localhost | FAILED! => {
"changed": false,
"dest": "/etc/pki/rpm-gpg/microsoft.asc",
"elapsed": 0,
"gid": 0,
"group": "root",
"invocation": {
"module_args": {
"attributes": null,
"backup": false,
"checksum": "",
"ciphers": null,
"client_cert": "/etc/pki/tls/certs/rufous.cora.nwra.com.crt",
"client_key": "/etc/pki/tls/private/rufous.cora.nwra.com.key",
"decompress": true,
"dest": "/etc/pki/rpm-gpg/microsoft.asc",
"force": false,
"force_basic_auth": false,
"group": null,
"headers": null,
"http_agent": "ansible-httpget",
"mode": "0644",
"owner": null,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"timeout": 10,
"tmp_dest": null,
"unredirected_headers": [],
"unsafe_writes": false,
"url": "https://microsoft.cora.nwra.com/keys/microsoft.asc",
"url_password": null,
"url_username": null,
"use_gssapi": false,
"use_netrc": true,
"use_proxy": true,
"validate_certs": true
}
},
"mode": "0644",
"msg": "Request failed",
"owner": "root",
"response": "HTTP Error 403: Forbidden",
"secontext": "system_u:object_r:cert_t:s0",
"size": 983,
"state": "file",
"status_code": 403,
"uid": 0,
"url": "https://microsoft.cora.nwra.com/keys/microsoft.asc"
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81782
|
https://github.com/ansible/ansible/pull/82063
|
f5a0c0dfc8b1aa885536cc59d848698d28042ca3
|
b34f4a559ff3b4521313f5832f93806d1db853c8
| 2023-09-26T16:28:42Z |
python
| 2023-10-27T02:00:34Z |
lib/ansible/module_utils/urls.py
|
# -*- coding: utf-8 -*-
# This code is part of Ansible, but is an independent component.
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by Ansible
# still belong to the author of the module, and may assign their own license
# to the complete work.
#
# Copyright (c), Michael DeHaan <[email protected]>, 2012-2013
# Copyright (c), Toshio Kuratomi <[email protected]>, 2015
# Copyright: Contributors to the Ansible project
#
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
'''
The **urls** utils module offers a replacement for the urllib python library.
urllib is the python stdlib way to retrieve files from the Internet but it
lacks some security features (around verifying SSL certificates) that users
should care about in most situations. Using the functions in this module corrects
deficiencies in the urllib module wherever possible.
There are also third-party libraries (for instance, requests) which can be used
to replace urllib with a more secure library. However, all third party libraries
require that the library be installed on the managed machine. That is an extra step
for users making use of a module. If possible, avoid third party libraries by using
this code instead.
'''
from __future__ import annotations
import base64
import email.mime.application
import email.mime.multipart
import email.mime.nonmultipart
import email.parser
import email.policy
import email.utils
import http.client
import mimetypes
import netrc
import os
import platform
import re
import socket
import tempfile
import traceback
import types # pylint: disable=unused-import
import urllib.error
import urllib.request
from contextlib import contextmanager
from http import cookiejar
from urllib.parse import unquote, urlparse, urlunparse
from urllib.request import BaseHandler
try:
import gzip
HAS_GZIP = True
GZIP_IMP_ERR = None
except ImportError:
HAS_GZIP = False
GZIP_IMP_ERR = traceback.format_exc()
GzipFile = object
else:
GzipFile = gzip.GzipFile # type: ignore[assignment,misc]
from ansible.module_utils.basic import missing_required_lib
from ansible.module_utils.common.collections import Mapping, is_sequence
from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text
try:
import ssl
HAS_SSL = True
except Exception:
HAS_SSL = False
HAS_CRYPTOGRAPHY = True
try:
from cryptography import x509
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import UnsupportedAlgorithm
except ImportError:
HAS_CRYPTOGRAPHY = False
# Old import for GSSAPI authentication, this is not used in urls.py but kept for backwards compatibility.
try:
import urllib_gssapi # pylint: disable=unused-import
HAS_GSSAPI = True
except ImportError:
HAS_GSSAPI = False
GSSAPI_IMP_ERR = None
try:
import gssapi
class HTTPGSSAPIAuthHandler(BaseHandler):
""" Handles Negotiate/Kerberos support through the gssapi library. """
AUTH_HEADER_PATTERN = re.compile(r'(?:.*)\s*(Negotiate|Kerberos)\s*([^,]*),?', re.I)
handler_order = 480 # Handle before Digest authentication
def __init__(self, username=None, password=None):
self.username = username
self.password = password
self._context = None
def get_auth_value(self, headers):
auth_match = self.AUTH_HEADER_PATTERN.search(headers.get('www-authenticate', ''))
if auth_match:
return auth_match.group(1), base64.b64decode(auth_match.group(2))
def http_error_401(self, req, fp, code, msg, headers):
# If we've already attempted the auth and we've reached this again then there was a failure.
if self._context:
return
parsed = urlparse(req.get_full_url())
auth_header = self.get_auth_value(headers)
if not auth_header:
return
auth_protocol, in_token = auth_header
username = None
if self.username:
username = gssapi.Name(self.username, name_type=gssapi.NameType.user)
if username and self.password:
if not hasattr(gssapi.raw, 'acquire_cred_with_password'):
raise NotImplementedError("Platform GSSAPI library does not support "
"gss_acquire_cred_with_password, cannot acquire GSSAPI credential with "
"explicit username and password.")
b_password = to_bytes(self.password, errors='surrogate_or_strict')
cred = gssapi.raw.acquire_cred_with_password(username, b_password, usage='initiate').creds
else:
cred = gssapi.Credentials(name=username, usage='initiate')
# Get the peer certificate for the channel binding token if possible (HTTPS). A bug on macOS causes the
# authentication to fail when the CBT is present. Just skip that platform.
cbt = None
cert = getpeercert(fp, True)
if cert and platform.system() != 'Darwin':
cert_hash = get_channel_binding_cert_hash(cert)
if cert_hash:
cbt = gssapi.raw.ChannelBindings(application_data=b"tls-server-end-point:" + cert_hash)
# TODO: We could add another option that is set to include the port in the SPN if desired in the future.
target = gssapi.Name("HTTP@%s" % parsed.hostname, gssapi.NameType.hostbased_service)
self._context = gssapi.SecurityContext(usage="initiate", name=target, creds=cred, channel_bindings=cbt)
resp = None
while not self._context.complete:
out_token = self._context.step(in_token)
if not out_token:
break
auth_header = '%s %s' % (auth_protocol, to_native(base64.b64encode(out_token)))
req.add_unredirected_header('Authorization', auth_header)
resp = self.parent.open(req)
# The response could contain a token that the client uses to validate the server
auth_header = self.get_auth_value(resp.headers)
if not auth_header:
break
in_token = auth_header[1]
return resp
except ImportError:
GSSAPI_IMP_ERR = traceback.format_exc()
HTTPGSSAPIAuthHandler = None # type: types.ModuleType | None # type: ignore[no-redef]
PEM_CERT_RE = re.compile(
r'^-----BEGIN CERTIFICATE-----\n.+?-----END CERTIFICATE-----$',
flags=re.M | re.S
)
#
# Exceptions
#
class ConnectionError(Exception):
"""Failed to connect to the server"""
pass
class ProxyError(ConnectionError):
"""Failure to connect because of a proxy"""
pass
class SSLValidationError(ConnectionError):
"""Failure to connect due to SSL validation failing
No longer used, but kept for backwards compatibility
"""
pass
class NoSSLError(SSLValidationError):
"""Needed to connect to an HTTPS url but no ssl library available to verify the certificate
No longer used, but kept for backwards compatibility
"""
pass
class MissingModuleError(Exception):
"""Failed to import 3rd party module required by the caller"""
def __init__(self, message, import_traceback, module=None):
super().__init__(message)
self.import_traceback = import_traceback
self.module = module
UnixHTTPSHandler = None
UnixHTTPSConnection = None
if HAS_SSL:
@contextmanager
def unix_socket_patch_httpconnection_connect():
'''Monkey patch ``http.client.HTTPConnection.connect`` to be ``UnixHTTPConnection.connect``
so that when calling ``super(UnixHTTPSConnection, self).connect()`` we get the
correct behavior of creating self.sock for the unix socket
'''
_connect = http.client.HTTPConnection.connect
http.client.HTTPConnection.connect = UnixHTTPConnection.connect
yield
http.client.HTTPConnection.connect = _connect
class UnixHTTPSConnection(http.client.HTTPSConnection): # type: ignore[no-redef]
def __init__(self, unix_socket):
self._unix_socket = unix_socket
def connect(self):
# This method exists simply to ensure we monkeypatch
# http.client.HTTPConnection.connect to call UnixHTTPConnection.connect
with unix_socket_patch_httpconnection_connect():
# Disable pylint check for the super() call. It complains about UnixHTTPSConnection
# being a NoneType because of the initial definition above, but it won't actually
# be a NoneType when this code runs
super().connect()
def __call__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
return self
class UnixHTTPSHandler(urllib.request.HTTPSHandler): # type: ignore[no-redef]
def __init__(self, unix_socket, **kwargs):
super().__init__(**kwargs)
self._unix_socket = unix_socket
def https_open(self, req):
kwargs = {}
try:
# deprecated: description='deprecated check_hostname' python_version='3.12'
kwargs['check_hostname'] = self._check_hostname
except AttributeError:
pass
return self.do_open(
UnixHTTPSConnection(self._unix_socket),
req,
context=self._context,
**kwargs
)
class UnixHTTPConnection(http.client.HTTPConnection):
'''Handles http requests to a unix socket file'''
def __init__(self, unix_socket):
self._unix_socket = unix_socket
def connect(self):
self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
self.sock.connect(self._unix_socket)
except OSError as e:
raise OSError('Invalid Socket File (%s): %s' % (self._unix_socket, e))
if self.timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
self.sock.settimeout(self.timeout)
def __call__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
return self
class UnixHTTPHandler(urllib.request.HTTPHandler):
'''Handler for Unix urls'''
def __init__(self, unix_socket, **kwargs):
super().__init__(**kwargs)
self._unix_socket = unix_socket
def http_open(self, req):
return self.do_open(UnixHTTPConnection(self._unix_socket), req)
class ParseResultDottedDict(dict):
'''
A dict that acts similarly to the ParseResult named tuple from urllib
'''
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.__dict__ = self
def as_list(self):
'''
Generate a list from this dict, that looks like the ParseResult named tuple
'''
return [self.get(k, None) for k in ('scheme', 'netloc', 'path', 'params', 'query', 'fragment')]
def generic_urlparse(parts):
'''
Returns a dictionary of url parts as parsed by urlparse,
but accounts for the fact that older versions of that
library do not support named attributes (ie. .netloc)
This method isn't of much use any longer, but is kept
in a minimal state for backwards compat.
'''
result = ParseResultDottedDict(parts._asdict())
result.update({
'username': parts.username,
'password': parts.password,
'hostname': parts.hostname,
'port': parts.port,
})
return result
def extract_pem_certs(data):
for match in PEM_CERT_RE.finditer(data):
yield match.group(0)
def get_response_filename(response):
url = response.geturl()
path = urlparse(url)[2]
filename = os.path.basename(path.rstrip('/')) or None
if filename:
filename = unquote(filename)
return response.headers.get_param('filename', header='content-disposition') or filename
def parse_content_type(response):
get_type = response.headers.get_content_type
get_param = response.headers.get_param
content_type = (get_type() or 'application/octet-stream').split(',')[0]
main_type, sub_type = content_type.split('/')
charset = (get_param('charset') or 'utf-8').split(',')[0]
return content_type, main_type, sub_type, charset
class GzipDecodedReader(GzipFile):
"""A file-like object to decode a response encoded with the gzip
method, as described in RFC 1952.
Largely copied from ``xmlrpclib``/``xmlrpc.client``
"""
def __init__(self, fp):
if not HAS_GZIP:
raise MissingModuleError(self.missing_gzip_error(), import_traceback=GZIP_IMP_ERR)
self._io = fp
super().__init__(mode='rb', fileobj=self._io)
def close(self):
try:
gzip.GzipFile.close(self)
finally:
self._io.close()
@staticmethod
def missing_gzip_error():
return missing_required_lib(
'gzip',
reason='to decompress gzip encoded responses. '
'Set "decompress" to False, to prevent attempting auto decompression'
)
class HTTPRedirectHandler(urllib.request.HTTPRedirectHandler):
"""This is an implementation of a RedirectHandler to match the
functionality provided by httplib2. It will utilize the value of
``follow_redirects`` to determine how redirects should be handled in
urllib.
"""
def __init__(self, follow_redirects=None):
self.follow_redirects = follow_redirects
def __call__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
return self
try:
urllib.request.HTTPRedirectHandler.http_error_308 # type: ignore[attr-defined]
except AttributeError:
# deprecated: description='urllib http 308 support' python_version='3.11'
http_error_308 = urllib.request.HTTPRedirectHandler.http_error_302
def redirect_request(self, req, fp, code, msg, headers, newurl):
follow_redirects = self.follow_redirects
# Preserve urllib2 compatibility
if follow_redirects in ('urllib2', 'urllib'):
return urllib.request.HTTPRedirectHandler.redirect_request(self, req, fp, code, msg, headers, newurl)
# Handle disabled redirects
elif follow_redirects in ('no', 'none', False):
raise urllib.error.HTTPError(newurl, code, msg, headers, fp)
method = req.get_method()
# Handle non-redirect HTTP status or invalid follow_redirects
if follow_redirects in ('all', 'yes', True):
if code < 300 or code >= 400:
raise urllib.error.HTTPError(req.get_full_url(), code, msg, headers, fp)
elif follow_redirects == 'safe':
if code < 300 or code >= 400 or method not in ('GET', 'HEAD'):
raise urllib.error.HTTPError(req.get_full_url(), code, msg, headers, fp)
else:
raise urllib.error.HTTPError(req.get_full_url(), code, msg, headers, fp)
data = req.data
origin_req_host = req.origin_req_host
# Be conciliant with URIs containing a space
newurl = newurl.replace(' ', '%20')
# Support redirect with payload and original headers
if code in (307, 308):
# Preserve payload and headers
req_headers = req.headers
else:
# Do not preserve payload and filter headers
data = None
req_headers = {k: v for k, v in req.headers.items()
if k.lower() not in ("content-length", "content-type", "transfer-encoding")}
# http://tools.ietf.org/html/rfc7231#section-6.4.4
if code == 303 and method != 'HEAD':
method = 'GET'
# Do what the browsers do, despite standards...
# First, turn 302s into GETs.
if code == 302 and method != 'HEAD':
method = 'GET'
# Second, if a POST is responded to with a 301, turn it into a GET.
if code == 301 and method == 'POST':
method = 'GET'
return urllib.request.Request(
newurl,
data=data,
headers=req_headers,
origin_req_host=origin_req_host,
unverifiable=True,
method=method.upper(),
)
def make_context(cafile=None, cadata=None, capath=None, ciphers=None, validate_certs=True, client_cert=None,
client_key=None):
if ciphers is None:
ciphers = []
if not is_sequence(ciphers):
raise TypeError('Ciphers must be a list. Got %s.' % ciphers.__class__.__name__)
context = ssl.create_default_context(cafile=cafile)
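# NOTE: per issue #81782 (changelog fragment urls-tls13-post-handshake-auth.yml),
# TLS 1.3 servers that request the client certificate after the handshake need
# post-handshake authentication enabled on the context, presumably:
# context.post_handshake_auth = True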
if not validate_certs:
context.options |= ssl.OP_NO_SSLv3
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE
# If cafile is passed, we are only using that for verification,
# don't add additional ca certs
if validate_certs and not cafile:
if not cadata:
cadata = bytearray()
cadata.extend(get_ca_certs(capath=capath)[0])
if cadata:
context.load_verify_locations(cadata=cadata)
if ciphers:
context.set_ciphers(':'.join(map(to_native, ciphers)))
if client_cert:
context.load_cert_chain(client_cert, keyfile=client_key)
return context
def get_ca_certs(cafile=None, capath=None):
# tries to find a valid CA cert in one of the
# standard locations for the current distribution
# Using a dict, instead of a set for order, the value is meaningless and will be None
# Not directly using a bytearray to avoid duplicates with fast lookup
cadata = {}
# If cafile is passed, we are only using that for verification,
# don't add additional ca certs
if cafile:
paths_checked = [cafile]
with open(to_bytes(cafile, errors='surrogate_or_strict'), 'r', errors='surrogateescape') as f:
for pem in extract_pem_certs(f.read()):
b_der = ssl.PEM_cert_to_DER_cert(pem)
cadata[b_der] = None
return bytearray().join(cadata), paths_checked
default_verify_paths = ssl.get_default_verify_paths()
default_capath = default_verify_paths.capath
paths_checked = {default_capath or default_verify_paths.cafile}
if capath:
paths_checked.add(capath)
system = to_text(platform.system(), errors='surrogate_or_strict')
# build a list of paths to check for .crt/.pem files
# based on the platform type
if system == u'Linux':
paths_checked.add('/etc/pki/ca-trust/extracted/pem')
paths_checked.add('/etc/pki/tls/certs')
paths_checked.add('/usr/share/ca-certificates/cacert.org')
elif system == u'FreeBSD':
paths_checked.add('/usr/local/share/certs')
elif system == u'OpenBSD':
paths_checked.add('/etc/ssl')
elif system == u'NetBSD':
paths_checked.add('/etc/openssl/certs')
elif system == u'SunOS':
paths_checked.add('/opt/local/etc/openssl/certs')
elif system == u'AIX':
paths_checked.add('/var/ssl/certs')
paths_checked.add('/opt/freeware/etc/ssl/certs')
elif system == u'Darwin':
paths_checked.add('/usr/local/etc/openssl')
# fall back to a user-deployed cert in a standard
# location if the OS platform one is not available
paths_checked.add('/etc/ansible')
# for all of the paths, find any .crt or .pem files
# and compile them into single temp file for use
# in the ssl check to speed up the test
for path in paths_checked:
if not path or path == default_capath or not os.path.isdir(path):
continue
for f in os.listdir(path):
full_path = os.path.join(path, f)
if os.path.isfile(full_path) and os.path.splitext(f)[1] in {'.pem', '.cer', '.crt'}:
try:
with open(full_path, 'r', errors='surrogateescape') as cert_file:
cert = cert_file.read()
try:
for pem in extract_pem_certs(cert):
b_der = ssl.PEM_cert_to_DER_cert(pem)
cadata[b_der] = None
except Exception:
continue
except (OSError, IOError):
pass
# paths_checked isn't used any more, but is kept just for ease of debugging
return bytearray().join(cadata), list(paths_checked)
def getpeercert(response, binary_form=False):
""" Attempt to get the peer certificate of the response from urlopen. """
socket = response.fp.raw._sock
try:
return socket.getpeercert(binary_form)
except AttributeError:
pass # Not HTTPS
def get_channel_binding_cert_hash(certificate_der):
""" Gets the channel binding app data for a TLS connection using the peer cert. """
if not HAS_CRYPTOGRAPHY:
return
# Logic documented in RFC 5929 section 4 https://tools.ietf.org/html/rfc5929#section-4
cert = x509.load_der_x509_certificate(certificate_der, default_backend())
hash_algorithm = None
try:
hash_algorithm = cert.signature_hash_algorithm
except UnsupportedAlgorithm:
pass
# If the signature hash algorithm is unknown/unsupported or md5/sha1 we must use SHA256.
if not hash_algorithm or hash_algorithm.name in ('md5', 'sha1'):
hash_algorithm = hashes.SHA256()
digest = hashes.Hash(hash_algorithm, default_backend())
digest.update(certificate_der)
return digest.finalize()
def rfc2822_date_string(timetuple, zone='-0000'):
"""Accepts a timetuple and optional zone which defaults to ``-0000``
and returns a date string as specified by RFC 2822, e.g.:
Fri, 09 Nov 2001 01:08:47 -0000
Copied from email.utils.formatdate and modified for separate use
"""
return '%s, %02d %s %04d %02d:%02d:%02d %s' % (
['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun'][timetuple[6]],
timetuple[2],
['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'][timetuple[1] - 1],
timetuple[0], timetuple[3], timetuple[4], timetuple[5],
zone)
class Request:
def __init__(self, headers=None, use_proxy=True, force=False, timeout=10, validate_certs=True,
url_username=None, url_password=None, http_agent=None, force_basic_auth=False,
follow_redirects='urllib2', client_cert=None, client_key=None, cookies=None, unix_socket=None,
ca_path=None, unredirected_headers=None, decompress=True, ciphers=None, use_netrc=True,
context=None):
"""This class works somewhat similarly to the ``Session`` class of from requests
by defining a cookiejar that can be used across requests as well as cascaded defaults that
can apply to repeated requests
For documentation of params, see ``Request.open``
>>> from ansible.module_utils.urls import Request
>>> r = Request()
>>> r.open('GET', 'http://httpbin.org/cookies/set?k1=v1').read()
'{\n "cookies": {\n "k1": "v1"\n }\n}\n'
>>> r = Request(url_username='user', url_password='passwd')
>>> r.open('GET', 'http://httpbin.org/basic-auth/user/passwd').read()
'{\n "authenticated": true, \n "user": "user"\n}\n'
>>> r = Request(headers=dict(foo='bar'))
>>> r.open('GET', 'http://httpbin.org/get', headers=dict(baz='qux')).read()
"""
self.headers = headers or {}
if not isinstance(self.headers, dict):
raise ValueError("headers must be a dict: %r" % self.headers)
self.use_proxy = use_proxy
self.force = force
self.timeout = timeout
self.validate_certs = validate_certs
self.url_username = url_username
self.url_password = url_password
self.http_agent = http_agent
self.force_basic_auth = force_basic_auth
self.follow_redirects = follow_redirects
self.client_cert = client_cert
self.client_key = client_key
self.unix_socket = unix_socket
self.ca_path = ca_path
self.unredirected_headers = unredirected_headers
self.decompress = decompress
self.ciphers = ciphers
self.use_netrc = use_netrc
self.context = context
if isinstance(cookies, cookiejar.CookieJar):
self.cookies = cookies
else:
self.cookies = cookiejar.CookieJar()
def _fallback(self, value, fallback):
if value is None:
return fallback
return value
def open(self, method, url, data=None, headers=None, use_proxy=None,
force=None, last_mod_time=None, timeout=None, validate_certs=None,
url_username=None, url_password=None, http_agent=None,
force_basic_auth=None, follow_redirects=None,
client_cert=None, client_key=None, cookies=None, use_gssapi=False,
unix_socket=None, ca_path=None, unredirected_headers=None, decompress=None,
ciphers=None, use_netrc=None, context=None):
"""
Sends a request via HTTP(S) or FTP using urllib (Python3)
Does not require the module environment
Returns :class:`HTTPResponse` object.
:arg method: method for the request
:arg url: URL to request
:kwarg data: (optional) bytes, or file-like object to send
in the body of the request
:kwarg headers: (optional) Dictionary of HTTP Headers to send with the
request
:kwarg use_proxy: (optional) Boolean of whether or not to use proxy
:kwarg force: (optional) Boolean of whether or not to set `cache-control: no-cache` header
:kwarg last_mod_time: (optional) Datetime object to use when setting If-Modified-Since header
:kwarg timeout: (optional) How long to wait for the server to send
data before giving up, as a float
:kwarg validate_certs: (optional) Boolean that controls whether we verify
the server's TLS certificate
:kwarg url_username: (optional) String of the user to use when authenticating
:kwarg url_password: (optional) String of the password to use when authenticating
:kwarg http_agent: (optional) String of the User-Agent to use in the request
:kwarg force_basic_auth: (optional) Boolean determining if auth header should be sent in the initial request
:kwarg follow_redirects: (optional) String of urllib2, all/yes, safe, none to determine how redirects are
followed, see HTTPRedirectHandler for more information
:kwarg client_cert: (optional) PEM formatted certificate chain file to be used for SSL client authentication.
This file can also include the key as well, and if the key is included, client_key is not required
:kwarg client_key: (optional) PEM formatted file that contains your private key to be used for SSL client
authentication. If client_cert contains both the certificate and key, this option is not required
:kwarg cookies: (optional) CookieJar object to send with the
request
:kwarg use_gssapi: (optional) Use a GSSAPI authentication handler for the request.
:kwarg unix_socket: (optional) String of file system path to unix socket file to use when establishing
connection to the provided url
:kwarg ca_path: (optional) String of file system path to CA cert bundle to use
:kwarg unredirected_headers: (optional) A list of headers to not attach on a redirected request
:kwarg decompress: (optional) Whether to attempt to decompress gzip content-encoded responses
:kwarg ciphers: (optional) List of ciphers to use
:kwarg use_netrc: (optional) Boolean determining whether to use credentials from ~/.netrc file
:kwarg context: (optional) ssl.Context object for SSL validation. When provided, all other SSL related
arguments are ignored. See make_context.
:returns: HTTPResponse. Added in Ansible 2.9
"""
if headers is None:
headers = {}
elif not isinstance(headers, dict):
raise ValueError("headers must be a dict")
headers = dict(self.headers, **headers)
use_proxy = self._fallback(use_proxy, self.use_proxy)
force = self._fallback(force, self.force)
timeout = self._fallback(timeout, self.timeout)
validate_certs = self._fallback(validate_certs, self.validate_certs)
url_username = self._fallback(url_username, self.url_username)
url_password = self._fallback(url_password, self.url_password)
http_agent = self._fallback(http_agent, self.http_agent)
force_basic_auth = self._fallback(force_basic_auth, self.force_basic_auth)
follow_redirects = self._fallback(follow_redirects, self.follow_redirects)
client_cert = self._fallback(client_cert, self.client_cert)
client_key = self._fallback(client_key, self.client_key)
cookies = self._fallback(cookies, self.cookies)
unix_socket = self._fallback(unix_socket, self.unix_socket)
ca_path = self._fallback(ca_path, self.ca_path)
unredirected_headers = self._fallback(unredirected_headers, self.unredirected_headers)
decompress = self._fallback(decompress, self.decompress)
ciphers = self._fallback(ciphers, self.ciphers)
use_netrc = self._fallback(use_netrc, self.use_netrc)
context = self._fallback(context, self.context)
handlers = []
if unix_socket:
handlers.append(UnixHTTPHandler(unix_socket))
parsed = urlparse(url)
if parsed.scheme != 'ftp':
username = url_username
password = url_password
if username:
netloc = parsed.netloc
elif '@' in parsed.netloc:
credentials, netloc = parsed.netloc.split('@', 1)
if ':' in credentials:
username, password = credentials.split(':', 1)
else:
username = credentials
password = ''
# reconstruct url without credentials
url = urlunparse(parsed._replace(netloc=netloc))
if use_gssapi:
if HTTPGSSAPIAuthHandler: # type: ignore[truthy-function]
handlers.append(HTTPGSSAPIAuthHandler(username, password))
else:
imp_err_msg = missing_required_lib('gssapi', reason='for use_gssapi=True',
url='https://pypi.org/project/gssapi/')
raise MissingModuleError(imp_err_msg, import_traceback=GSSAPI_IMP_ERR)
elif username and not force_basic_auth:
passman = urllib.request.HTTPPasswordMgrWithDefaultRealm()
# this creates a password manager
passman.add_password(None, netloc, username, password)
# because we have put None at the start it will always
# use this username/password combination for urls
# for which `theurl` is a super-url
authhandler = urllib.request.HTTPBasicAuthHandler(passman)
digest_authhandler = urllib.request.HTTPDigestAuthHandler(passman)
# create the AuthHandler
handlers.append(authhandler)
handlers.append(digest_authhandler)
elif username and force_basic_auth:
headers["Authorization"] = basic_auth_header(username, password)
elif use_netrc:
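# No explicit credentials were given; fall back to the netrc file
# ($NETRC if set, otherwise ~/.netrc) for this host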
try:
rc = netrc.netrc(os.environ.get('NETRC'))
login = rc.authenticators(parsed.hostname)
except IOError:
login = None
if login:
username, dummy, password = login
if username and password:
headers["Authorization"] = basic_auth_header(username, password)
if not use_proxy:
proxyhandler = urllib.request.ProxyHandler({})
handlers.append(proxyhandler)
if not context:
context = make_context(
cafile=ca_path,
ciphers=ciphers,
validate_certs=validate_certs,
client_cert=client_cert,
client_key=client_key,
)
if unix_socket:
ssl_handler = UnixHTTPSHandler(unix_socket=unix_socket, context=context)
else:
ssl_handler = urllib.request.HTTPSHandler(context=context)
handlers.append(ssl_handler)
handlers.append(HTTPRedirectHandler(follow_redirects))
# add some nicer cookie handling
if cookies is not None:
handlers.append(urllib.request.HTTPCookieProcessor(cookies))
opener = urllib.request.build_opener(*handlers)
urllib.request.install_opener(opener)
data = to_bytes(data, nonstring='passthru')
request = urllib.request.Request(url, data=data, method=method.upper())
# add the custom agent header, to help prevent issues
# with sites that block the default urllib agent string
if http_agent:
request.add_header('User-agent', http_agent)
# Cache control
# Either we directly force a cache refresh
if force:
request.add_header('cache-control', 'no-cache')
# or we do it if the original is more recent than our copy
elif last_mod_time:
tstamp = rfc2822_date_string(last_mod_time.timetuple(), 'GMT')
request.add_header('If-Modified-Since', tstamp)
# user defined headers now, which may override things we've set above
unredirected_headers = [h.lower() for h in (unredirected_headers or [])]
for header in headers:
if header.lower() in unredirected_headers:
request.add_unredirected_header(header, headers[header])
else:
request.add_header(header, headers[header])
r = urllib.request.urlopen(request, None, timeout)
if decompress and r.headers.get('content-encoding', '').lower() == 'gzip':
fp = GzipDecodedReader(r.fp)
r.fp = fp
# Content-Length does not match gzip decoded length
# Prevent ``r.read`` from stopping at Content-Length
r.length = None
return r
def get(self, url, **kwargs):
r"""Sends a GET request. Returns :class:`HTTPResponse` object.
:arg url: URL to request
:kwarg \*\*kwargs: Optional arguments that ``open`` takes.
:returns: HTTPResponse
"""
return self.open('GET', url, **kwargs)
def options(self, url, **kwargs):
r"""Sends a OPTIONS request. Returns :class:`HTTPResponse` object.
:arg url: URL to request
:kwarg \*\*kwargs: Optional arguments that ``open`` takes.
:returns: HTTPResponse
"""
return self.open('OPTIONS', url, **kwargs)
def head(self, url, **kwargs):
r"""Sends a HEAD request. Returns :class:`HTTPResponse` object.
:arg url: URL to request
:kwarg \*\*kwargs: Optional arguments that ``open`` takes.
:returns: HTTPResponse
"""
return self.open('HEAD', url, **kwargs)
def post(self, url, data=None, **kwargs):
r"""Sends a POST request. Returns :class:`HTTPResponse` object.
:arg url: URL to request.
:kwarg data: (optional) bytes, or file-like object to send in the body of the request.
:kwarg \*\*kwargs: Optional arguments that ``open`` takes.
:returns: HTTPResponse
"""
return self.open('POST', url, data=data, **kwargs)
def put(self, url, data=None, **kwargs):
r"""Sends a PUT request. Returns :class:`HTTPResponse` object.
:arg url: URL to request.
:kwarg data: (optional) bytes, or file-like object to send in the body of the request.
:kwarg \*\*kwargs: Optional arguments that ``open`` takes.
:returns: HTTPResponse
"""
return self.open('PUT', url, data=data, **kwargs)
def patch(self, url, data=None, **kwargs):
r"""Sends a PATCH request. Returns :class:`HTTPResponse` object.
:arg url: URL to request.
:kwarg data: (optional) bytes, or file-like object to send in the body of the request.
:kwarg \*\*kwargs: Optional arguments that ``open`` takes.
:returns: HTTPResponse
"""
return self.open('PATCH', url, data=data, **kwargs)
def delete(self, url, **kwargs):
r"""Sends a DELETE request. Returns :class:`HTTPResponse` object.
:arg url: URL to request
        :kwarg \*\*kwargs: Optional arguments that ``open`` takes.
:returns: HTTPResponse
"""
return self.open('DELETE', url, **kwargs)
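
# --- Illustrative sketch, not part of the original module ---
# A minimal example of driving the Request class directly. The endpoint is
# hypothetical; a real caller would also pick headers/cookies as needed.
def _example_request_usage():
    client = Request(timeout=5, validate_certs=True)
    resp = client.get('https://example.invalid/api')  # hypothetical URL
    # the response behaves like a urllib HTTPResponse: .code, .headers, .read()
    return resp.code, resp.read()
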
def open_url(url, data=None, headers=None, method=None, use_proxy=True,
force=False, last_mod_time=None, timeout=10, validate_certs=True,
url_username=None, url_password=None, http_agent=None,
force_basic_auth=False, follow_redirects='urllib2',
client_cert=None, client_key=None, cookies=None,
use_gssapi=False, unix_socket=None, ca_path=None,
unredirected_headers=None, decompress=True, ciphers=None, use_netrc=True):
'''
Sends a request via HTTP(S) or FTP using urllib (Python3)
Does not require the module environment
'''
method = method or ('POST' if data else 'GET')
return Request().open(method, url, data=data, headers=headers, use_proxy=use_proxy,
force=force, last_mod_time=last_mod_time, timeout=timeout, validate_certs=validate_certs,
url_username=url_username, url_password=url_password, http_agent=http_agent,
force_basic_auth=force_basic_auth, follow_redirects=follow_redirects,
client_cert=client_cert, client_key=client_key, cookies=cookies,
use_gssapi=use_gssapi, unix_socket=unix_socket, ca_path=ca_path,
unredirected_headers=unredirected_headers, decompress=decompress, ciphers=ciphers, use_netrc=use_netrc)
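
# --- Illustrative sketch, not part of the original module ---
# open_url() builds a throwaway Request() per call, so repeated calls share no
# cookies or defaults. Note that the method defaults to POST when data is
# supplied and GET otherwise. URL and payload here are hypothetical.
def _example_open_url_post():
    body = b'{"ping": true}'
    r = open_url('https://example.invalid/api', data=body,
                 headers={'Content-Type': 'application/json'})
    return r.code
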
def prepare_multipart(fields):
"""Takes a mapping, and prepares a multipart/form-data body
:arg fields: Mapping
:returns: tuple of (content_type, body) where ``content_type`` is
the ``multipart/form-data`` ``Content-Type`` header including
``boundary`` and ``body`` is the prepared bytestring body
Payload content from a file will be base64 encoded and will include
the appropriate ``Content-Transfer-Encoding`` and ``Content-Type``
headers.
Example:
{
"file1": {
"filename": "/bin/true",
"mime_type": "application/octet-stream"
},
"file2": {
"content": "text based file content",
"filename": "fake.txt",
"mime_type": "text/plain",
},
"text_form_field": "value"
}
"""
if not isinstance(fields, Mapping):
raise TypeError(
'Mapping is required, cannot be type %s' % fields.__class__.__name__
)
m = email.mime.multipart.MIMEMultipart('form-data')
for field, value in sorted(fields.items()):
if isinstance(value, str):
main_type = 'text'
sub_type = 'plain'
content = value
filename = None
elif isinstance(value, Mapping):
filename = value.get('filename')
content = value.get('content')
if not any((filename, content)):
raise ValueError('at least one of filename or content must be provided')
mime = value.get('mime_type')
if not mime:
try:
mime = mimetypes.guess_type(filename or '', strict=False)[0] or 'application/octet-stream'
except Exception:
mime = 'application/octet-stream'
main_type, sep, sub_type = mime.partition('/')
else:
raise TypeError(
'value must be a string, or mapping, cannot be type %s' % value.__class__.__name__
)
if not content and filename:
with open(to_bytes(filename, errors='surrogate_or_strict'), 'rb') as f:
part = email.mime.application.MIMEApplication(f.read())
del part['Content-Type']
part.add_header('Content-Type', '%s/%s' % (main_type, sub_type))
else:
part = email.mime.nonmultipart.MIMENonMultipart(main_type, sub_type)
part.set_payload(to_bytes(content))
part.add_header('Content-Disposition', 'form-data')
del part['MIME-Version']
part.set_param(
'name',
field,
header='Content-Disposition'
)
if filename:
part.set_param(
'filename',
to_native(os.path.basename(filename)),
header='Content-Disposition'
)
m.attach(part)
# Ensure headers are not split over multiple lines
# The HTTP policy also uses CRLF by default
b_data = m.as_bytes(policy=email.policy.HTTP)
del m
headers, sep, b_content = b_data.partition(b'\r\n\r\n')
del b_data
parser = email.parser.BytesHeaderParser().parsebytes
return (
parser(headers)['content-type'], # Message converts to native strings
b_content
)
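
# --- Illustrative sketch, not part of the original module ---
# prepare_multipart() pairs naturally with open_url()/fetch_url(): the returned
# content type (which carries the boundary) must be sent as the Content-Type
# header alongside the encoded body. Field names and URL are hypothetical.
def _example_prepare_multipart():
    content_type, body = prepare_multipart({
        'comment': 'uploaded by ansible',
        'upload': {
            'content': 'text based file content',
            'filename': 'fake.txt',
            'mime_type': 'text/plain',
        },
    })
    return open_url('https://example.invalid/upload', data=body,
                    headers={'Content-Type': content_type})
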
#
# Module-related functions
#
def basic_auth_header(username, password):
"""Takes a username and password and returns a byte string suitable for
using as value of an Authorization header to do basic auth.
"""
if password is None:
password = ''
return b"Basic %s" % base64.b64encode(to_bytes("%s:%s" % (username, password), errors='surrogate_or_strict'))
def url_argument_spec():
'''
Creates an argument spec that can be used with any module
that will be requesting content via urllib/urllib2
'''
return dict(
url=dict(type='str'),
force=dict(type='bool', default=False),
http_agent=dict(type='str', default='ansible-httpget'),
use_proxy=dict(type='bool', default=True),
validate_certs=dict(type='bool', default=True),
url_username=dict(type='str'),
url_password=dict(type='str', no_log=True),
force_basic_auth=dict(type='bool', default=False),
client_cert=dict(type='path'),
client_key=dict(type='path'),
use_gssapi=dict(type='bool', default=False),
)
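
# --- Illustrative sketch, not part of the original module ---
# Typical module-side usage: start from url_argument_spec() and extend it with
# module-specific options. Constructing AnsibleModule is only meaningful inside
# a real module run; the 'dest' option here is hypothetical.
def _example_module_spec():
    from ansible.module_utils.basic import AnsibleModule
    spec = url_argument_spec()
    spec.update(dest=dict(type='path', required=True))
    return AnsibleModule(argument_spec=spec)
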
def fetch_url(module, url, data=None, headers=None, method=None,
use_proxy=None, force=False, last_mod_time=None, timeout=10,
use_gssapi=False, unix_socket=None, ca_path=None, cookies=None, unredirected_headers=None,
decompress=True, ciphers=None, use_netrc=True):
"""Sends a request via HTTP(S) or FTP (needs the module as parameter)
    :arg module: The AnsibleModule (used to get username, password, and other settings; see below).
:arg url: The url to use.
:kwarg data: The data to be sent (in case of POST/PUT).
:kwarg headers: A dict with the request headers.
:kwarg method: "POST", "PUT", etc.
:kwarg use_proxy: (optional) whether or not to use proxy (Default: True)
:kwarg boolean force: If True: Do not get a cached copy (Default: False)
:kwarg last_mod_time: Default: None
:kwarg int timeout: Default: 10
:kwarg boolean use_gssapi: Default: False
:kwarg unix_socket: (optional) String of file system path to unix socket file to use when establishing
connection to the provided url
:kwarg ca_path: (optional) String of file system path to CA cert bundle to use
:kwarg cookies: (optional) CookieJar object to send with the request
:kwarg unredirected_headers: (optional) A list of headers to not attach on a redirected request
:kwarg decompress: (optional) Whether to attempt to decompress gzip content-encoded responses
    :kwarg ciphers: (optional) List of ciphers to use
:kwarg boolean use_netrc: (optional) If False: Ignores login and password in ~/.netrc file (Default: True)
    :returns: A tuple of (**response**, **info**). Use ``response.read()`` to read the data.
        The **info** contains the 'status' and other meta data. When an HTTPError (status >= 400)
        occurs, ``info['body']`` contains the error response data.
Example::
data={...}
resp, info = fetch_url(module,
"http://example.com",
data=module.jsonify(data),
headers={'Content-type': 'application/json'},
method="POST")
status_code = info["status"]
body = resp.read()
        if status_code >= 400:
body = info['body']
"""
if not HAS_GZIP:
module.fail_json(msg=GzipDecodedReader.missing_gzip_error())
# ensure we use proper tempdir
old_tempdir = tempfile.tempdir
tempfile.tempdir = module.tmpdir
# Get validate_certs from the module params
validate_certs = module.params.get('validate_certs', True)
if use_proxy is None:
use_proxy = module.params.get('use_proxy', True)
username = module.params.get('url_username', '')
password = module.params.get('url_password', '')
http_agent = module.params.get('http_agent', get_user_agent())
force_basic_auth = module.params.get('force_basic_auth', '')
follow_redirects = module.params.get('follow_redirects', 'urllib2')
client_cert = module.params.get('client_cert')
client_key = module.params.get('client_key')
use_gssapi = module.params.get('use_gssapi', use_gssapi)
if not isinstance(cookies, cookiejar.CookieJar):
cookies = cookiejar.CookieJar()
r = None
info = dict(url=url, status=-1)
try:
r = open_url(url, data=data, headers=headers, method=method,
use_proxy=use_proxy, force=force, last_mod_time=last_mod_time, timeout=timeout,
validate_certs=validate_certs, url_username=username,
url_password=password, http_agent=http_agent, force_basic_auth=force_basic_auth,
follow_redirects=follow_redirects, client_cert=client_cert,
client_key=client_key, cookies=cookies, use_gssapi=use_gssapi,
unix_socket=unix_socket, ca_path=ca_path, unredirected_headers=unredirected_headers,
decompress=decompress, ciphers=ciphers, use_netrc=use_netrc)
# Lowercase keys, to conform to py2 behavior
info.update({k.lower(): v for k, v in r.info().items()})
# Don't be lossy, append header values for duplicate headers
temp_headers = {}
for name, value in r.headers.items():
# The same as above, lower case keys to match py2 behavior, and create more consistent results
name = name.lower()
if name in temp_headers:
temp_headers[name] = ', '.join((temp_headers[name], value))
else:
temp_headers[name] = value
info.update(temp_headers)
# parse the cookies into a nice dictionary
cookie_list = []
cookie_dict = {}
# Python sorts cookies in order of most specific (ie. longest) path first. See ``CookieJar._cookie_attrs``
# Cookies with the same path are reversed from response order.
# This code makes no assumptions about that, and accepts the order given by python
for cookie in cookies:
cookie_dict[cookie.name] = cookie.value
cookie_list.append((cookie.name, cookie.value))
info['cookies_string'] = '; '.join('%s=%s' % c for c in cookie_list)
info['cookies'] = cookie_dict
# finally update the result with a message about the fetch
info.update(dict(msg="OK (%s bytes)" % r.headers.get('Content-Length', 'unknown'), url=r.geturl(), status=r.code))
except (ConnectionError, ValueError) as e:
module.fail_json(msg=to_native(e), **info)
except MissingModuleError as e:
module.fail_json(msg=to_text(e), exception=e.import_traceback)
except urllib.error.HTTPError as e:
r = e
try:
if e.fp is None:
# Certain HTTPError objects may not have the ability to call ``.read()`` on Python 3
# This is not handled gracefully in Python 3, and instead an exception is raised from
# tempfile, due to ``urllib.response.addinfourl`` not being initialized
raise AttributeError
body = e.read()
except AttributeError:
body = ''
else:
e.close()
# Try to add exception info to the output but don't fail if we can't
try:
# Lowercase keys, to conform to py2 behavior, so that py3 and py2 are predictable
info.update({k.lower(): v for k, v in e.info().items()})
except Exception:
pass
info.update({'msg': to_native(e), 'body': body, 'status': e.code})
except urllib.error.URLError as e:
code = int(getattr(e, 'code', -1))
info.update(dict(msg="Request failed: %s" % to_native(e), status=code))
except socket.error as e:
info.update(dict(msg="Connection failure: %s" % to_native(e), status=-1))
except http.client.BadStatusLine as e:
info.update(dict(msg="Connection failure: connection was closed before a valid response was received: %s" % to_native(e.line), status=-1))
except Exception as e:
info.update(dict(msg="An unknown error occurred: %s" % to_native(e), status=-1),
exception=traceback.format_exc())
finally:
tempfile.tempdir = old_tempdir
return r, info
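
# --- Illustrative sketch, not part of the original module ---
# fetch_url() never raises for HTTP errors; callers are expected to check
# info['status'] and read the error body from info['body']. The URL is
# hypothetical and 'module' is a real AnsibleModule instance.
def _example_fetch_url(module):
    resp, info = fetch_url(module, 'https://example.invalid/api',
                           headers={'Accept': 'application/json'})
    if info['status'] >= 400:
        module.fail_json(msg='request failed: %s' % info['msg'], status=info['status'])
    return resp.read()
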
def _suffixes(name):
"""A list of the final component's suffixes, if any."""
if name.endswith('.'):
return []
name = name.lstrip('.')
return ['.' + s for s in name.split('.')[1:]]
def _split_multiext(name, min=3, max=4, count=2):
"""Split a multi-part extension from a file name.
Returns '([name minus extension], extension)'.
Define the valid extension length (including the '.') with 'min' and 'max',
'count' sets the number of extensions, counting from the end, to evaluate.
Evaluation stops on the first file extension that is outside the min and max range.
If no valid extensions are found, the original ``name`` is returned
and ``extension`` is empty.
:arg name: File name or path.
:kwarg min: Minimum length of a valid file extension.
:kwarg max: Maximum length of a valid file extension.
:kwarg count: Number of suffixes from the end to evaluate.
"""
extension = ''
for i, sfx in enumerate(reversed(_suffixes(name))):
if i >= count:
break
if min <= len(sfx) <= max:
extension = '%s%s' % (sfx, extension)
            name = name[:-len(sfx)]  # remove the suffix itself; rstrip() would strip a character set
else:
# Stop on the first invalid extension
break
return name, extension
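
# Illustrative behavior of the helper above (with the suffix fix in place):
#   _split_multiext('archive.tar.gz') -> ('archive', '.tar.gz')
#   _split_multiext('notes.txt')      -> ('notes', '.txt')
#   _split_multiext('README')         -> ('README', '')
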
def fetch_file(module, url, data=None, headers=None, method=None,
use_proxy=True, force=False, last_mod_time=None, timeout=10,
unredirected_headers=None, decompress=True, ciphers=None):
'''Download and save a file via HTTP(S) or FTP (needs the module as parameter).
This is basically a wrapper around fetch_url().
    :arg module: The AnsibleModule (used to get username, password, and other settings; see below).
:arg url: The url to use.
:kwarg data: The data to be sent (in case of POST/PUT).
:kwarg headers: A dict with the request headers.
:kwarg method: "POST", "PUT", etc.
:kwarg boolean use_proxy: Default: True
:kwarg boolean force: If True: Do not get a cached copy (Default: False)
:kwarg last_mod_time: Default: None
:kwarg int timeout: Default: 10
:kwarg unredirected_headers: (optional) A list of headers to not attach on a redirected request
:kwarg decompress: (optional) Whether to attempt to decompress gzip content-encoded responses
:kwarg ciphers: (optional) List of ciphers to use
:returns: A string, the path to the downloaded file.
'''
# download file
bufsize = 65536
parts = urlparse(url)
file_prefix, file_ext = _split_multiext(os.path.basename(parts.path), count=2)
fetch_temp_file = tempfile.NamedTemporaryFile(dir=module.tmpdir, prefix=file_prefix, suffix=file_ext, delete=False)
module.add_cleanup_file(fetch_temp_file.name)
try:
rsp, info = fetch_url(module, url, data, headers, method, use_proxy, force, last_mod_time, timeout,
unredirected_headers=unredirected_headers, decompress=decompress, ciphers=ciphers)
if not rsp or (rsp.code and rsp.code >= 400):
module.fail_json(msg="Failure downloading %s, %s" % (url, info['msg']))
data = rsp.read(bufsize)
while data:
fetch_temp_file.write(data)
data = rsp.read(bufsize)
fetch_temp_file.close()
except Exception as e:
module.fail_json(msg="Failure downloading %s, %s" % (url, to_native(e)))
return fetch_temp_file.name
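
# --- Illustrative sketch, not part of the original module ---
# fetch_file() streams the download into the module's tmpdir and registers the
# temporary file for cleanup; the URL is hypothetical.
def _example_fetch_file(module):
    path = fetch_file(module, 'https://example.invalid/archive.tar.gz')
    return path
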
def get_user_agent():
"""Returns a user agent used by open_url"""
return u"ansible-httpget"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 82,057 |
Improve documentation of BECOME_ALLOW_SAME_USER
|
https://docs.ansible.com/ansible/latest/reference_appendices/config.html#become-allow-same-user
The documentation of BECOME_ALLOW_SAME_USER is quite ambiguous.
> This setting controls if become is skipped when remote user and become user are the same.
Does setting it to true, or to false, skip it? It can be read as "allowing to become the same user", or "allowing to run `become` on the same user".
> If executable, it will be run and the resulting stdout will be used as the password.
If *what* is executable? Can you set BECOME_ALLOW_SAME_USER to a file path pointing to an executable file? The next part seems to say no: `Type: boolean`
|
https://github.com/ansible/ansible/issues/82057
|
https://github.com/ansible/ansible/pull/82059
|
b34f4a559ff3b4521313f5832f93806d1db853c8
|
2908a2c32a81fca78277a22f15fa8e3abe75e092
| 2023-09-07T11:26:10Z |
python
| 2023-10-27T07:21:30Z |
lib/ansible/config/base.yml
|
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
---
ANSIBLE_HOME:
name: The Ansible home path
description:
- The default root path for Ansible config files on the controller.
default: ~/.ansible
env:
- name: ANSIBLE_HOME
ini:
- key: home
section: defaults
type: path
version_added: '2.14'
ANSIBLE_CONNECTION_PATH:
name: Path of ansible-connection script
default: null
description:
- Specify where to look for the ansible-connection script. This location will be checked before searching $PATH.
- If null, ansible will start with the same directory as the ansible script.
type: path
env: [{name: ANSIBLE_CONNECTION_PATH}]
ini:
- {key: ansible_connection_path, section: persistent_connection}
yaml: {key: persistent_connection.ansible_connection_path}
version_added: "2.8"
ANSIBLE_COW_SELECTION:
name: Cowsay filter selection
default: default
description: This allows you to choose a specific cowsay stencil for the banners or use 'random' to cycle through them.
env: [{name: ANSIBLE_COW_SELECTION}]
ini:
- {key: cow_selection, section: defaults}
ANSIBLE_COW_ACCEPTLIST:
name: Cowsay filter acceptance list
default: ['bud-frogs', 'bunny', 'cheese', 'daemon', 'default', 'dragon', 'elephant-in-snake', 'elephant', 'eyes', 'hellokitty', 'kitty', 'luke-koala', 'meow', 'milk', 'moofasa', 'moose', 'ren', 'sheep', 'small', 'stegosaurus', 'stimpy', 'supermilker', 'three-eyes', 'turkey', 'turtle', 'tux', 'udder', 'vader-koala', 'vader', 'www']
description: Accept a list of cowsay templates that are 'safe' to use, set to an empty list if you want to enable all installed templates.
env:
- name: ANSIBLE_COW_ACCEPTLIST
version_added: '2.11'
ini:
- key: cowsay_enabled_stencils
section: defaults
version_added: '2.11'
type: list
ANSIBLE_FORCE_COLOR:
name: Force color output
default: False
description: This option forces color mode even when running without a TTY or the "nocolor" setting is True.
env: [{name: ANSIBLE_FORCE_COLOR}]
ini:
- {key: force_color, section: defaults}
type: boolean
yaml: {key: display.force_color}
ANSIBLE_NOCOLOR:
name: Suppress color output
default: False
description: This setting allows suppressing colorizing output, which is used to give a better indication of failure and status information.
env:
- name: ANSIBLE_NOCOLOR
# this is generic convention for CLI programs
- name: NO_COLOR
version_added: '2.11'
ini:
- {key: nocolor, section: defaults}
type: boolean
yaml: {key: display.nocolor}
ANSIBLE_NOCOWS:
name: Suppress cowsay output
default: False
description: If you have cowsay installed but want to avoid the 'cows' (why????), use this.
env: [{name: ANSIBLE_NOCOWS}]
ini:
- {key: nocows, section: defaults}
type: boolean
yaml: {key: display.i_am_no_fun}
ANSIBLE_COW_PATH:
name: Set path to cowsay command
default: null
description: Specify a custom cowsay path or swap in your cowsay implementation of choice.
env: [{name: ANSIBLE_COW_PATH}]
ini:
- {key: cowpath, section: defaults}
type: string
yaml: {key: display.cowpath}
ANSIBLE_PIPELINING:
name: Connection pipelining
default: False
description:
- This is a global option, each connection plugin can override either by having more specific options or not supporting pipelining at all.
- Pipelining, if supported by the connection plugin, reduces the number of network operations required to execute a module on the remote server,
by executing many Ansible modules without actual file transfer.
- It can result in a very significant performance improvement when enabled.
- "However this conflicts with privilege escalation (become). For example, when using 'sudo:' operations you must first
disable 'requiretty' in /etc/sudoers on all managed hosts, which is why it is disabled by default."
- This setting will be disabled if ``ANSIBLE_KEEP_REMOTE_FILES`` is enabled.
env:
- name: ANSIBLE_PIPELINING
ini:
- section: defaults
key: pipelining
- section: connection
key: pipelining
type: boolean
ANY_ERRORS_FATAL:
name: Make Task failures fatal
default: False
description: Sets the default value for the any_errors_fatal keyword, if True, Task failures will be considered fatal errors.
env:
- name: ANSIBLE_ANY_ERRORS_FATAL
ini:
- section: defaults
key: any_errors_fatal
type: boolean
yaml: {key: errors.any_task_errors_fatal}
version_added: "2.4"
BECOME_ALLOW_SAME_USER:
name: Allow becoming the same user
default: False
  description:
    - This setting controls if become is skipped when the remote user and the become user are the same. In other words, root sudo to root.
    - When ``True``, the become method runs even though the remote user already matches the become user; when ``False`` (the default), become is skipped in that case as a redundant operation.
env: [{name: ANSIBLE_BECOME_ALLOW_SAME_USER}]
ini:
- {key: become_allow_same_user, section: privilege_escalation}
type: boolean
yaml: {key: privilege_escalation.become_allow_same_user}
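# Example (illustrative, not part of the original file): the setting above can
# be enabled in ansible.cfg as
#   [privilege_escalation]
#   become_allow_same_user = True
# or, for a single run, via the environment (playbook name hypothetical):
#   ANSIBLE_BECOME_ALLOW_SAME_USER=true ansible-playbook site.yml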
BECOME_PASSWORD_FILE:
name: Become password file
default: ~
description:
- 'The password file to use for the become plugin. ``--become-password-file``.'
- If executable, it will be run and the resulting stdout will be used as the password.
env: [{name: ANSIBLE_BECOME_PASSWORD_FILE}]
ini:
- {key: become_password_file, section: defaults}
type: path
version_added: '2.12'
AGNOSTIC_BECOME_PROMPT:
name: Display an agnostic become prompt
default: True
type: boolean
description: Display an agnostic become prompt instead of displaying a prompt containing the command line supplied become method.
env: [{name: ANSIBLE_AGNOSTIC_BECOME_PROMPT}]
ini:
- {key: agnostic_become_prompt, section: privilege_escalation}
yaml: {key: privilege_escalation.agnostic_become_prompt}
version_added: "2.5"
CACHE_PLUGIN:
name: Persistent Cache plugin
default: memory
description: Chooses which cache plugin to use, the default 'memory' is ephemeral.
env: [{name: ANSIBLE_CACHE_PLUGIN}]
ini:
- {key: fact_caching, section: defaults}
yaml: {key: facts.cache.plugin}
CACHE_PLUGIN_CONNECTION:
name: Cache Plugin URI
default: ~
description: Defines connection or path information for the cache plugin.
env: [{name: ANSIBLE_CACHE_PLUGIN_CONNECTION}]
ini:
- {key: fact_caching_connection, section: defaults}
yaml: {key: facts.cache.uri}
CACHE_PLUGIN_PREFIX:
name: Cache Plugin table prefix
default: ansible_facts
description: Prefix to use for cache plugin files/tables.
env: [{name: ANSIBLE_CACHE_PLUGIN_PREFIX}]
ini:
- {key: fact_caching_prefix, section: defaults}
yaml: {key: facts.cache.prefix}
CACHE_PLUGIN_TIMEOUT:
name: Cache Plugin expiration timeout
default: 86400
description: Expiration timeout for the cache plugin data.
env: [{name: ANSIBLE_CACHE_PLUGIN_TIMEOUT}]
ini:
- {key: fact_caching_timeout, section: defaults}
type: integer
yaml: {key: facts.cache.timeout}
COLLECTIONS_SCAN_SYS_PATH:
name: Scan PYTHONPATH for installed collections
description: A boolean to enable or disable scanning the sys.path for installed collections.
default: true
type: boolean
env:
- {name: ANSIBLE_COLLECTIONS_SCAN_SYS_PATH}
ini:
- {key: collections_scan_sys_path, section: defaults}
COLLECTIONS_PATHS:
name: An ordered list of root paths for loading installed Ansible collections content.
description: >
Colon-separated paths in which Ansible will search for collections content.
Collections must be in nested *subdirectories*, not directly in these directories.
For example, if ``COLLECTIONS_PATHS`` includes ``'{{ ANSIBLE_HOME ~ "/collections" }}'``,
and you want to add ``my.collection`` to that directory, it must be saved as
    ``'{{ ANSIBLE_HOME ~ "/collections/ansible_collections/my/collection" }}'``.
default: '{{ ANSIBLE_HOME ~ "/collections:/usr/share/ansible/collections" }}'
type: pathspec
env:
- name: ANSIBLE_COLLECTIONS_PATHS
deprecated:
why: does not fit var naming standard, use the singular form ANSIBLE_COLLECTIONS_PATH instead
version: "2.19"
- name: ANSIBLE_COLLECTIONS_PATH
version_added: '2.10'
ini:
- key: collections_paths
section: defaults
deprecated:
why: does not fit var naming standard, use the singular form collections_path instead
version: "2.19"
- key: collections_path
section: defaults
version_added: '2.10'
COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH:
name: Defines behavior when loading a collection that does not support the current Ansible version
description:
- When a collection is loaded that does not support the running Ansible version (with the collection metadata key `requires_ansible`).
env: [{name: ANSIBLE_COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH}]
ini: [{key: collections_on_ansible_version_mismatch, section: defaults}]
choices: &basic_error
error: issue a 'fatal' error and stop the play
warning: issue a warning but continue
ignore: just continue silently
default: warning
COLOR_CHANGED:
name: Color for 'changed' task status
default: yellow
description: Defines the color to use on 'Changed' task status.
env: [{name: ANSIBLE_COLOR_CHANGED}]
ini:
- {key: changed, section: colors}
COLOR_CONSOLE_PROMPT:
name: "Color for ansible-console's prompt task status"
default: white
description: Defines the default color to use for ansible-console.
env: [{name: ANSIBLE_COLOR_CONSOLE_PROMPT}]
ini:
- {key: console_prompt, section: colors}
version_added: "2.7"
COLOR_DEBUG:
name: Color for debug statements
default: dark gray
description: Defines the color to use when emitting debug messages.
env: [{name: ANSIBLE_COLOR_DEBUG}]
ini:
- {key: debug, section: colors}
COLOR_DEPRECATE:
name: Color for deprecation messages
default: purple
description: Defines the color to use when emitting deprecation messages.
env: [{name: ANSIBLE_COLOR_DEPRECATE}]
ini:
- {key: deprecate, section: colors}
COLOR_DIFF_ADD:
name: Color for diff added display
default: green
description: Defines the color to use when showing added lines in diffs.
env: [{name: ANSIBLE_COLOR_DIFF_ADD}]
ini:
- {key: diff_add, section: colors}
yaml: {key: display.colors.diff.add}
COLOR_DIFF_LINES:
name: Color for diff lines display
default: cyan
description: Defines the color to use when showing diffs.
env: [{name: ANSIBLE_COLOR_DIFF_LINES}]
ini:
- {key: diff_lines, section: colors}
COLOR_DIFF_REMOVE:
name: Color for diff removed display
default: red
description: Defines the color to use when showing removed lines in diffs.
env: [{name: ANSIBLE_COLOR_DIFF_REMOVE}]
ini:
- {key: diff_remove, section: colors}
COLOR_ERROR:
name: Color for error messages
default: red
description: Defines the color to use when emitting error messages.
env: [{name: ANSIBLE_COLOR_ERROR}]
ini:
- {key: error, section: colors}
yaml: {key: colors.error}
COLOR_HIGHLIGHT:
name: Color for highlighting
default: white
description: Defines the color to use for highlighting.
env: [{name: ANSIBLE_COLOR_HIGHLIGHT}]
ini:
- {key: highlight, section: colors}
COLOR_OK:
name: Color for 'ok' task status
default: green
description: Defines the color to use when showing 'OK' task status.
env: [{name: ANSIBLE_COLOR_OK}]
ini:
- {key: ok, section: colors}
COLOR_SKIP:
name: Color for 'skip' task status
default: cyan
description: Defines the color to use when showing 'Skipped' task status.
env: [{name: ANSIBLE_COLOR_SKIP}]
ini:
- {key: skip, section: colors}
COLOR_UNREACHABLE:
name: Color for 'unreachable' host state
default: bright red
description: Defines the color to use on 'Unreachable' status.
env: [{name: ANSIBLE_COLOR_UNREACHABLE}]
ini:
- {key: unreachable, section: colors}
COLOR_VERBOSE:
name: Color for verbose messages
default: blue
description: Defines the color to use when emitting verbose messages. In other words, those that show with '-v's.
env: [{name: ANSIBLE_COLOR_VERBOSE}]
ini:
- {key: verbose, section: colors}
COLOR_WARN:
name: Color for warning messages
default: bright purple
description: Defines the color to use when emitting warning messages.
env: [{name: ANSIBLE_COLOR_WARN}]
ini:
- {key: warn, section: colors}
CONNECTION_PASSWORD_FILE:
name: Connection password file
default: ~
description: 'The password file to use for the connection plugin. ``--connection-password-file``.'
env: [{name: ANSIBLE_CONNECTION_PASSWORD_FILE}]
ini:
- {key: connection_password_file, section: defaults}
type: path
version_added: '2.12'
COVERAGE_REMOTE_OUTPUT:
name: Sets the output directory and filename prefix to generate coverage run info.
description:
- Sets the output directory on the remote host to generate coverage reports into.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_OUTPUT}
vars:
- {name: _ansible_coverage_remote_output}
type: str
version_added: '2.9'
COVERAGE_REMOTE_PATHS:
name: Sets the list of paths to run coverage for.
description:
- A list of paths for files on the Ansible controller to run coverage for when executing on the remote host.
- Only files that match the path glob will have their coverage collected.
- Multiple path globs can be specified and are separated by ``:``.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
default: '*'
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_PATH_FILTER}
type: str
version_added: '2.9'
ACTION_WARNINGS:
name: Toggle action warnings
default: True
description:
    - By default, Ansible will issue a warning when one is received from a task action (module or action plugin).
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_ACTION_WARNINGS}]
ini:
- {key: action_warnings, section: defaults}
type: boolean
version_added: "2.5"
LOCALHOST_WARNING:
name: Warning when using implicit inventory with only localhost
default: True
description:
- By default, Ansible will issue a warning when there are no hosts in the
inventory.
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_LOCALHOST_WARNING}]
ini:
- {key: localhost_warning, section: defaults}
type: boolean
version_added: "2.6"
INVENTORY_UNPARSED_WARNING:
name: Warning when no inventory files can be parsed, resulting in an implicit inventory with only localhost
default: True
description:
- By default, Ansible will issue a warning when no inventory was loaded and notes that
it will use an implicit localhost-only inventory.
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_INVENTORY_UNPARSED_WARNING}]
ini:
- {key: inventory_unparsed_warning, section: inventory}
type: boolean
version_added: "2.14"
DOC_FRAGMENT_PLUGIN_PATH:
name: documentation fragment plugins path
default: '{{ ANSIBLE_HOME ~ "/plugins/doc_fragments:/usr/share/ansible/plugins/doc_fragments" }}'
description: Colon-separated paths in which Ansible will search for Documentation Fragments Plugins.
env: [{name: ANSIBLE_DOC_FRAGMENT_PLUGINS}]
ini:
- {key: doc_fragment_plugins, section: defaults}
type: pathspec
DEFAULT_ACTION_PLUGIN_PATH:
name: Action plugins path
default: '{{ ANSIBLE_HOME ~ "/plugins/action:/usr/share/ansible/plugins/action" }}'
description: Colon-separated paths in which Ansible will search for Action Plugins.
env: [{name: ANSIBLE_ACTION_PLUGINS}]
ini:
- {key: action_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.action.path}
DEFAULT_ALLOW_UNSAFE_LOOKUPS:
name: Allow unsafe lookups
default: False
description:
- "When enabled, this option allows lookup plugins (whether used in variables as ``{{lookup('foo')}}`` or as a loop as with_foo)
to return data that is not marked 'unsafe'."
- By default, such data is marked as unsafe to prevent the templating engine from evaluating any jinja2 templating language,
as this could represent a security risk. This option is provided to allow for backward compatibility,
however, users should first consider adding allow_unsafe=True to any lookups that may be expected to contain data that may be run
through the templating engine late.
env: []
ini:
- {key: allow_unsafe_lookups, section: defaults}
type: boolean
version_added: "2.2.3"
DEFAULT_ASK_PASS:
name: Ask for the login password
default: False
description:
- This controls whether an Ansible playbook should prompt for a login password.
If using SSH keys for authentication, you probably do not need to change this setting.
env: [{name: ANSIBLE_ASK_PASS}]
ini:
- {key: ask_pass, section: defaults}
type: boolean
yaml: {key: defaults.ask_pass}
DEFAULT_ASK_VAULT_PASS:
name: Ask for the vault password(s)
default: False
description:
- This controls whether an Ansible playbook should prompt for a vault password.
env: [{name: ANSIBLE_ASK_VAULT_PASS}]
ini:
- {key: ask_vault_pass, section: defaults}
type: boolean
DEFAULT_BECOME:
name: Enable privilege escalation (become)
default: False
description: Toggles the use of privilege escalation, allowing you to 'become' another user after login.
env: [{name: ANSIBLE_BECOME}]
ini:
- {key: become, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_ASK_PASS:
name: Ask for the privilege escalation (become) password
default: False
description: Toggle to prompt for privilege escalation password.
env: [{name: ANSIBLE_BECOME_ASK_PASS}]
ini:
- {key: become_ask_pass, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_METHOD:
name: Choose privilege escalation method
default: 'sudo'
description: Privilege escalation method to use when `become` is enabled.
env: [{name: ANSIBLE_BECOME_METHOD}]
ini:
- {section: privilege_escalation, key: become_method}
DEFAULT_BECOME_EXE:
name: Choose 'become' executable
default: ~
description: 'executable to use for privilege escalation, otherwise Ansible will depend on PATH.'
env: [{name: ANSIBLE_BECOME_EXE}]
ini:
- {key: become_exe, section: privilege_escalation}
DEFAULT_BECOME_FLAGS:
name: Set 'become' executable options
default: ''
description: Flags to pass to the privilege escalation executable.
env: [{name: ANSIBLE_BECOME_FLAGS}]
ini:
- {key: become_flags, section: privilege_escalation}
BECOME_PLUGIN_PATH:
name: Become plugins path
default: '{{ ANSIBLE_HOME ~ "/plugins/become:/usr/share/ansible/plugins/become" }}'
description: Colon-separated paths in which Ansible will search for Become Plugins.
env: [{name: ANSIBLE_BECOME_PLUGINS}]
ini:
- {key: become_plugins, section: defaults}
type: pathspec
version_added: "2.8"
DEFAULT_BECOME_USER:
# FIXME: should really be blank and make -u passing optional depending on it
name: Set the user you 'become' via privilege escalation
default: root
description: The user your login/remote user 'becomes' when using privilege escalation, most systems will use 'root' when no user is specified.
env: [{name: ANSIBLE_BECOME_USER}]
ini:
- {key: become_user, section: privilege_escalation}
yaml: {key: become.user}
DEFAULT_CACHE_PLUGIN_PATH:
name: Cache Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/cache:/usr/share/ansible/plugins/cache" }}'
description: Colon-separated paths in which Ansible will search for Cache Plugins.
env: [{name: ANSIBLE_CACHE_PLUGINS}]
ini:
- {key: cache_plugins, section: defaults}
type: pathspec
DEFAULT_CALLBACK_PLUGIN_PATH:
name: Callback Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/callback:/usr/share/ansible/plugins/callback" }}'
description: Colon-separated paths in which Ansible will search for Callback Plugins.
env: [{name: ANSIBLE_CALLBACK_PLUGINS}]
ini:
- {key: callback_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.callback.path}
CALLBACKS_ENABLED:
name: Enable callback plugins that require it.
default: []
description:
- "List of enabled callbacks, not all callbacks need enabling,
but many of those shipped with Ansible do as we don't want them activated by default."
env:
- name: ANSIBLE_CALLBACKS_ENABLED
version_added: '2.11'
ini:
- key: callbacks_enabled
section: defaults
version_added: '2.11'
type: list
DEFAULT_CLICONF_PLUGIN_PATH:
name: Cliconf Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/cliconf:/usr/share/ansible/plugins/cliconf" }}'
description: Colon-separated paths in which Ansible will search for Cliconf Plugins.
env: [{name: ANSIBLE_CLICONF_PLUGINS}]
ini:
- {key: cliconf_plugins, section: defaults}
type: pathspec
DEFAULT_CONNECTION_PLUGIN_PATH:
name: Connection Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/connection:/usr/share/ansible/plugins/connection" }}'
description: Colon-separated paths in which Ansible will search for Connection Plugins.
env: [{name: ANSIBLE_CONNECTION_PLUGINS}]
ini:
- {key: connection_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.connection.path}
DEFAULT_DEBUG:
name: Debug mode
default: False
description:
- "Toggles debug output in Ansible. This is *very* verbose and can hinder
multiprocessing. Debug output can also include secret information
despite no_log settings being enabled, which means debug mode should not be used in
production."
env: [{name: ANSIBLE_DEBUG}]
ini:
- {key: debug, section: defaults}
type: boolean
DEFAULT_EXECUTABLE:
name: Target shell executable
default: /bin/sh
description:
- "This indicates the command to use to spawn a shell under, which is required for Ansible's execution needs on a target.
Users may need to change this in rare instances when shell usage is constrained, but in most cases, it may be left as is."
env: [{name: ANSIBLE_EXECUTABLE}]
ini:
- {key: executable, section: defaults}
DEFAULT_FACT_PATH:
name: local fact path
description:
- "This option allows you to globally configure a custom path for 'local_facts' for the implied :ref:`ansible_collections.ansible.builtin.setup_module` task when using fact gathering."
- "If not set, it will fall back to the default from the ``ansible.builtin.setup`` module: ``/etc/ansible/facts.d``."
- "This does **not** affect user defined tasks that use the ``ansible.builtin.setup`` module."
- The real action being created by the implicit task is currently ``ansible.legacy.gather_facts`` module, which then calls the configured fact modules,
by default this will be ``ansible.builtin.setup`` for POSIX systems but other platforms might have different defaults.
env: [{name: ANSIBLE_FACT_PATH}]
ini:
- {key: fact_path, section: defaults}
type: string
deprecated:
# TODO: when removing set playbook/play.py to default=None
why: the module_defaults keyword is a more generic version and can apply to all calls to the
M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
version: "2.18"
alternatives: module_defaults
DEFAULT_FILTER_PLUGIN_PATH:
name: Jinja2 Filter Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/filter:/usr/share/ansible/plugins/filter" }}'
description: Colon-separated paths in which Ansible will search for Jinja2 Filter Plugins.
env: [{name: ANSIBLE_FILTER_PLUGINS}]
ini:
- {key: filter_plugins, section: defaults}
type: pathspec
DEFAULT_FORCE_HANDLERS:
name: Force handlers to run after failure
default: False
description:
- This option controls if notified handlers run on a host even if a failure occurs on that host.
- When false, the handlers will not run if a failure has occurred on a host.
- This can also be set per play or on the command line. See Handlers and Failure for more details.
env: [{name: ANSIBLE_FORCE_HANDLERS}]
ini:
- {key: force_handlers, section: defaults}
type: boolean
version_added: "1.9.1"
DEFAULT_FORKS:
name: Number of task forks
default: 5
description: Maximum number of forks Ansible will use to execute tasks on target hosts.
env: [{name: ANSIBLE_FORKS}]
ini:
- {key: forks, section: defaults}
type: integer
DEFAULT_GATHERING:
name: Gathering behaviour
default: 'implicit'
description:
- This setting controls the default policy of fact gathering (facts discovered about remote systems).
- "This option can be useful for those wishing to save fact gathering time. Both 'smart' and 'explicit' will use the cache plugin."
env: [{name: ANSIBLE_GATHERING}]
ini:
- key: gathering
section: defaults
version_added: "1.6"
choices:
implicit: "the cache plugin will be ignored and facts will be gathered per play unless 'gather_facts: False' is set."
explicit: facts will not be gathered unless directly requested in the play.
smart: each new host that has no facts discovered will be scanned, but if the same host is addressed in multiple plays it will not be contacted again in the run.
DEFAULT_GATHER_SUBSET:
name: Gather facts subset
description:
- Set the `gather_subset` option for the :ref:`ansible_collections.ansible.builtin.setup_module` task in the implicit fact gathering.
See the module documentation for specifics.
- "It does **not** apply to user defined ``ansible.builtin.setup`` tasks."
env: [{name: ANSIBLE_GATHER_SUBSET}]
ini:
- key: gather_subset
section: defaults
version_added: "2.1"
type: list
deprecated:
# TODO: when removing set playbook/play.py to default=None
why: the module_defaults keyword is a more generic version and can apply to all calls to the
M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
version: "2.18"
alternatives: module_defaults
DEFAULT_GATHER_TIMEOUT:
name: Gather facts timeout
description:
- Set the timeout in seconds for the implicit fact gathering, see the module documentation for specifics.
- "It does **not** apply to user defined :ref:`ansible_collections.ansible.builtin.setup_module` tasks."
env: [{name: ANSIBLE_GATHER_TIMEOUT}]
ini:
- {key: gather_timeout, section: defaults}
type: integer
deprecated:
# TODO: when removing set playbook/play.py to default=None
why: the module_defaults keyword is a more generic version and can apply to all calls to the
M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
version: "2.18"
alternatives: module_defaults
DEFAULT_HASH_BEHAVIOUR:
name: Hash merge behaviour
default: replace
type: string
choices:
replace: Any variable that is defined more than once is overwritten using the order from variable precedence rules (highest wins).
merge: Any dictionary variable will be recursively merged with new definitions across the different variable definition sources.
description:
- This setting controls how duplicate definitions of dictionary variables (aka hash, map, associative array) are handled in Ansible.
- This does not affect variables whose values are scalars (integers, strings) or arrays.
- "**WARNING**, changing this setting is not recommended as this is fragile and makes your content (plays, roles, collections) nonportable,
leading to continual confusion and misuse. Don't change this setting unless you think you have an absolute need for it."
- We recommend avoiding reusing variable names and relying on the ``combine`` filter and ``vars`` and ``varnames`` lookups
to create merged versions of the individual variables. In our experience, this is rarely needed and is a sign that too much
complexity has been introduced into the data structures and plays.
- For some uses you can also look into custom vars_plugins to merge on input, even substituting the default ``host_group_vars``
that is in charge of parsing the ``host_vars/`` and ``group_vars/`` directories. Most users of this setting are only interested in inventory scope,
but the setting itself affects all sources and makes debugging even harder.
- All playbooks and roles in the official examples repos assume the default for this setting.
- Changing the setting to ``merge`` applies across variable sources, but many sources will internally still overwrite the variables.
For example ``include_vars`` will dedupe variables internally before updating Ansible, with 'last defined' overwriting previous definitions in same file.
- The Ansible project recommends you **avoid ``merge`` for new projects.**
- It is the intention of the Ansible developers to eventually deprecate and remove this setting, but it is being kept as some users do heavily rely on it.
New projects should **avoid 'merge'**.
env: [{name: ANSIBLE_HASH_BEHAVIOUR}]
ini:
- {key: hash_behaviour, section: defaults}
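# Example (illustrative, not part of the original file): with a host_vars
# definition {'opts': {'a': 1}} and a group_vars definition {'opts': {'b': 2}},
# the default 'replace' keeps only the higher-precedence dict, {'opts': {'a': 1}},
# while 'merge' yields {'opts': {'a': 1, 'b': 2}}. The recommended explicit
# alternative is the combine filter, e.g. '{{ first | combine(second, recursive=True) }}'.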
DEFAULT_HOST_LIST:
name: Inventory Source
default: /etc/ansible/hosts
description: Comma-separated list of Ansible inventory sources
env:
- name: ANSIBLE_INVENTORY
expand_relative_paths: True
ini:
- key: inventory
section: defaults
type: pathlist
yaml: {key: defaults.inventory}
DEFAULT_HTTPAPI_PLUGIN_PATH:
name: HttpApi Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/httpapi:/usr/share/ansible/plugins/httpapi" }}'
description: Colon-separated paths in which Ansible will search for HttpApi Plugins.
env: [{name: ANSIBLE_HTTPAPI_PLUGINS}]
ini:
- {key: httpapi_plugins, section: defaults}
type: pathspec
DEFAULT_INTERNAL_POLL_INTERVAL:
name: Internal poll interval
default: 0.001
env: []
ini:
- {key: internal_poll_interval, section: defaults}
type: float
version_added: "2.2"
description:
- This sets the interval (in seconds) of Ansible internal processes polling each other.
Lower values improve performance with large playbooks at the expense of extra CPU load.
Higher values are more suitable for Ansible usage in automation scenarios
when UI responsiveness is not required but CPU usage might be a concern.
- "The default corresponds to the value hardcoded in Ansible <= 2.1"
DEFAULT_INVENTORY_PLUGIN_PATH:
name: Inventory Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/inventory:/usr/share/ansible/plugins/inventory" }}'
description: Colon-separated paths in which Ansible will search for Inventory Plugins.
env: [{name: ANSIBLE_INVENTORY_PLUGINS}]
ini:
- {key: inventory_plugins, section: defaults}
type: pathspec
DEFAULT_JINJA2_EXTENSIONS:
name: Enabled Jinja2 extensions
default: []
description:
- This is a developer-specific feature that allows enabling additional Jinja2 extensions.
- "See the Jinja2 documentation for details. If you do not know what these do, you probably don't need to change this setting :)"
env: [{name: ANSIBLE_JINJA2_EXTENSIONS}]
ini:
- {key: jinja2_extensions, section: defaults}
DEFAULT_JINJA2_NATIVE:
name: Use Jinja2's NativeEnvironment for templating
default: False
description: This option preserves variable types during template operations.
env: [{name: ANSIBLE_JINJA2_NATIVE}]
ini:
- {key: jinja2_native, section: defaults}
type: boolean
yaml: {key: jinja2_native}
version_added: 2.7
DEFAULT_KEEP_REMOTE_FILES:
name: Keep remote files
default: False
description:
- Enables/disables the cleaning up of the temporary files Ansible used to execute the tasks on the remote.
- If this option is enabled it will disable ``ANSIBLE_PIPELINING``.
env: [{name: ANSIBLE_KEEP_REMOTE_FILES}]
ini:
- {key: keep_remote_files, section: defaults}
type: boolean
DEFAULT_LIBVIRT_LXC_NOSECLABEL:
# TODO: move to plugin
name: No security label on Lxc
default: False
description:
- "This setting causes libvirt to connect to LXC containers by passing ``--noseclabel`` parameter to ``virsh`` command.
This is necessary when running on systems which do not have SELinux."
env:
- name: ANSIBLE_LIBVIRT_LXC_NOSECLABEL
ini:
- {key: libvirt_lxc_noseclabel, section: selinux}
type: boolean
version_added: "2.1"
DEFAULT_LOAD_CALLBACK_PLUGINS:
name: Load callbacks for adhoc
default: False
description:
- Controls whether callback plugins are loaded when running /usr/bin/ansible.
This may be used to log activity from the command line, send notifications, and so on.
Callback plugins are always loaded for ``ansible-playbook``.
env: [{name: ANSIBLE_LOAD_CALLBACK_PLUGINS}]
ini:
- {key: bin_ansible_callbacks, section: defaults}
type: boolean
version_added: "1.8"
DEFAULT_LOCAL_TMP:
name: Controller temporary directory
default: '{{ ANSIBLE_HOME ~ "/tmp" }}'
description: Temporary directory for Ansible to use on the controller.
env: [{name: ANSIBLE_LOCAL_TEMP}]
ini:
- {key: local_tmp, section: defaults}
type: tmppath
DEFAULT_LOG_PATH:
name: Ansible log file path
default: ~
  description: File to which Ansible will log on the controller. When empty, logging is disabled.
env: [{name: ANSIBLE_LOG_PATH}]
ini:
- {key: log_path, section: defaults}
type: path
DEFAULT_LOG_FILTER:
name: Name filters for python logger
default: []
description: List of logger names to filter out of the log file.
env: [{name: ANSIBLE_LOG_FILTER}]
ini:
- {key: log_filter, section: defaults}
type: list
DEFAULT_LOOKUP_PLUGIN_PATH:
name: Lookup Plugins Path
description: Colon-separated paths in which Ansible will search for Lookup Plugins.
default: '{{ ANSIBLE_HOME ~ "/plugins/lookup:/usr/share/ansible/plugins/lookup" }}'
env: [{name: ANSIBLE_LOOKUP_PLUGINS}]
ini:
- {key: lookup_plugins, section: defaults}
type: pathspec
yaml: {key: defaults.lookup_plugins}
DEFAULT_MANAGED_STR:
name: Ansible managed
default: 'Ansible managed'
description: Sets the macro for the 'ansible_managed' variable available for :ref:`ansible_collections.ansible.builtin.template_module` and :ref:`ansible_collections.ansible.windows.win_template_module`. This is only relevant to those two modules.
env: []
ini:
- {key: ansible_managed, section: defaults}
yaml: {key: defaults.ansible_managed}
DEFAULT_MODULE_ARGS:
name: Adhoc default arguments
default: ~
description:
- This sets the default arguments to pass to the ``ansible`` adhoc binary if no ``-a`` is specified.
env: [{name: ANSIBLE_MODULE_ARGS}]
ini:
- {key: module_args, section: defaults}
DEFAULT_MODULE_COMPRESSION:
name: Python module compression
default: ZIP_DEFLATED
description: Compression scheme to use when transferring Python modules to the target.
env: []
ini:
- {key: module_compression, section: defaults}
# vars:
# - name: ansible_module_compression
DEFAULT_MODULE_NAME:
name: Default adhoc module
default: command
description: "Module to use with the ``ansible`` AdHoc command, if none is specified via ``-m``."
env: []
ini:
- {key: module_name, section: defaults}
DEFAULT_MODULE_PATH:
name: Modules Path
description: Colon-separated paths in which Ansible will search for Modules.
default: '{{ ANSIBLE_HOME ~ "/plugins/modules:/usr/share/ansible/plugins/modules" }}'
env: [{name: ANSIBLE_LIBRARY}]
ini:
- {key: library, section: defaults}
type: pathspec
DEFAULT_MODULE_UTILS_PATH:
name: Module Utils Path
description: Colon-separated paths in which Ansible will search for Module utils files, which are shared by modules.
default: '{{ ANSIBLE_HOME ~ "/plugins/module_utils:/usr/share/ansible/plugins/module_utils" }}'
env: [{name: ANSIBLE_MODULE_UTILS}]
ini:
- {key: module_utils, section: defaults}
type: pathspec
DEFAULT_NETCONF_PLUGIN_PATH:
name: Netconf Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/netconf:/usr/share/ansible/plugins/netconf" }}'
description: Colon-separated paths in which Ansible will search for Netconf Plugins.
env: [{name: ANSIBLE_NETCONF_PLUGINS}]
ini:
- {key: netconf_plugins, section: defaults}
type: pathspec
DEFAULT_NO_LOG:
name: No log
default: False
description: "Toggle Ansible's display and logging of task details, mainly used to avoid security disclosures."
env: [{name: ANSIBLE_NO_LOG}]
ini:
- {key: no_log, section: defaults}
type: boolean
DEFAULT_NO_TARGET_SYSLOG:
name: No syslog on target
default: False
description:
    - Toggle Ansible logging to syslog on the target when it executes tasks. On Windows hosts, this will prevent
      newer style PowerShell modules from writing to the event log.
env: [{name: ANSIBLE_NO_TARGET_SYSLOG}]
ini:
- {key: no_target_syslog, section: defaults}
vars:
- name: ansible_no_target_syslog
version_added: '2.10'
type: boolean
yaml: {key: defaults.no_target_syslog}
DEFAULT_NULL_REPRESENTATION:
name: Represent a null
default: ~
description: What templating should return as a 'null' value. When not set it will let Jinja2 decide.
env: [{name: ANSIBLE_NULL_REPRESENTATION}]
ini:
- {key: null_representation, section: defaults}
type: raw
DEFAULT_POLL_INTERVAL:
name: Async poll interval
default: 15
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how often to check back on the status of those tasks when an explicit poll interval is not supplied.
The default is a reasonably moderate 15 seconds which is a tradeoff between checking in frequently and
providing a quick turnaround when something may have completed.
env: [{name: ANSIBLE_POLL_INTERVAL}]
ini:
- {key: poll_interval, section: defaults}
type: integer
DEFAULT_PRIVATE_KEY_FILE:
name: Private key file
default: ~
description:
- Option for connections using a certificate or key file to authenticate, rather than an agent or passwords,
you can set the default value here to avoid re-specifying ``--private-key`` with every invocation.
env: [{name: ANSIBLE_PRIVATE_KEY_FILE}]
ini:
- {key: private_key_file, section: defaults}
type: path
DEFAULT_PRIVATE_ROLE_VARS:
name: Private role variables
default: False
description:
- Makes role variables inaccessible from other roles.
- This was introduced as a way to reset role variables to default values if
a role is used more than once in a playbook.
    - Starting in version '2.17' M(ansible.builtin.include_role) and M(ansible.builtin.import_role) can override this via the C(public) parameter.
env: [{name: ANSIBLE_PRIVATE_ROLE_VARS}]
ini:
- {key: private_role_vars, section: defaults}
type: boolean
yaml: {key: defaults.private_role_vars}
DEFAULT_REMOTE_PORT:
name: Remote port
default: ~
description: Port to use in remote connections, when blank it will use the connection plugin default.
env: [{name: ANSIBLE_REMOTE_PORT}]
ini:
- {key: remote_port, section: defaults}
type: integer
yaml: {key: defaults.remote_port}
DEFAULT_REMOTE_USER:
name: Login/Remote User
description:
- Sets the login user for the target machines
- "When blank it uses the connection plugin's default, normally the user currently executing Ansible."
env: [{name: ANSIBLE_REMOTE_USER}]
ini:
- {key: remote_user, section: defaults}
DEFAULT_ROLES_PATH:
name: Roles path
default: '{{ ANSIBLE_HOME ~ "/roles:/usr/share/ansible/roles:/etc/ansible/roles" }}'
description: Colon-separated paths in which Ansible will search for Roles.
env: [{name: ANSIBLE_ROLES_PATH}]
expand_relative_paths: True
ini:
- {key: roles_path, section: defaults}
type: pathspec
yaml: {key: defaults.roles_path}
DEFAULT_SELINUX_SPECIAL_FS:
name: Problematic file systems
default: fuse, nfs, vboxsf, ramfs, 9p, vfat
description:
- "Some filesystems do not support safe operations and/or return inconsistent errors,
this setting makes Ansible 'tolerate' those in the list without causing fatal errors."
- Data corruption may occur and writes are not always verified when a filesystem is in the list.
env:
- name: ANSIBLE_SELINUX_SPECIAL_FS
version_added: "2.9"
ini:
- {key: special_context_filesystems, section: selinux}
type: list
DEFAULT_STDOUT_CALLBACK:
name: Main display callback plugin
default: default
description:
- "Set the main callback used to display Ansible output. You can only have one at a time."
- You can have many other callbacks, but just one can be in charge of stdout.
- See :ref:`callback_plugins` for a list of available options.
env: [{name: ANSIBLE_STDOUT_CALLBACK}]
ini:
- {key: stdout_callback, section: defaults}
EDITOR:
name: editor application to use
default: vi
description:
- for the cases in which Ansible needs to return a file within an editor, this chooses the application to use.
ini:
- section: defaults
key: editor
version_added: '2.15'
env:
- name: ANSIBLE_EDITOR
version_added: '2.15'
- name: EDITOR
ENABLE_TASK_DEBUGGER:
name: Whether to enable the task debugger
default: False
description:
- Whether or not to enable the task debugger. This was previously done as a strategy plugin.
- Now all strategy plugins can inherit this behavior. The debugger defaults to activating when
  a task fails or is unreachable. Use the debugger keyword for more flexibility.
type: boolean
env: [{name: ANSIBLE_ENABLE_TASK_DEBUGGER}]
ini:
- {key: enable_task_debugger, section: defaults}
version_added: "2.5"
TASK_DEBUGGER_IGNORE_ERRORS:
name: Whether a failed task with ignore_errors=True will still invoke the debugger
default: True
description:
- This option defines whether the task debugger will be invoked on a failed task when ignore_errors=True
is specified.
- True specifies that the debugger will honor ignore_errors, and False will not honor ignore_errors.
type: boolean
env: [{name: ANSIBLE_TASK_DEBUGGER_IGNORE_ERRORS}]
ini:
- {key: task_debugger_ignore_errors, section: defaults}
version_added: "2.7"
DEFAULT_STRATEGY:
name: Implied strategy
default: 'linear'
description: Set the default strategy used for plays.
env: [{name: ANSIBLE_STRATEGY}]
ini:
- {key: strategy, section: defaults}
version_added: "2.3"
DEFAULT_STRATEGY_PLUGIN_PATH:
name: Strategy Plugins Path
description: Colon-separated paths in which Ansible will search for Strategy Plugins.
default: '{{ ANSIBLE_HOME ~ "/plugins/strategy:/usr/share/ansible/plugins/strategy" }}'
env: [{name: ANSIBLE_STRATEGY_PLUGINS}]
ini:
- {key: strategy_plugins, section: defaults}
type: pathspec
DEFAULT_SU:
default: False
description: 'Toggle the use of "su" for tasks.'
env: [{name: ANSIBLE_SU}]
ini:
- {key: su, section: defaults}
type: boolean
yaml: {key: defaults.su}
DEFAULT_SYSLOG_FACILITY:
name: syslog facility
default: LOG_USER
description: Syslog facility to use when Ansible logs to the remote target.
env: [{name: ANSIBLE_SYSLOG_FACILITY}]
ini:
- {key: syslog_facility, section: defaults}
DEFAULT_TERMINAL_PLUGIN_PATH:
name: Terminal Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/terminal:/usr/share/ansible/plugins/terminal" }}'
description: Colon-separated paths in which Ansible will search for Terminal Plugins.
env: [{name: ANSIBLE_TERMINAL_PLUGINS}]
ini:
- {key: terminal_plugins, section: defaults}
type: pathspec
DEFAULT_TEST_PLUGIN_PATH:
name: Jinja2 Test Plugins Path
description: Colon-separated paths in which Ansible will search for Jinja2 Test Plugins.
default: '{{ ANSIBLE_HOME ~ "/plugins/test:/usr/share/ansible/plugins/test" }}'
env: [{name: ANSIBLE_TEST_PLUGINS}]
ini:
- {key: test_plugins, section: defaults}
type: pathspec
DEFAULT_TIMEOUT:
name: Connection timeout
default: 10
description: This is the default timeout for connection plugins to use.
env: [{name: ANSIBLE_TIMEOUT}]
ini:
- {key: timeout, section: defaults}
type: integer
DEFAULT_TRANSPORT:
name: Connection plugin
default: ssh
description:
- Can be any connection plugin available to your ansible installation.
- There is also a (DEPRECATED) special 'smart' option, that will toggle between 'ssh' and 'paramiko' depending on controller OS and ssh versions.
env: [{name: ANSIBLE_TRANSPORT}]
ini:
- {key: transport, section: defaults}
DEFAULT_UNDEFINED_VAR_BEHAVIOR:
name: Jinja2 fail on undefined
default: True
version_added: "1.3"
description:
- When True, this causes ansible templating to fail steps that reference undefined variable names (often the result of a typo).
- "Otherwise, any '{{ template_expression }}' that contains undefined variables will be rendered in a template or ansible action line exactly as written."
env: [{name: ANSIBLE_ERROR_ON_UNDEFINED_VARS}]
ini:
- {key: error_on_undefined_vars, section: defaults}
type: boolean
DEFAULT_VARS_PLUGIN_PATH:
name: Vars Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/vars:/usr/share/ansible/plugins/vars" }}'
description: Colon-separated paths in which Ansible will search for Vars Plugins.
env: [{name: ANSIBLE_VARS_PLUGINS}]
ini:
- {key: vars_plugins, section: defaults}
type: pathspec
# TODO: unused?
#DEFAULT_VAR_COMPRESSION_LEVEL:
# default: 0
# description: 'TODO: write it'
# env: [{name: ANSIBLE_VAR_COMPRESSION_LEVEL}]
# ini:
# - {key: var_compression_level, section: defaults}
# type: integer
# yaml: {key: defaults.var_compression_level}
DEFAULT_VAULT_ID_MATCH:
name: Force vault id match
default: False
description: 'If true, decrypting vaults with a vault id will only try the password from the matching vault-id.'
env: [{name: ANSIBLE_VAULT_ID_MATCH}]
ini:
- {key: vault_id_match, section: defaults}
yaml: {key: defaults.vault_id_match}
DEFAULT_VAULT_IDENTITY:
name: Vault id label
default: default
description: 'The label to use for the default vault id label in cases where a vault id label is not provided.'
env: [{name: ANSIBLE_VAULT_IDENTITY}]
ini:
- {key: vault_identity, section: defaults}
yaml: {key: defaults.vault_identity}
VAULT_ENCRYPT_SALT:
name: Vault salt to use for encryption
default: ~
description: 'The salt to use for the vault encryption. If it is not provided, a random salt will be used.'
env: [{name: ANSIBLE_VAULT_ENCRYPT_SALT}]
ini:
- {key: vault_encrypt_salt, section: defaults}
version_added: '2.15'
DEFAULT_VAULT_ENCRYPT_IDENTITY:
name: Vault id to use for encryption
description: 'The vault_id to use for encrypting by default. If multiple vault_ids are provided, this specifies which to use for encryption. The ``--encrypt-vault-id`` CLI option overrides the configured value.'
env: [{name: ANSIBLE_VAULT_ENCRYPT_IDENTITY}]
ini:
- {key: vault_encrypt_identity, section: defaults}
yaml: {key: defaults.vault_encrypt_identity}
DEFAULT_VAULT_IDENTITY_LIST:
name: Default vault ids
default: []
description: 'A list of vault-ids to use by default. Equivalent to multiple ``--vault-id`` args. Vault-ids are tried in order.'
env: [{name: ANSIBLE_VAULT_IDENTITY_LIST}]
ini:
- {key: vault_identity_list, section: defaults}
type: list
yaml: {key: defaults.vault_identity_list}
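# Illustrative only (not part of the shipped defaults): with hypothetical labels
# 'dev' and 'prod', an ansible.cfg entry of
#   [defaults]
#   vault_identity_list = dev@prompt, prod@~/.vault_pass
# is equivalent to running
#   ansible-playbook --vault-id dev@prompt --vault-id prod@~/.vault_pass site.yml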
DEFAULT_VAULT_PASSWORD_FILE:
name: Vault password file
default: ~
description:
- 'The vault password file to use. Equivalent to ``--vault-password-file`` or ``--vault-id``.'
- If executable, it will be run and the resulting stdout will be used as the password.
env: [{name: ANSIBLE_VAULT_PASSWORD_FILE}]
ini:
- {key: vault_password_file, section: defaults}
type: path
yaml: {key: defaults.vault_password_file}
DEFAULT_VERBOSITY:
name: Verbosity
default: 0
description: Sets the default verbosity, equivalent to the number of ``-v`` passed in the command line.
env: [{name: ANSIBLE_VERBOSITY}]
ini:
- {key: verbosity, section: defaults}
type: integer
DEPRECATION_WARNINGS:
name: Deprecation messages
default: True
description: "Toggle to control the showing of deprecation warnings"
env: [{name: ANSIBLE_DEPRECATION_WARNINGS}]
ini:
- {key: deprecation_warnings, section: defaults}
type: boolean
DEVEL_WARNING:
name: Running devel warning
default: True
description: Toggle to control showing warnings related to running devel.
env: [{name: ANSIBLE_DEVEL_WARNING}]
ini:
- {key: devel_warning, section: defaults}
type: boolean
DIFF_ALWAYS:
name: Show differences
default: False
description: Configuration toggle to tell modules to show differences when in 'changed' status, equivalent to ``--diff``.
env: [{name: ANSIBLE_DIFF_ALWAYS}]
ini:
- {key: always, section: diff}
type: bool
DIFF_CONTEXT:
name: Difference context
default: 3
description: Number of lines of context to show when displaying the differences between files.
env: [{name: ANSIBLE_DIFF_CONTEXT}]
ini:
- {key: context, section: diff}
type: integer
DISPLAY_ARGS_TO_STDOUT:
name: Show task arguments
default: False
description:
- "Normally ``ansible-playbook`` will print a header for each task that is run.
These headers will contain the name: field from the task if you specified one.
If you didn't then ``ansible-playbook`` uses the task's action to help you tell which task is presently running.
Sometimes you run many of the same action and so you want more information about the task to differentiate it from others of the same action.
If you set this variable to True in the config then ``ansible-playbook`` will also include the task's arguments in the header."
- "This setting defaults to False because there is a chance that you have sensitive values in your parameters and
you do not want those to be printed."
- "If you set this to True you should be sure that you have secured your environment's stdout
(no one can shoulder surf your screen and you aren't saving stdout to an insecure file) or
made sure that all of your playbooks explicitly added the ``no_log: True`` parameter to tasks that have sensitive values.
See :ref:`keep_secret_data` for more information."
env: [{name: ANSIBLE_DISPLAY_ARGS_TO_STDOUT}]
ini:
- {key: display_args_to_stdout, section: defaults}
type: boolean
version_added: "2.1"
DISPLAY_SKIPPED_HOSTS:
name: Show skipped results
default: True
description: "Toggle to control displaying skipped task/host entries in a task in the default callback."
env:
- name: ANSIBLE_DISPLAY_SKIPPED_HOSTS
ini:
- {key: display_skipped_hosts, section: defaults}
type: boolean
DOCSITE_ROOT_URL:
name: Root docsite URL
default: https://docs.ansible.com/ansible-core/
description: Root docsite URL used to generate docs URLs in warning/error text;
must be an absolute URL with a valid scheme and trailing slash.
ini:
- {key: docsite_root_url, section: defaults}
version_added: "2.8"
DUPLICATE_YAML_DICT_KEY:
name: Controls ansible behaviour when finding duplicate keys in YAML.
default: warn
description:
- By default, Ansible will issue a warning when a duplicate dict key is encountered in YAML.
- These warnings can be silenced by setting this option to C(ignore).
env: [{name: ANSIBLE_DUPLICATE_YAML_DICT_KEY}]
ini:
- {key: duplicate_dict_key, section: defaults}
type: string
choices: &basic_error
error: issue a 'fatal' error and stop the play
warn: issue a warning but continue
ignore: just continue silently
version_added: "2.9"
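# A minimal sketch (not from the original file) of YAML that triggers this setting;
# under default parsing the second 'port' wins, and the choice above decides whether
# that is a fatal error, a warning, or silently ignored:
#   server:
#     port: 80
#     port: 8080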
ERROR_ON_MISSING_HANDLER:
name: Missing handler error
default: True
description: "Toggle to allow missing handlers to become a warning instead of an error when notifying."
env: [{name: ANSIBLE_ERROR_ON_MISSING_HANDLER}]
ini:
- {key: error_on_missing_handler, section: defaults}
type: boolean
CONNECTION_FACTS_MODULES:
name: Map of connections to fact modules
default:
# use ansible.legacy names on unqualified facts modules to allow library/ overrides
asa: ansible.legacy.asa_facts
cisco.asa.asa: cisco.asa.asa_facts
eos: ansible.legacy.eos_facts
arista.eos.eos: arista.eos.eos_facts
frr: ansible.legacy.frr_facts
frr.frr.frr: frr.frr.frr_facts
ios: ansible.legacy.ios_facts
cisco.ios.ios: cisco.ios.ios_facts
iosxr: ansible.legacy.iosxr_facts
cisco.iosxr.iosxr: cisco.iosxr.iosxr_facts
junos: ansible.legacy.junos_facts
junipernetworks.junos.junos: junipernetworks.junos.junos_facts
nxos: ansible.legacy.nxos_facts
cisco.nxos.nxos: cisco.nxos.nxos_facts
vyos: ansible.legacy.vyos_facts
vyos.vyos.vyos: vyos.vyos.vyos_facts
exos: ansible.legacy.exos_facts
extreme.exos.exos: extreme.exos.exos_facts
slxos: ansible.legacy.slxos_facts
extreme.slxos.slxos: extreme.slxos.slxos_facts
voss: ansible.legacy.voss_facts
extreme.voss.voss: extreme.voss.voss_facts
ironware: ansible.legacy.ironware_facts
community.network.ironware: community.network.ironware_facts
description: "Which modules to run during a play's fact gathering stage based on connection"
type: dict
FACTS_MODULES:
name: Gather Facts Modules
default:
- smart
description:
- "Which modules to run during a play's fact gathering stage, using the default of 'smart' will try to figure it out based on connection type."
- "If you add your own modules but still want to use the default Ansible facts, you will want to include 'setup'
  or the corresponding network module in the list (if you add 'smart', Ansible will also figure it out)."
- "This does not affect explicit calls to the 'setup' module, but does always affect the 'gather_facts' action (implicit or explicit)."
env: [{name: ANSIBLE_FACTS_MODULES}]
ini:
- {key: facts_modules, section: defaults}
type: list
vars:
- name: ansible_facts_modules
GALAXY_IGNORE_CERTS:
name: Galaxy validate certs
description:
- If set to yes, ansible-galaxy will not validate TLS certificates.
This can be useful for testing against a server with a self-signed certificate.
env: [{name: ANSIBLE_GALAXY_IGNORE}]
ini:
- {key: ignore_certs, section: galaxy}
type: boolean
GALAXY_SERVER_TIMEOUT:
name: Default timeout to use for API calls
description:
- The default timeout for Galaxy API calls. Galaxy servers that don't configure a specific timeout will fall back to this value.
env: [{name: ANSIBLE_GALAXY_SERVER_TIMEOUT}]
default: 60
ini:
- {key: server_timeout, section: galaxy}
type: int
GALAXY_ROLE_SKELETON:
name: Galaxy role skeleton directory
description: Role skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy``/``ansible-galaxy role``, same as ``--role-skeleton``.
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON}]
ini:
- {key: role_skeleton, section: galaxy}
type: path
GALAXY_ROLE_SKELETON_IGNORE:
name: Galaxy role skeleton ignore
default: ["^.git$", "^.*/.git_keep$"]
description: patterns of files to ignore inside a Galaxy role or collection skeleton directory.
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON_IGNORE}]
ini:
- {key: role_skeleton_ignore, section: galaxy}
type: list
GALAXY_COLLECTION_SKELETON:
name: Galaxy collection skeleton directory
description: Collection skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy collection``, same as ``--collection-skeleton``.
env: [{name: ANSIBLE_GALAXY_COLLECTION_SKELETON}]
ini:
- {key: collection_skeleton, section: galaxy}
type: path
GALAXY_COLLECTION_SKELETON_IGNORE:
name: Galaxy collection skeleton ignore
default: ["^.git$", "^.*/.git_keep$"]
description: patterns of files to ignore inside a Galaxy collection skeleton directory.
env: [{name: ANSIBLE_GALAXY_COLLECTION_SKELETON_IGNORE}]
ini:
- {key: collection_skeleton_ignore, section: galaxy}
type: list
GALAXY_COLLECTIONS_PATH_WARNING:
name: "ansible-galaxy collection install collections path warnings"
description: "whether ``ansible-galaxy collection install`` should warn about ``--collections-path`` missing from configured :ref:`collections_paths`."
default: true
type: bool
env: [{name: ANSIBLE_GALAXY_COLLECTIONS_PATH_WARNING}]
ini:
- {key: collections_path_warning, section: galaxy}
version_added: "2.16"
# TODO: unused?
#GALAXY_SCMS:
# name: Galaxy SCMS
# default: git, hg
# description: Available galaxy source control management systems.
# env: [{name: ANSIBLE_GALAXY_SCMS}]
# ini:
# - {key: scms, section: galaxy}
# type: list
GALAXY_SERVER:
default: https://galaxy.ansible.com
description: "URL to prepend when roles don't specify the full URI; assume they are referencing this server as the source."
env: [{name: ANSIBLE_GALAXY_SERVER}]
ini:
- {key: server, section: galaxy}
yaml: {key: galaxy.server}
GALAXY_SERVER_LIST:
description:
- A list of Galaxy servers to use when installing a collection.
- The value corresponds to the config ini header ``[galaxy_server.{{item}}]`` which defines the server details.
- 'See :ref:`galaxy_server_config` for more details on how to define a Galaxy server.'
- The order of servers in this list is used as the order in which a collection is resolved.
- Setting this config option will ignore the :ref:`galaxy_server` config option.
env: [{name: ANSIBLE_GALAXY_SERVER_LIST}]
ini:
- {key: server_list, section: galaxy}
type: list
version_added: "2.9"
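# Illustrative ini layout (the server names are hypothetical) matching the
# [galaxy_server.{{item}}] convention described above:
#   [galaxy]
#   server_list = release_galaxy, my_private_hub
#
#   [galaxy_server.release_galaxy]
#   url = https://galaxy.ansible.com/
#
#   [galaxy_server.my_private_hub]
#   url = https://hub.example.com/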
GALAXY_TOKEN_PATH:
default: '{{ ANSIBLE_HOME ~ "/galaxy_token" }}'
description: "Local path to galaxy access token file"
env: [{name: ANSIBLE_GALAXY_TOKEN_PATH}]
ini:
- {key: token_path, section: galaxy}
type: path
version_added: "2.9"
GALAXY_DISPLAY_PROGRESS:
default: ~
description:
- Some steps in ``ansible-galaxy`` display a progress wheel which can cause issues on certain displays or when
outputting the stdout to a file.
- This config option controls whether the display wheel is shown or not.
- The default is to show the display wheel if stdout has a tty.
env: [{name: ANSIBLE_GALAXY_DISPLAY_PROGRESS}]
ini:
- {key: display_progress, section: galaxy}
type: bool
version_added: "2.10"
GALAXY_CACHE_DIR:
default: '{{ ANSIBLE_HOME ~ "/galaxy_cache" }}'
description:
- The directory that stores cached responses from a Galaxy server.
- This is only used by the ``ansible-galaxy collection install`` and ``download`` commands.
- Cache files inside this dir will be ignored if they are world writable.
env:
- name: ANSIBLE_GALAXY_CACHE_DIR
ini:
- section: galaxy
key: cache_dir
type: path
version_added: '2.11'
GALAXY_DISABLE_GPG_VERIFY:
default: false
type: bool
env:
- name: ANSIBLE_GALAXY_DISABLE_GPG_VERIFY
ini:
- section: galaxy
key: disable_gpg_verify
description:
- Disable GPG signature verification during collection installation.
version_added: '2.13'
GALAXY_GPG_KEYRING:
type: path
env:
- name: ANSIBLE_GALAXY_GPG_KEYRING
ini:
- section: galaxy
key: gpg_keyring
description:
- Configure the keyring used for GPG signature verification during collection installation and verification.
version_added: '2.13'
GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES:
type: list
env:
- name: ANSIBLE_GALAXY_IGNORE_SIGNATURE_STATUS_CODES
ini:
- section: galaxy
key: ignore_signature_status_codes
description:
- A list of GPG status codes to ignore during GPG signature verification.
See L(https://github.com/gpg/gnupg/blob/master/doc/DETAILS#general-status-codes) for status code descriptions.
- If fewer signatures successfully verify the collection than `GALAXY_REQUIRED_VALID_SIGNATURE_COUNT`,
signature verification will fail even if all error codes are ignored.
choices:
- EXPSIG
- EXPKEYSIG
- REVKEYSIG
- BADSIG
- ERRSIG
- NO_PUBKEY
- MISSING_PASSPHRASE
- BAD_PASSPHRASE
- NODATA
- UNEXPECTED
- ERROR
- FAILURE
- BADARMOR
- KEYEXPIRED
- KEYREVOKED
- NO_SECKEY
GALAXY_REQUIRED_VALID_SIGNATURE_COUNT:
type: str
default: 1
env:
- name: ANSIBLE_GALAXY_REQUIRED_VALID_SIGNATURE_COUNT
ini:
- section: galaxy
key: required_valid_signature_count
description:
- The number of signatures that must be successful during GPG signature verification while installing or verifying collections.
- This should be a positive integer, or 'all' to indicate that all signatures must successfully validate the collection.
- Prepend '+' to the value to fail if no valid signatures are found for the collection.
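# Illustrative values only: required_valid_signature_count = 2 requires two valid
# signatures, 'all' requires every provided signature to validate, and '+1' also
# fails when no valid signatures are found at all.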
HOST_KEY_CHECKING:
# NOTE: constant not in use by ssh/paramiko plugins anymore, but they do support the same configuration sources
# TODO: check non ssh connection plugins for use/migration
name: Toggle host/key check
default: True
description:
- Set this to "False" if you want to avoid host key checking by the underlying connection plugin Ansible uses to connect to the host.
- Please read the documentation of the specific connection plugin used for details.
env: [{name: ANSIBLE_HOST_KEY_CHECKING}]
ini:
- {key: host_key_checking, section: defaults}
type: boolean
HOST_PATTERN_MISMATCH:
name: Control host pattern mismatch behaviour
default: 'warning'
description: This setting changes the behaviour of mismatched host patterns; it allows you to force a fatal error, a warning, or to just ignore it.
env: [{name: ANSIBLE_HOST_PATTERN_MISMATCH}]
ini:
- {key: host_pattern_mismatch, section: inventory}
choices:
<<: *basic_error
version_added: "2.8"
INTERPRETER_PYTHON:
name: Python interpreter path (or automatic discovery behavior) used for module execution
default: auto
env: [{name: ANSIBLE_PYTHON_INTERPRETER}]
ini:
- {key: interpreter_python, section: defaults}
vars:
- {name: ansible_python_interpreter}
version_added: "2.8"
description:
- Path to the Python interpreter to be used for module execution on remote targets, or an automatic discovery mode.
Supported discovery modes are ``auto`` (the default), ``auto_silent``, ``auto_legacy``, and ``auto_legacy_silent``.
All discovery modes employ a lookup table to use the included system Python (on distributions known to include one),
falling back to a fixed ordered list of well-known Python interpreter locations if a platform-specific default is not
available. The fallback behavior will issue a warning that the interpreter should be set explicitly (since interpreters
installed later may change which one is used). This warning behavior can be disabled by setting ``auto_silent`` or
``auto_legacy_silent``. The value of ``auto_legacy`` provides all the same behavior, but for backward-compatibility
with older Ansible releases that always defaulted to ``/usr/bin/python``, will use that interpreter if present.
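# A minimal sketch (not part of this file) of pinning the interpreter instead of
# relying on discovery; the path below is an example and varies per platform.
# In ansible.cfg:
#   [defaults]
#   interpreter_python = /usr/bin/python3
# Or per host, via the variable documented above:
#   myhost ansible_python_interpreter=/usr/bin/python3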
_INTERPRETER_PYTHON_DISTRO_MAP:
name: Mapping of known included platform pythons for various Linux distros
default:
redhat:
'6': /usr/bin/python
'8': /usr/libexec/platform-python
'9': /usr/bin/python3
debian:
'8': /usr/bin/python
'10': /usr/bin/python3
fedora:
'23': /usr/bin/python3
ubuntu:
'14': /usr/bin/python
'16': /usr/bin/python3
version_added: "2.8"
# FUTURE: add inventory override once we're sure it can't be abused by a rogue target
# FUTURE: add a platform layer to the map so we could use for, eg, freebsd/macos/etc?
INTERPRETER_PYTHON_FALLBACK:
name: Ordered list of Python interpreters to check for in discovery
default:
- python3.12
- python3.11
- python3.10
- python3.9
- python3.8
- python3.7
- /usr/bin/python3
- /usr/libexec/platform-python
- /usr/bin/python
- python
vars:
- name: ansible_interpreter_python_fallback
type: list
version_added: "2.8"
TRANSFORM_INVALID_GROUP_CHARS:
name: Transform invalid characters in group names
default: 'never'
description:
- Make ansible transform invalid characters in group names supplied by inventory sources.
env: [{name: ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS}]
ini:
- {key: force_valid_group_names, section: defaults}
type: string
choices:
always: it will replace any invalid characters with '_' (underscore) and warn the user
never: it will allow for the group name but warn about the issue
ignore: it does the same as 'never', without issuing a warning
silently: it does the same as 'always', without issuing a warning
version_added: '2.8'
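# Illustrative effect (the group name is made up): with 'always' or 'silently', an
# inventory group named 'web-servers.prod' becomes 'web_servers_prod'; with 'never'
# or 'ignore' the original name is kept as-is.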
INVALID_TASK_ATTRIBUTE_FAILED:
name: Controls whether invalid attributes for a task result in errors instead of warnings
default: True
description: If 'false', invalid attributes for a task will result in warnings instead of errors.
type: boolean
env:
- name: ANSIBLE_INVALID_TASK_ATTRIBUTE_FAILED
ini:
- key: invalid_task_attribute_failed
section: defaults
version_added: "2.7"
INVENTORY_ANY_UNPARSED_IS_FAILED:
name: Controls whether any unparsable inventory source is a fatal error
default: False
description: >
If 'true', it is a fatal error when any given inventory source
cannot be successfully parsed by any available inventory plugin;
otherwise, this situation only attracts a warning.
type: boolean
env: [{name: ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED}]
ini:
- {key: any_unparsed_is_failed, section: inventory}
version_added: "2.7"
INVENTORY_CACHE_ENABLED:
name: Inventory caching enabled
default: False
description:
- Toggle to turn on inventory caching.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE}]
ini:
- {key: cache, section: inventory}
type: bool
INVENTORY_CACHE_PLUGIN:
name: Inventory cache plugin
description:
- The plugin for caching inventory.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN}]
ini:
- {key: cache_plugin, section: inventory}
INVENTORY_CACHE_PLUGIN_CONNECTION:
name: Inventory cache plugin URI to override the defaults section
description:
- The inventory cache connection.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE_CONNECTION}]
ini:
- {key: cache_connection, section: inventory}
INVENTORY_CACHE_PLUGIN_PREFIX:
name: Inventory cache plugin table prefix
description:
- The table prefix for the cache plugin.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN_PREFIX}]
default: ansible_inventory_
ini:
- {key: cache_prefix, section: inventory}
INVENTORY_CACHE_TIMEOUT:
name: Inventory cache plugin expiration timeout
description:
- Expiration timeout for the inventory cache plugin data.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
default: 3600
env: [{name: ANSIBLE_INVENTORY_CACHE_TIMEOUT}]
ini:
- {key: cache_timeout, section: inventory}
INVENTORY_ENABLED:
name: Active Inventory plugins
default: ['host_list', 'script', 'auto', 'yaml', 'ini', 'toml']
description: List of enabled inventory plugins; it also determines the order in which they are used.
env: [{name: ANSIBLE_INVENTORY_ENABLED}]
ini:
- {key: enable_plugins, section: inventory}
type: list
INVENTORY_EXPORT:
name: Set ansible-inventory into export mode
default: False
description: Controls whether ansible-inventory accurately reflects Ansible's view into inventory or if it is optimized for exporting.
env: [{name: ANSIBLE_INVENTORY_EXPORT}]
ini:
- {key: export, section: inventory}
type: bool
INVENTORY_IGNORE_EXTS:
name: Inventory ignore extensions
default: "{{(REJECT_EXTS + ('.orig', '.ini', '.cfg', '.retry'))}}"
description: List of extensions to ignore when using a directory as an inventory source.
env: [{name: ANSIBLE_INVENTORY_IGNORE}]
ini:
- {key: inventory_ignore_extensions, section: defaults}
- {key: ignore_extensions, section: inventory}
type: list
INVENTORY_IGNORE_PATTERNS:
name: Inventory ignore patterns
default: []
description: List of patterns to ignore when using a directory as an inventory source.
env: [{name: ANSIBLE_INVENTORY_IGNORE_REGEX}]
ini:
- {key: inventory_ignore_patterns, section: defaults}
- {key: ignore_patterns, section: inventory}
type: list
INVENTORY_UNPARSED_IS_FAILED:
name: Unparsed Inventory failure
default: False
description: >
If 'true' it is a fatal error if every single potential inventory
source fails to parse, otherwise, this situation will only attract a
warning.
env: [{name: ANSIBLE_INVENTORY_UNPARSED_FAILED}]
ini:
- {key: unparsed_is_failed, section: inventory}
type: bool
MAX_FILE_SIZE_FOR_DIFF:
name: Diff maximum file size
default: 104448
description: Maximum size of files to be considered for diff display.
env: [{name: ANSIBLE_MAX_DIFF_SIZE}]
ini:
- {key: max_diff_size, section: defaults}
type: int
NETWORK_GROUP_MODULES:
name: Network module families
default: [eos, nxos, ios, iosxr, junos, enos, ce, vyos, sros, dellos9, dellos10, dellos6, asa, aruba, aireos, bigip, ironware, onyx, netconf, exos, voss, slxos]
description: 'TODO: write it'
env:
- name: ANSIBLE_NETWORK_GROUP_MODULES
ini:
- {key: network_group_modules, section: defaults}
type: list
yaml: {key: defaults.network_group_modules}
INJECT_FACTS_AS_VARS:
default: True
description:
- Facts are available inside the `ansible_facts` variable; this setting also pushes them as their own vars in the main namespace.
- Unlike inside the `ansible_facts` dictionary, these will have an `ansible_` prefix.
env: [{name: ANSIBLE_INJECT_FACT_VARS}]
ini:
- {key: inject_facts_as_vars, section: defaults}
type: boolean
version_added: "2.5"
MODULE_IGNORE_EXTS:
name: Module ignore extensions
default: "{{(REJECT_EXTS + ('.yaml', '.yml', '.ini'))}}"
description:
- List of extensions to ignore when looking for modules to load.
- This is for rejecting script and binary module fallback extensions.
env: [{name: ANSIBLE_MODULE_IGNORE_EXTS}]
ini:
- {key: module_ignore_exts, section: defaults}
type: list
MODULE_STRICT_UTF8_RESPONSE:
name: Module strict UTF-8 response
description:
- Controls whether module responses are evaluated for containing non-UTF-8 data.
- Disabling this may result in unexpected behavior.
- Only ansible-core should evaluate this configuration.
env: [{name: ANSIBLE_MODULE_STRICT_UTF8_RESPONSE}]
ini:
- {key: module_strict_utf8_response, section: defaults}
type: bool
default: True
OLD_PLUGIN_CACHE_CLEARING:
description: Previously Ansible would only clear some of the plugin loading caches when loading new roles; this led to some behaviors in which a plugin loaded in previous plays would be unexpectedly 'sticky'. This setting allows the user to return to that behavior.
env: [{name: ANSIBLE_OLD_PLUGIN_CACHE_CLEAR}]
ini:
- {key: old_plugin_cache_clear, section: defaults}
type: boolean
default: False
version_added: "2.8"
PAGER:
name: pager application to use
default: less
description:
- for the cases in which Ansible needs to return output in a pageable fashion, this chooses the application to use.
ini:
- section: defaults
key: pager
version_added: '2.15'
env:
- name: ANSIBLE_PAGER
version_added: '2.15'
- name: PAGER
PARAMIKO_HOST_KEY_AUTO_ADD:
default: False
description: 'TODO: write it'
env: [{name: ANSIBLE_PARAMIKO_HOST_KEY_AUTO_ADD}]
ini:
- {key: host_key_auto_add, section: paramiko_connection}
type: boolean
deprecated:
why: This option was moved to the plugin itself
version: "2.20"
alternatives: Use the option from the plugin itself.
PARAMIKO_LOOK_FOR_KEYS:
name: look for keys
default: True
description: 'TODO: write it'
env: [{name: ANSIBLE_PARAMIKO_LOOK_FOR_KEYS}]
ini:
- {key: look_for_keys, section: paramiko_connection}
type: boolean
deprecated:
why: This option was moved to the plugin itself
version: "2.20"
alternatives: Use the option from the plugin itself.
PERSISTENT_CONTROL_PATH_DIR:
name: Persistence socket path
default: '{{ ANSIBLE_HOME ~ "/pc" }}'
description: Path to the socket to be used by the connection persistence system.
env: [{name: ANSIBLE_PERSISTENT_CONTROL_PATH_DIR}]
ini:
- {key: control_path_dir, section: persistent_connection}
type: path
PERSISTENT_CONNECT_TIMEOUT:
name: Persistence timeout
default: 30
description: This controls how long the persistent connection will remain idle before it is destroyed.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_TIMEOUT}]
ini:
- {key: connect_timeout, section: persistent_connection}
type: integer
PERSISTENT_CONNECT_RETRY_TIMEOUT:
name: Persistence connection retry timeout
default: 15
description: This controls the retry timeout for the persistent connection to connect to the local domain socket.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_RETRY_TIMEOUT}]
ini:
- {key: connect_retry_timeout, section: persistent_connection}
type: integer
PERSISTENT_COMMAND_TIMEOUT:
name: Persistence command timeout
default: 30
description: This controls the amount of time to wait for a response from a remote device before timing out a persistent connection.
env: [{name: ANSIBLE_PERSISTENT_COMMAND_TIMEOUT}]
ini:
- {key: command_timeout, section: persistent_connection}
type: int
PLAYBOOK_DIR:
name: playbook dir override for non-playbook CLIs (ala --playbook-dir)
version_added: "2.9"
description:
- A number of non-playbook CLIs have a ``--playbook-dir`` argument; this sets the default value for it.
env: [{name: ANSIBLE_PLAYBOOK_DIR}]
ini: [{key: playbook_dir, section: defaults}]
type: path
PLAYBOOK_VARS_ROOT:
name: playbook vars files root
default: top
version_added: "2.4.1"
description:
- This sets which playbook dirs will be used as a root to process vars plugins, which includes finding host_vars/group_vars.
env: [{name: ANSIBLE_PLAYBOOK_VARS_ROOT}]
ini:
- {key: playbook_vars_root, section: defaults}
choices:
top: follows the traditional behavior of using the top playbook in the chain to find the root directory.
bottom: follows the 2.4.0 behavior of using the current playbook to find the root directory.
all: examines from the first parent to the current playbook.
PLUGIN_FILTERS_CFG:
name: Config file for limiting valid plugins
default: null
version_added: "2.5.0"
description:
- "A path to configuration for filtering which plugins installed on the system are allowed to be used."
- "See :ref:`plugin_filtering_config` for details of the filter file's format."
- "The default is /etc/ansible/plugin_filters.yml"
ini:
- key: plugin_filters_cfg
section: defaults
type: path
PYTHON_MODULE_RLIMIT_NOFILE:
name: Adjust maximum file descriptor soft limit during Python module execution
description:
- Attempts to set RLIMIT_NOFILE soft limit to the specified value when executing Python modules (can speed up subprocess usage on
Python 2.x. See https://bugs.python.org/issue11284). The value will be limited by the existing hard limit. Default
value of 0 does not attempt to adjust existing system-defined limits.
default: 0
env:
- {name: ANSIBLE_PYTHON_MODULE_RLIMIT_NOFILE}
ini:
- {key: python_module_rlimit_nofile, section: defaults}
vars:
- {name: ansible_python_module_rlimit_nofile}
version_added: '2.8'
RETRY_FILES_ENABLED:
name: Retry files
default: False
description: This controls whether a failed Ansible playbook should create a .retry file.
env: [{name: ANSIBLE_RETRY_FILES_ENABLED}]
ini:
- {key: retry_files_enabled, section: defaults}
type: bool
RETRY_FILES_SAVE_PATH:
name: Retry files path
default: ~
description:
- This sets the path in which Ansible will save .retry files when a playbook fails and retry files are enabled.
- This file will be overwritten after each run with the list of failed hosts from all plays.
env: [{name: ANSIBLE_RETRY_FILES_SAVE_PATH}]
ini:
- {key: retry_files_save_path, section: defaults}
type: path
RUN_VARS_PLUGINS:
name: When should vars plugins run relative to inventory
default: demand
description:
- This setting can be used to optimize vars_plugin usage depending on the user's inventory size and play selection.
env: [{name: ANSIBLE_RUN_VARS_PLUGINS}]
ini:
- {key: run_vars_plugins, section: defaults}
type: str
choices:
demand: will run vars_plugins relative to inventory sources anytime vars are 'demanded' by tasks.
start: will run vars_plugins relative to inventory sources after importing that inventory source.
version_added: "2.10"
SHOW_CUSTOM_STATS:
name: Display custom stats
default: False
description: 'This adds the custom stats set via the set_stats plugin to the default output.'
env: [{name: ANSIBLE_SHOW_CUSTOM_STATS}]
ini:
- {key: show_custom_stats, section: defaults}
type: bool
STRING_TYPE_FILTERS:
name: Filters to preserve strings
default: [string, to_json, to_nice_json, to_yaml, to_nice_yaml, ppretty, json]
description:
- "This list of filters avoids 'type conversion' when templating variables."
- Useful when you want to avoid conversion into lists or dictionaries for JSON strings, for example.
env: [{name: ANSIBLE_STRING_TYPE_FILTERS}]
ini:
- {key: dont_type_filters, section: jinja2}
type: list
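# A hedged example of why this matters: because 'to_json' is in the list above, a
# template such as
#   msg: "{{ {'a': 1} | to_json }}"
# keeps the literal string '{"a": 1}' instead of being converted back into a dict.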
SYSTEM_WARNINGS:
name: System warnings
default: True
description:
- Allows disabling of warnings related to potential issues on the system running Ansible itself (not on the managed hosts).
- These may include warnings about third-party packages or other conditions that should be resolved if possible.
env: [{name: ANSIBLE_SYSTEM_WARNINGS}]
ini:
- {key: system_warnings, section: defaults}
type: boolean
TAGS_RUN:
name: Run Tags
default: []
type: list
description: Default list of tags to run in your plays; Skip Tags has precedence.
env: [{name: ANSIBLE_RUN_TAGS}]
ini:
- {key: run, section: tags}
version_added: "2.5"
TAGS_SKIP:
name: Skip Tags
default: []
type: list
description: Default list of tags to skip in your plays; has precedence over Run Tags.
env: [{name: ANSIBLE_SKIP_TAGS}]
ini:
- {key: skip, section: tags}
version_added: "2.5"
TASK_TIMEOUT:
name: Task Timeout
default: 0
description:
- Set the maximum time (in seconds) that a task can run for.
- If set to 0 (the default) there is no timeout.
env: [{name: ANSIBLE_TASK_TIMEOUT}]
ini:
- {key: task_timeout, section: defaults}
type: integer
version_added: '2.10'
WORKER_SHUTDOWN_POLL_COUNT:
name: Worker Shutdown Poll Count
default: 0
description:
- The maximum number of times to check Task Queue Manager worker processes to verify they have exited cleanly.
- After this limit is reached any worker processes still running will be terminated.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_COUNT}]
type: integer
version_added: '2.10'
WORKER_SHUTDOWN_POLL_DELAY:
name: Worker Shutdown Poll Delay
default: 0.1
description:
- The number of seconds to sleep between polling loops when checking Task Queue Manager worker processes to verify they have exited cleanly.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_DELAY}]
type: float
version_added: '2.10'
USE_PERSISTENT_CONNECTIONS:
name: Persistence
default: False
description: Toggles the use of persistence for connections.
env: [{name: ANSIBLE_USE_PERSISTENT_CONNECTIONS}]
ini:
- {key: use_persistent_connections, section: defaults}
type: boolean
VARIABLE_PLUGINS_ENABLED:
name: Vars plugin enabled list
default: ['host_group_vars']
description: Accept list for variable plugins that require it.
env: [{name: ANSIBLE_VARS_ENABLED}]
ini:
- {key: vars_plugins_enabled, section: defaults}
type: list
version_added: "2.10"
VARIABLE_PRECEDENCE:
name: Group variable precedence
default: ['all_inventory', 'groups_inventory', 'all_plugins_inventory', 'all_plugins_play', 'groups_plugins_inventory', 'groups_plugins_play']
description: Allows changing the group variable precedence merge order.
env: [{name: ANSIBLE_PRECEDENCE}]
ini:
- {key: precedence, section: defaults}
type: list
version_added: "2.4"
WIN_ASYNC_STARTUP_TIMEOUT:
name: Windows Async Startup Timeout
default: 5
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how long, in seconds, to wait for the task spawned by Ansible to connect back to the named pipe used
on Windows systems. The default is 5 seconds. This can be too low on slower systems, or systems under heavy load.
- This is not the total time an async command can run for, but is a separate timeout to wait for an async command to
start. The task will only start to be timed against its async_timeout once it has connected to the pipe, so the
overall maximum duration the task can take will be extended by the amount specified here.
env: [{name: ANSIBLE_WIN_ASYNC_STARTUP_TIMEOUT}]
ini:
- {key: win_async_startup_timeout, section: defaults}
type: integer
vars:
- {name: ansible_win_async_startup_timeout}
version_added: '2.10'
YAML_FILENAME_EXTENSIONS:
name: Valid YAML extensions
default: [".yml", ".yaml", ".json"]
description:
- "Check all of these extensions when looking for 'variable' files which should be YAML or JSON or vaulted versions of these."
- 'This affects vars_files, include_vars, inventory and vars plugins among others.'
env:
- name: ANSIBLE_YAML_FILENAME_EXT
ini:
- section: defaults
key: yaml_valid_extensions
type: list
NETCONF_SSH_CONFIG:
description: This variable is used to enable a bastion/jump host with the netconf connection. If set to True, the bastion/jump
  host ssh settings should be present in the ~/.ssh/config file; alternatively, it can be set
  to a custom ssh configuration file path from which to read the bastion/jump host settings.
env: [{name: ANSIBLE_NETCONF_SSH_CONFIG}]
ini:
- {key: ssh_config, section: netconf_connection}
yaml: {key: netconf_connection.ssh_config}
default: null
STRING_CONVERSION_ACTION:
version_added: '2.8'
description:
- Action to take when a module parameter value is converted to a string (this does not affect variables).
For string parameters, values such as '1.00', "['a', 'b',]", and 'yes', 'y', etc.
will be converted by the YAML parser unless fully quoted.
- Valid options are 'error', 'warn', and 'ignore'.
- Since 2.8, this option defaults to 'warn' but will change to 'error' in 2.12.
default: 'warn'
env:
- name: ANSIBLE_STRING_CONVERSION_ACTION
ini:
- section: defaults
key: string_conversion_action
type: string
deprecated:
why: This option is no longer used in the Ansible Core code base.
version: "2.19"
alternatives: There is no alternative at the moment. A different mechanism would have to be implemented in the current code base.
VALIDATE_ACTION_GROUP_METADATA:
version_added: '2.12'
description:
- A toggle to disable validating a collection's 'metadata' entry for a module_defaults action group.
Metadata containing unexpected fields or value types will produce a warning when this is True.
default: True
env: [{name: ANSIBLE_VALIDATE_ACTION_GROUP_METADATA}]
ini:
- section: defaults
key: validate_action_group_metadata
type: bool
VERBOSE_TO_STDERR:
version_added: '2.8'
description:
- Force 'verbose' option to use stderr instead of stdout
default: False
env:
- name: ANSIBLE_VERBOSE_TO_STDERR
ini:
- section: defaults
key: verbose_to_stderr
type: bool
...
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 71,517 |
Reboot module doesn't work with async
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The `reboot` module does not work with `async`, `poll`, and `async_status`. Suppose I have 10 nodes to reboot, but I can only set `forks` to 2. The `reboot` module will reboot 2 nodes at a time. I tried using `async`, `poll`, and `async_status` to kick off the reboots on the 10 nodes, 2 at a time, and then poll for the results. `async` and `poll` seem to do nothing on the `reboot` module, as the behavior remains the same as without them.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`reboot` module
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.12
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib64/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, May 2 2019, 19:37:42) [GCC 4.4.7 20120313 (Red Hat 4.4.7-23)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
ANSIBLE_SSH_ARGS(/etc/ansible/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null
ANSIBLE_SSH_RETRIES(/etc/ansible/ansible.cfg) = 2
COMMAND_WARNINGS(/etc/ansible/ansible.cfg) = False
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 2
DEFAULT_GATHERING(/etc/ansible/ansible.cfg) = explicit
DEFAULT_TIMEOUT(/etc/ansible/ansible.cfg) = 40
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Described in the summary
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expect the `reboot` module to start the reboot on 2 nodes, then move on to something else (like start the reboot on another 2 nodes), then come back to check on the results of the reboots by using `async`, `poll`, and `async_status`.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The `reboot` module ignores `async` and `poll`.
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/71517
|
https://github.com/ansible/ansible/pull/80017
|
2908a2c32a81fca78277a22f15fa8e3abe75e092
|
f8de6caeec735fad53c2fa492c94608e92ebfb06
| 2020-08-28T22:19:52Z |
python
| 2023-10-28T04:14:15Z |
changelogs/fragments/fix-reboot-plugin.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 71,517 |
Reboot module doesn't work with async
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The `reboot` module does not work with `async`, `poll`, and `async_status`. Suppose I have 10 nodes to reboot, but I can only set `forks` to 2. The `reboot` module will reboot 2 nodes at a time. I tried using `async`, `poll`, and `async_status` to kick off the reboots on the 10 nodes, 2 at a time, and then poll for the results. `async` and `poll` seem to do nothing on the `reboot` module, as the behavior remains the same as without them.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`reboot` module
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.12
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib64/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, May 2 2019, 19:37:42) [GCC 4.4.7 20120313 (Red Hat 4.4.7-23)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
ANSIBLE_SSH_ARGS(/etc/ansible/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null
ANSIBLE_SSH_RETRIES(/etc/ansible/ansible.cfg) = 2
COMMAND_WARNINGS(/etc/ansible/ansible.cfg) = False
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 2
DEFAULT_GATHERING(/etc/ansible/ansible.cfg) = explicit
DEFAULT_TIMEOUT(/etc/ansible/ansible.cfg) = 40
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Described in the summary
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expect the `reboot` module to start the reboot on 2 nodes, then move on to something else (like start the reboot on another 2 nodes), then come back to check on the results of the reboots by using `async`, `poll`, and `async_status`.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The `reboot` module ignores `async` and `poll`.
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/71517
|
https://github.com/ansible/ansible/pull/80017
|
2908a2c32a81fca78277a22f15fa8e3abe75e092
|
f8de6caeec735fad53c2fa492c94608e92ebfb06
| 2020-08-28T22:19:52Z |
python
| 2023-10-28T04:14:15Z |
lib/ansible/plugins/action/reboot.py
|
# Copyright: (c) 2016-2018, Matt Davis <[email protected]>
# Copyright: (c) 2018, Sam Doran <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import annotations
import random
import time
from datetime import datetime, timedelta, timezone
from ansible.errors import AnsibleError, AnsibleConnectionFailure
from ansible.module_utils.common.text.converters import to_native, to_text
from ansible.module_utils.common.validation import check_type_list, check_type_str
from ansible.plugins.action import ActionBase
from ansible.utils.display import Display
display = Display()
class TimedOutException(Exception):
pass
class ActionModule(ActionBase):
TRANSFERS_FILES = False
_VALID_ARGS = frozenset((
'boot_time_command',
'connect_timeout',
'msg',
'post_reboot_delay',
'pre_reboot_delay',
'reboot_command',
'reboot_timeout',
'search_paths',
'test_command',
))
DEFAULT_REBOOT_TIMEOUT = 600
DEFAULT_CONNECT_TIMEOUT = None
DEFAULT_PRE_REBOOT_DELAY = 0
DEFAULT_POST_REBOOT_DELAY = 0
DEFAULT_TEST_COMMAND = 'whoami'
DEFAULT_BOOT_TIME_COMMAND = 'cat /proc/sys/kernel/random/boot_id'
DEFAULT_REBOOT_MESSAGE = 'Reboot initiated by Ansible'
DEFAULT_SHUTDOWN_COMMAND = 'shutdown'
DEFAULT_SHUTDOWN_COMMAND_ARGS = '-r {delay_min} "{message}"'
DEFAULT_SUDOABLE = True
DEPRECATED_ARGS = {} # type: dict[str, str]
BOOT_TIME_COMMANDS = {
'freebsd': '/sbin/sysctl kern.boottime',
'openbsd': '/sbin/sysctl kern.boottime',
'macosx': 'who -b',
'solaris': 'who -b',
'sunos': 'who -b',
'vmkernel': 'grep booted /var/log/vmksummary.log | tail -n 1',
'aix': 'who -b',
}
SHUTDOWN_COMMANDS = {
'alpine': 'reboot',
'vmkernel': 'reboot',
}
SHUTDOWN_COMMAND_ARGS = {
'alpine': '',
'void': '-r +{delay_min} "{message}"',
'freebsd': '-r +{delay_sec}s "{message}"',
'linux': DEFAULT_SHUTDOWN_COMMAND_ARGS,
'macosx': '-r +{delay_min} "{message}"',
'openbsd': '-r +{delay_min} "{message}"',
'solaris': '-y -g {delay_sec} -i 6 "{message}"',
'sunos': '-y -g {delay_sec} -i 6 "{message}"',
'vmkernel': '-d {delay_sec}',
'aix': '-Fr',
}
TEST_COMMANDS = {
'solaris': 'who',
'vmkernel': 'who',
}
def __init__(self, *args, **kwargs):
super(ActionModule, self).__init__(*args, **kwargs)
@property
def pre_reboot_delay(self):
return self._check_delay('pre_reboot_delay', self.DEFAULT_PRE_REBOOT_DELAY)
@property
def post_reboot_delay(self):
return self._check_delay('post_reboot_delay', self.DEFAULT_POST_REBOOT_DELAY)
def _check_delay(self, key, default):
"""Ensure that the value is positive or zero"""
value = int(self._task.args.get(key, self._task.args.get(key + '_sec', default)))
if value < 0:
value = 0
return value
def _get_value_from_facts(self, variable_name, distribution, default_value):
"""Get dist+version specific args first, then distribution, then family, lastly use default"""
attr = getattr(self, variable_name)
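# Illustrative lookup order (the distribution values below are examples): for
# {'name': 'centos', 'version': '8', 'family': 'redhat'} the keys tried are
# 'centos8', then 'centos', then 'redhat', before falling back to the class default.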
value = attr.get(
distribution['name'] + distribution['version'],
attr.get(
distribution['name'],
attr.get(
distribution['family'],
getattr(self, default_value))))
return value
def get_shutdown_command_args(self, distribution):
reboot_command = self._task.args.get('reboot_command')
if reboot_command is not None:
try:
reboot_command = check_type_str(reboot_command, allow_conversion=False)
except TypeError as e:
raise AnsibleError("Invalid value given for 'reboot_command': %s." % to_native(e))
# No args were provided
try:
return reboot_command.split(' ', 1)[1]
except IndexError:
return ''
else:
args = self._get_value_from_facts('SHUTDOWN_COMMAND_ARGS', distribution, 'DEFAULT_SHUTDOWN_COMMAND_ARGS')
# Convert seconds to minutes. If less than 60, set it to 0.
delay_min = self.pre_reboot_delay // 60
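# e.g. a pre_reboot_delay of 90 seconds yields delay_min=1; anything under 60 becomes 0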
reboot_message = self._task.args.get('msg', self.DEFAULT_REBOOT_MESSAGE)
return args.format(delay_sec=self.pre_reboot_delay, delay_min=delay_min, message=reboot_message)
def get_distribution(self, task_vars):
# FIXME: only execute the module if we don't already have the facts we need
distribution = {}
display.debug('{action}: running setup module to get distribution'.format(action=self._task.action))
module_output = self._execute_module(
task_vars=task_vars,
module_name='ansible.legacy.setup',
module_args={'gather_subset': 'min'})
try:
if module_output.get('failed', False):
raise AnsibleError('Failed to determine system distribution. {0}, {1}'.format(
to_native(module_output['module_stdout']).strip(),
to_native(module_output['module_stderr']).strip()))
distribution['name'] = module_output['ansible_facts']['ansible_distribution'].lower()
distribution['version'] = to_text(module_output['ansible_facts']['ansible_distribution_version'].split('.')[0])
distribution['family'] = to_text(module_output['ansible_facts']['ansible_os_family'].lower())
display.debug("{action}: distribution: {dist}".format(action=self._task.action, dist=distribution))
return distribution
except KeyError as ke:
raise AnsibleError('Failed to get distribution information. Missing "{0}" in output.'.format(ke.args[0]))
def get_shutdown_command(self, task_vars, distribution):
reboot_command = self._task.args.get('reboot_command')
if reboot_command is not None:
try:
reboot_command = check_type_str(reboot_command, allow_conversion=False)
except TypeError as e:
raise AnsibleError("Invalid value given for 'reboot_command': %s." % to_native(e))
shutdown_bin = reboot_command.split(' ', 1)[0]
else:
shutdown_bin = self._get_value_from_facts('SHUTDOWN_COMMANDS', distribution, 'DEFAULT_SHUTDOWN_COMMAND')
if shutdown_bin[0] == '/':
return shutdown_bin
else:
default_search_paths = ['/sbin', '/bin', '/usr/sbin', '/usr/bin', '/usr/local/sbin']
search_paths = self._task.args.get('search_paths', default_search_paths)
try:
# Convert bare strings to a list
search_paths = check_type_list(search_paths)
except TypeError:
err_msg = "'search_paths' must be a string or flat list of strings, got {0}"
raise AnsibleError(err_msg.format(search_paths))
display.debug('{action}: running find module looking in {paths} to get path for "{command}"'.format(
action=self._task.action,
command=shutdown_bin,
paths=search_paths))
find_result = self._execute_module(
task_vars=task_vars,
# prevent collection search by calling with ansible.legacy (still allows library/ override of find)
module_name='ansible.legacy.find',
module_args={
'paths': search_paths,
'patterns': [shutdown_bin],
'file_type': 'any'
}
)
full_path = [x['path'] for x in find_result['files']]
if not full_path:
raise AnsibleError('Unable to find command "{0}" in search paths: {1}'.format(shutdown_bin, search_paths))
return full_path[0]
def deprecated_args(self):
for arg, version in self.DEPRECATED_ARGS.items():
if self._task.args.get(arg) is not None:
display.warning("Since Ansible {version}, {arg} is no longer a valid option for {action}".format(
version=version,
arg=arg,
action=self._task.action))
def get_system_boot_time(self, distribution):
boot_time_command = self._get_value_from_facts('BOOT_TIME_COMMANDS', distribution, 'DEFAULT_BOOT_TIME_COMMAND')
if self._task.args.get('boot_time_command'):
boot_time_command = self._task.args.get('boot_time_command')
try:
check_type_str(boot_time_command, allow_conversion=False)
except TypeError as e:
raise AnsibleError("Invalid value given for 'boot_time_command': %s." % to_native(e))
display.debug("{action}: getting boot time with command: '{command}'".format(action=self._task.action, command=boot_time_command))
command_result = self._low_level_execute_command(boot_time_command, sudoable=self.DEFAULT_SUDOABLE)
if command_result['rc'] != 0:
stdout = command_result['stdout']
stderr = command_result['stderr']
raise AnsibleError("{action}: failed to get host boot time info, rc: {rc}, stdout: {out}, stderr: {err}".format(
action=self._task.action,
rc=command_result['rc'],
out=to_native(stdout),
err=to_native(stderr)))
display.debug("{action}: last boot time: {boot}".format(action=self._task.action, boot=command_result['stdout'].strip()))
return command_result['stdout'].strip()
def check_boot_time(self, distribution, previous_boot_time):
display.vvv("{action}: attempting to get system boot time".format(action=self._task.action))
connect_timeout = self._task.args.get('connect_timeout', self._task.args.get('connect_timeout_sec', self.DEFAULT_CONNECT_TIMEOUT))
# override connection timeout from defaults to the custom value
if connect_timeout:
try:
display.debug("{action}: setting connect_timeout to {value}".format(action=self._task.action, value=connect_timeout))
self._connection.set_option("connection_timeout", connect_timeout)
self._connection.reset()
except AttributeError:
display.warning("Connection plugin does not allow the connection timeout to be overridden")
        # try and get boot time
        current_boot_time = self.get_system_boot_time(distribution)
# FreeBSD returns an empty string immediately before reboot so adding a length
# check to prevent prematurely assuming system has rebooted
if len(current_boot_time) == 0 or current_boot_time == previous_boot_time:
raise ValueError("boot time has not changed")
def run_test_command(self, distribution, **kwargs):
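        # Once a new boot time is observed, run the distribution default (or
        # user-supplied) test command to confirm the host is actually usable.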
test_command = self._task.args.get('test_command', self._get_value_from_facts('TEST_COMMANDS', distribution, 'DEFAULT_TEST_COMMAND'))
display.vvv("{action}: attempting post-reboot test command".format(action=self._task.action))
display.debug("{action}: attempting post-reboot test command '{command}'".format(action=self._task.action, command=test_command))
try:
command_result = self._low_level_execute_command(test_command, sudoable=self.DEFAULT_SUDOABLE)
except Exception:
# may need to reset the connection in case another reboot occurred
# which has invalidated our connection
try:
self._connection.reset()
except AttributeError:
pass
raise
if command_result['rc'] != 0:
msg = 'Test command failed: {err} {out}'.format(
err=to_native(command_result['stderr']),
out=to_native(command_result['stdout']))
raise RuntimeError(msg)
display.vvv("{action}: system successfully rebooted".format(action=self._task.action))
def do_until_success_or_timeout(self, action, reboot_timeout, action_desc, distribution, action_kwargs=None):
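        # Generic retry loop: keep invoking `action` until it succeeds or
        # `reboot_timeout` elapses. Each failure sleeps 2**fail_count seconds
        # plus up to one second of random jitter, capped at max_fail_sleep
        # (capped exponential backoff): roughly 1s, 2s, 4s, 8s, then ~12s.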
max_end_time = datetime.now(timezone.utc) + timedelta(seconds=reboot_timeout)
if action_kwargs is None:
action_kwargs = {}
fail_count = 0
max_fail_sleep = 12
last_error_msg = ''
while datetime.now(timezone.utc) < max_end_time:
try:
action(distribution=distribution, **action_kwargs)
if action_desc:
display.debug('{action}: {desc} success'.format(action=self._task.action, desc=action_desc))
return
except Exception as e:
if isinstance(e, AnsibleConnectionFailure):
try:
self._connection.reset()
except AnsibleConnectionFailure:
pass
# Use exponential backoff with a max timeout, plus a little bit of randomness
random_int = random.randint(0, 1000) / 1000
fail_sleep = 2 ** fail_count + random_int
if fail_sleep > max_fail_sleep:
fail_sleep = max_fail_sleep + random_int
if action_desc:
try:
error = to_text(e).splitlines()[-1]
except IndexError as e:
error = to_text(e)
last_error_msg = f"{self._task.action}: {action_desc} fail '{error}'"
msg = f"{last_error_msg}, retrying in {fail_sleep:.4f} seconds..."
display.debug(msg)
display.vvv(msg)
fail_count += 1
time.sleep(fail_sleep)
if last_error_msg:
msg = f"Last error message before the timeout exception - {last_error_msg}"
display.debug(msg)
display.vvv(msg)
raise TimedOutException('Timed out waiting for {desc} (timeout={timeout})'.format(desc=action_desc, timeout=reboot_timeout))
def perform_reboot(self, task_vars, distribution):
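        # Issue the reboot command itself. A dropped connection here is
        # expected (the host may go down before the command returns), so an
        # AnsibleConnectionFailure is treated as a successful reboot (rc=0).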
result = {}
reboot_result = {}
shutdown_command = self.get_shutdown_command(task_vars, distribution)
shutdown_command_args = self.get_shutdown_command_args(distribution)
reboot_command = '{0} {1}'.format(shutdown_command, shutdown_command_args)
try:
display.vvv("{action}: rebooting server...".format(action=self._task.action))
display.debug("{action}: rebooting server with command '{command}'".format(action=self._task.action, command=reboot_command))
reboot_result = self._low_level_execute_command(reboot_command, sudoable=self.DEFAULT_SUDOABLE)
except AnsibleConnectionFailure as e:
# If the connection is closed too quickly due to the system being shutdown, carry on
display.debug('{action}: AnsibleConnectionFailure caught and handled: {error}'.format(action=self._task.action, error=to_text(e)))
reboot_result['rc'] = 0
result['start'] = datetime.now(timezone.utc)
if reboot_result['rc'] != 0:
result['failed'] = True
result['rebooted'] = False
result['msg'] = "Reboot command failed. Error was: '{stdout}, {stderr}'".format(
stdout=to_native(reboot_result['stdout'].strip()),
stderr=to_native(reboot_result['stderr'].strip()))
return result
result['failed'] = False
return result
def validate_reboot(self, distribution, original_connection_timeout=None, action_kwargs=None):
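        # Post-reboot validation: wait for the boot time to change, restore the
        # original connection timeout if it was overridden, then retry the test
        # command until it succeeds or the reboot timeout is exhausted.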
display.vvv('{action}: validating reboot'.format(action=self._task.action))
result = {}
try:
# keep on checking system boot_time with short connection responses
reboot_timeout = int(self._task.args.get('reboot_timeout', self._task.args.get('reboot_timeout_sec', self.DEFAULT_REBOOT_TIMEOUT)))
self.do_until_success_or_timeout(
action=self.check_boot_time,
action_desc="last boot time check",
reboot_timeout=reboot_timeout,
distribution=distribution,
action_kwargs=action_kwargs)
# Get the connect_timeout set on the connection to compare to the original
try:
connect_timeout = self._connection.get_option('connection_timeout')
except KeyError:
pass
else:
if original_connection_timeout != connect_timeout:
try:
display.debug("{action}: setting connect_timeout back to original value of {value}".format(
action=self._task.action,
value=original_connection_timeout))
self._connection.set_option("connection_timeout", original_connection_timeout)
self._connection.reset()
except (AnsibleError, AttributeError) as e:
# reset the connection to clear the custom connection timeout
display.debug("{action}: failed to reset connection_timeout back to default: {error}".format(action=self._task.action,
error=to_text(e)))
# finally run test command to ensure everything is working
# FUTURE: add a stability check (system must remain up for N seconds) to deal with self-multi-reboot updates
self.do_until_success_or_timeout(
action=self.run_test_command,
action_desc="post-reboot test command",
reboot_timeout=reboot_timeout,
distribution=distribution,
action_kwargs=action_kwargs)
result['rebooted'] = True
result['changed'] = True
except TimedOutException as toex:
result['failed'] = True
result['rebooted'] = True
result['msg'] = to_text(toex)
return result
return result
def run(self, tmp=None, task_vars=None):
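        # Overall flow: refuse to run over a local connection, short-circuit in
        # check mode, record the current boot time, trigger the reboot, then
        # validate it and report the elapsed time.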
self._supports_check_mode = True
self._supports_async = True
# If running with local connection, fail so we don't reboot ourselves
if self._connection.transport == 'local':
msg = 'Running {0} with local connection would reboot the control node.'.format(self._task.action)
return {'changed': False, 'elapsed': 0, 'rebooted': False, 'failed': True, 'msg': msg}
if self._play_context.check_mode:
return {'changed': True, 'elapsed': 0, 'rebooted': True}
if task_vars is None:
task_vars = {}
self.deprecated_args()
result = super(ActionModule, self).run(tmp, task_vars)
if result.get('skipped', False) or result.get('failed', False):
return result
distribution = self.get_distribution(task_vars)
# Get current boot time
try:
previous_boot_time = self.get_system_boot_time(distribution)
except Exception as e:
result['failed'] = True
result['reboot'] = False
result['msg'] = to_text(e)
return result
# Get the original connection_timeout option var so it can be reset after
original_connection_timeout = None
try:
original_connection_timeout = self._connection.get_option('connection_timeout')
display.debug("{action}: saving original connect_timeout of {timeout}".format(action=self._task.action, timeout=original_connection_timeout))
except KeyError:
display.debug("{action}: connect_timeout connection option has not been set".format(action=self._task.action))
# Initiate reboot
reboot_result = self.perform_reboot(task_vars, distribution)
if reboot_result['failed']:
result = reboot_result
elapsed = datetime.now(timezone.utc) - reboot_result['start']
result['elapsed'] = elapsed.seconds
return result
if self.post_reboot_delay != 0:
display.debug("{action}: waiting an additional {delay} seconds".format(action=self._task.action, delay=self.post_reboot_delay))
display.vvv("{action}: waiting an additional {delay} seconds".format(action=self._task.action, delay=self.post_reboot_delay))
time.sleep(self.post_reboot_delay)
# Make sure reboot was successful
result = self.validate_reboot(distribution, original_connection_timeout, action_kwargs={'previous_boot_time': previous_boot_time})
elapsed = datetime.now(timezone.utc) - reboot_result['start']
result['elapsed'] = elapsed.seconds
return result
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,110 |
Ansible doesn't respect disable_gpg_check: true
|
### Summary
Ansible doesn't respect disable_gpg_check: true, even when gpgcheck=0 is set in yum.conf.
```
- ansible.builtin.dnf:
name:
- dnf-plugins-core
update_cache: true
disable_gpg_check: true
state: 'present'
```
```
[main]
gpgcheck=0
installonly_limit=3
clean_requirements_on_remove=True
best=True
skip_if_unavailable=False
```
```
TASK [my-docker-role : ansible.builtin.dnf] ********************************************************************************************************************************************************************
fatal: [host1]: FAILED! => {"changed": false, "msg": "Failed to download metadata for repo 'pgdg14': repomd.xml GPG signature verification error: gpgme_op_verify() error: No data", "rc": 1, "results": []}
fatal: [host2]: FAILED! => {"changed": false, "msg": "Failed to download metadata for repo 'pgdg14': repomd.xml GPG signature verification error: gpgme_op_verify() error: No data", "rc": 1, "results": []}
```
### Issue Type
Bug Report
### Component Name
ansible.builtin.dnf
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.2]
config file = /homolog/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.9/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.9.16 (main, Jan 17 2023, 18:53:15) [GCC 8.5.0 20210514 (Red Hat 8.5.0-18)] (/usr/bin/python3.9)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /homolog/ansible/ansible.cfg
DEFAULT_HOST_LIST(/homolog/ansible/ansible.cfg) = ['/homolog/ansible/inventory']
DEFAULT_REMOTE_USER(/homolog/ansible/ansible.cfg) = centos
DEPRECATION_WARNINGS(/homolog/ansible/ansible.cfg) = False
HOST_KEY_CHECKING(/homolog/ansible/ansible.cfg) = False
INTERPRETER_PYTHON(/homolog/ansible/ansible.cfg) = /usr/bin/python3.9
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/homolog/ansible/ansible.cfg) = False
remote_user(/homolog/ansible/ansible.cfg) = centos
ssh:
___
host_key_checking(/homolog/ansible/ansible.cfg) = False
remote_user(/homolog/ansible/ansible.cfg) = centos
```
### OS / Environment
```
CentOS Linux release 8.2.2004 (Core)
AWS lightsail
```
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```
# Install the repository RPM:
sudo dnf install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-8-aarch64/pgdg-redhat-repo-latest.noarch.rpm
```
```
# Disable the built-in PostgreSQL module:
sudo dnf -qy module disable postgresql
```
```
- ansible.builtin.get_url:
url: https://download.docker.com/linux/centos/docker-ce.repo
    dest: /etc/yum.repos.d/docker-ce.repo
- ansible.builtin.dnf:
name:
- dnf-plugins-core
update_cache: true
disable_gpg_check: true
state: 'present'
```
### Expected Results
```console
$ dnf install docker-ce --nogpgcheck
PostgreSQL 14 for RHEL / Rocky 8 - aarch64
PostgreSQL 13 for RHEL / Rocky 8 - aarch64
PostgreSQL 12 for RHEL / Rocky 8 - aarch64
PostgreSQL 11 for RHEL / Rocky 8 - aarch64
Dependencies resolved.
```
or
```console
$ dnf install dnf-plugins-core --nogpgcheck
Last metadata expiration check: 0:01:05 ago on Tue 28 Feb 2023 11:44:44 AM -03.
Package dnf-plugins-core-4.0.21-14.el8.noarch is already installed.
Dependencies resolved.
```
### Actual Results
```console
TASK [my-docker-role : ansible.builtin.dnf] ********************************************************************************************************************************************************************
fatal: [host1]: FAILED! => {"changed": false, "msg": "Failed to download metadata for repo 'pgdg14': repomd.xml GPG signature verification error: gpgme_op_verify() error: No data", "rc": 1, "results": []}
fatal: [host2]: FAILED! => {"changed": false, "msg": "Failed to download metadata for repo 'pgdg14': repomd.xml GPG signature verification error: gpgme_op_verify() error: No data", "rc": 1, "results": []}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80110
|
https://github.com/ansible/ansible/pull/80777
|
51d691bfa9e7f9e213947e0ef762dafd2f1e4735
|
5ac62473b09405786ca08e00af4da6d5b3a8103d
| 2023-02-28T14:47:56Z |
python
| 2023-11-06T09:22:35Z |
changelogs/fragments/80110-repos-gpgcheck.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,110 |
Ansible doesn't respect disable_gpg_check: true
|
### Summary
Ansible doesn't respect disable_gpg_check: true, even when gpgcheck=0 is set in yum.conf.
```
- ansible.builtin.dnf:
name:
- dnf-plugins-core
update_cache: true
disable_gpg_check: true
state: 'present'
```
```
[main]
gpgcheck=0
installonly_limit=3
clean_requirements_on_remove=True
best=True
skip_if_unavailable=False
```
```
TASK [my-docker-role : ansible.builtin.dnf] ********************************************************************************************************************************************************************
fatal: [host1]: FAILED! => {"changed": false, "msg": "Failed to download metadata for repo 'pgdg14': repomd.xml GPG signature verification error: gpgme_op_verify() error: No data", "rc": 1, "results": []}
fatal: [host2]: FAILED! => {"changed": false, "msg": "Failed to download metadata for repo 'pgdg14': repomd.xml GPG signature verification error: gpgme_op_verify() error: No data", "rc": 1, "results": []}
```
### Issue Type
Bug Report
### Component Name
ansible.builtin.dnf
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.2]
config file = /homolog/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.9/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.9.16 (main, Jan 17 2023, 18:53:15) [GCC 8.5.0 20210514 (Red Hat 8.5.0-18)] (/usr/bin/python3.9)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /homolog/ansible/ansible.cfg
DEFAULT_HOST_LIST(/homolog/ansible/ansible.cfg) = ['/homolog/ansible/inventory']
DEFAULT_REMOTE_USER(/homolog/ansible/ansible.cfg) = centos
DEPRECATION_WARNINGS(/homolog/ansible/ansible.cfg) = False
HOST_KEY_CHECKING(/homolog/ansible/ansible.cfg) = False
INTERPRETER_PYTHON(/homolog/ansible/ansible.cfg) = /usr/bin/python3.9
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/homolog/ansible/ansible.cfg) = False
remote_user(/homolog/ansible/ansible.cfg) = centos
ssh:
___
host_key_checking(/homolog/ansible/ansible.cfg) = False
remote_user(/homolog/ansible/ansible.cfg) = centos
```
### OS / Environment
```
CentOS Linux release 8.2.2004 (Core)
AWS lightsail
```
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```
# Install the repository RPM:
sudo dnf install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-8-aarch64/pgdg-redhat-repo-latest.noarch.rpm
```
```
# Disable the built-in PostgreSQL module:
sudo dnf -qy module disable postgresql
```
```
- ansible.builtin.get_url:
url: https://download.docker.com/linux/centos/docker-ce.repo
    dest: /etc/yum.repos.d/docker-ce.repo
- ansible.builtin.dnf:
name:
- dnf-plugins-core
update_cache: true
disable_gpg_check: true
state: 'present'
```
### Expected Results
```console
$ dnf install docker-ce --nogpgcheck
PostgreSQL 14 for RHEL / Rocky 8 - aarch64
PostgreSQL 13 for RHEL / Rocky 8 - aarch64
PostgreSQL 12 for RHEL / Rocky 8 - aarch64
PostgreSQL 11 for RHEL / Rocky 8 - aarch64
Dependencies resolved.
```
or
```console
$ dnf install dnf-plugins-core --nogpgcheck
Last metadata expiration check: 0:01:05 ago on Tue 28 Feb 2023 11:44:44 AM -03.
Package dnf-plugins-core-4.0.21-14.el8.noarch is already installed.
Dependencies resolved.
```
### Actual Results
```console
TASK [my-docker-role : ansible.builtin.dnf] ********************************************************************************************************************************************************************
fatal: [host1]: FAILED! => {"changed": false, "msg": "Failed to download metadata for repo 'pgdg14': repomd.xml GPG signature verification error: gpgme_op_verify() error: No data", "rc": 1, "results": []}
fatal: [host2]: FAILED! => {"changed": false, "msg": "Failed to download metadata for repo 'pgdg14': repomd.xml GPG signature verification error: gpgme_op_verify() error: No data", "rc": 1, "results": []}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80110
|
https://github.com/ansible/ansible/pull/80777
|
51d691bfa9e7f9e213947e0ef762dafd2f1e4735
|
5ac62473b09405786ca08e00af4da6d5b3a8103d
| 2023-02-28T14:47:56Z |
python
| 2023-11-06T09:22:35Z |
lib/ansible/modules/dnf.py
|
# -*- coding: utf-8 -*-
# Copyright 2015 Cristian van Ee <cristian at cvee.org>
# Copyright 2015 Igor Gnatenko <[email protected]>
# Copyright 2018 Adam Miller <[email protected]>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import annotations
DOCUMENTATION = '''
---
module: dnf
version_added: 1.9
short_description: Manages packages with the I(dnf) package manager
description:
    - Installs, upgrades, removes, and lists packages and groups with the I(dnf) package manager.
options:
use_backend:
description:
- By default, this module will select the backend based on the C(ansible_pkg_mgr) fact.
default: "auto"
choices: [ auto, dnf4, dnf5 ]
type: str
version_added: 2.15
name:
description:
- "A package name or package specifier with version, like C(name-1.0).
When using state=latest, this can be '*' which means run: dnf -y update.
You can also pass a url or a local path to an rpm file.
To operate on several packages this can accept a comma separated string of packages or a list of packages."
- Comparison operators for package version are valid here C(>), C(<), C(>=), C(<=). Example - C(name >= 1.0).
Spaces around the operator are required.
- You can also pass an absolute path for a binary which is provided by the package to install.
See examples for more information.
aliases:
- pkg
type: list
elements: str
default: []
list:
description:
- Various (non-idempotent) commands for usage with C(/usr/bin/ansible) and I(not) playbooks.
Use M(ansible.builtin.package_facts) instead of the O(list) argument as a best practice.
type: str
state:
description:
- Whether to install (V(present), V(latest)), or remove (V(absent)) a package.
      - Default is V(None); in effect the default action is V(present), unless the O(autoremove) option is
        enabled for this module, in which case V(absent) is inferred.
choices: ['absent', 'present', 'installed', 'removed', 'latest']
type: str
enablerepo:
description:
- I(Repoid) of repositories to enable for the install/update operation.
These repos will not persist beyond the transaction.
When specifying multiple repos, separate them with a ",".
type: list
elements: str
default: []
disablerepo:
description:
- I(Repoid) of repositories to disable for the install/update operation.
These repos will not persist beyond the transaction.
When specifying multiple repos, separate them with a ",".
type: list
elements: str
default: []
conf_file:
description:
- The remote dnf configuration file to use for the transaction.
type: str
disable_gpg_check:
description:
- Whether to disable the GPG checking of signatures of packages being
installed. Has an effect only if O(state) is V(present) or V(latest).
- This setting affects packages installed from a repository as well as
"local" packages installed from the filesystem or a URL.
type: bool
default: 'no'
installroot:
description:
- Specifies an alternative installroot, relative to which all packages
will be installed.
version_added: "2.3"
default: "/"
type: str
releasever:
description:
- Specifies an alternative release from which all packages will be
installed.
version_added: "2.6"
type: str
autoremove:
description:
- If V(true), removes all "leaf" packages from the system that were originally
installed as dependencies of user-installed packages but which are no longer
        required by any such package. Should be used alone or when O(state) is V(absent).
type: bool
default: "no"
version_added: "2.4"
exclude:
description:
- Package name(s) to exclude when state=present, or latest. This can be a
list or a comma separated string.
version_added: "2.7"
type: list
elements: str
default: []
skip_broken:
description:
- Skip all unavailable packages or packages with broken dependencies
without raising an error. Equivalent to passing the --skip-broken option.
type: bool
default: "no"
version_added: "2.7"
update_cache:
description:
- Force dnf to check if cache is out of date and redownload if needed.
Has an effect only if O(state) is V(present) or V(latest).
type: bool
default: "no"
aliases: [ expire-cache ]
version_added: "2.7"
update_only:
description:
- When using latest, only update installed packages. Do not install packages.
- Has an effect only if O(state) is V(latest)
default: "no"
type: bool
version_added: "2.7"
security:
description:
- If set to V(true), and O(state=latest) then only installs updates that have been marked security related.
- Note that, similar to C(dnf upgrade-minimal), this filter applies to dependencies as well.
type: bool
default: "no"
version_added: "2.7"
bugfix:
description:
- If set to V(true), and O(state=latest) then only installs updates that have been marked bugfix related.
- Note that, similar to C(dnf upgrade-minimal), this filter applies to dependencies as well.
default: "no"
type: bool
version_added: "2.7"
enable_plugin:
description:
- I(Plugin) name to enable for the install/update operation.
The enabled plugin will not persist beyond the transaction.
version_added: "2.7"
type: list
elements: str
default: []
disable_plugin:
description:
- I(Plugin) name to disable for the install/update operation.
The disabled plugins will not persist beyond the transaction.
version_added: "2.7"
type: list
default: []
elements: str
disable_excludes:
description:
- Disable the excludes defined in DNF config files.
- If set to V(all), disables all excludes.
- If set to V(main), disable excludes defined in [main] in dnf.conf.
- If set to V(repoid), disable excludes defined for given repo id.
version_added: "2.7"
type: str
validate_certs:
description:
      - This only applies if using an https url as the source of the rpm, e.g. for localinstall. If set to V(false), the SSL certificates will not be validated.
      - This should only be set to V(false) on personally controlled sites using self-signed certificates, as it avoids verifying the source site.
type: bool
default: "yes"
version_added: "2.7"
sslverify:
description:
- Disables SSL validation of the repository server for this transaction.
- This should be set to V(false) if one of the configured repositories is using an untrusted or self-signed certificate.
type: bool
default: "yes"
version_added: "2.13"
allow_downgrade:
description:
      - Specify if the named package and version is allowed to downgrade
        an already installed higher version of that package.
Note that setting allow_downgrade=True can make this module
behave in a non-idempotent way. The task could end up with a set
of packages that does not match the complete list of specified
packages to install (because dependencies between the downgraded
package and others can cause changes to the packages which were
in the earlier transaction).
type: bool
default: "no"
version_added: "2.7"
install_repoquery:
description:
- This is effectively a no-op in DNF as it is not needed with DNF, but is an accepted parameter for feature
parity/compatibility with the M(ansible.builtin.yum) module.
type: bool
default: "yes"
version_added: "2.7"
download_only:
description:
- Only download the packages, do not install them.
default: "no"
type: bool
version_added: "2.7"
lock_timeout:
description:
- Amount of time to wait for the dnf lockfile to be freed.
required: false
default: 30
type: int
version_added: "2.8"
install_weak_deps:
description:
- Will also install all packages linked by a weak dependency relation.
type: bool
default: "yes"
version_added: "2.8"
download_dir:
description:
- Specifies an alternate directory to store packages.
- Has an effect only if O(download_only) is specified.
type: str
version_added: "2.8"
allowerasing:
description:
- If V(true) it allows erasing of installed packages to resolve dependencies.
required: false
type: bool
default: "no"
version_added: "2.10"
nobest:
description:
- Set best option to False, so that transactions are not limited to best candidates only.
required: false
type: bool
default: "no"
version_added: "2.11"
cacheonly:
description:
- Tells dnf to run entirely from system cache; does not download or update metadata.
type: bool
default: "no"
version_added: "2.12"
extends_documentation_fragment:
- action_common_attributes
- action_common_attributes.flow
attributes:
action:
details: In the case of dnf, it has 2 action plugins that use it under the hood, M(ansible.builtin.yum) and M(ansible.builtin.package).
support: partial
async:
support: none
bypass_host_loop:
support: none
check_mode:
support: full
diff_mode:
support: full
platform:
platforms: rhel
notes:
- When used with a C(loop:) each package will be processed individually, it is much more efficient to pass the list directly to the I(name) option.
- Group removal doesn't work if the group was installed with Ansible because
upstream dnf's API doesn't properly mark groups as installed, therefore upon
removal the module is unable to detect that the group is installed
(https://bugzilla.redhat.com/show_bug.cgi?id=1620324)
requirements:
- "python >= 2.6"
- python-dnf
  - "for the autoremove option you need dnf >= 2.0.1"
author:
- Igor Gnatenko (@ignatenkobrain) <[email protected]>
- Cristian van Ee (@DJMuggs) <cristian at cvee.org>
- Berend De Schouwer (@berenddeschouwer)
- Adam Miller (@maxamillion) <[email protected]>
'''
EXAMPLES = '''
- name: Install the latest version of Apache
ansible.builtin.dnf:
name: httpd
state: latest
- name: Install Apache >= 2.4
ansible.builtin.dnf:
name: httpd >= 2.4
state: present
- name: Install the latest version of Apache and MariaDB
ansible.builtin.dnf:
name:
- httpd
- mariadb-server
state: latest
- name: Remove the Apache package
ansible.builtin.dnf:
name: httpd
state: absent
- name: Install the latest version of Apache from the testing repo
ansible.builtin.dnf:
name: httpd
enablerepo: testing
state: present
- name: Upgrade all packages
ansible.builtin.dnf:
name: "*"
state: latest
- name: Update the webserver, depending on which is installed on the system. Do not install the other one
ansible.builtin.dnf:
name:
- httpd
- nginx
state: latest
update_only: yes
- name: Install the nginx rpm from a remote repo
ansible.builtin.dnf:
name: 'http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm'
state: present
- name: Install nginx rpm from a local file
ansible.builtin.dnf:
name: /usr/local/src/nginx-release-centos-6-0.el6.ngx.noarch.rpm
state: present
- name: Install Package based upon the file it provides
ansible.builtin.dnf:
name: /usr/bin/cowsay
state: present
- name: Install the 'Development tools' package group
ansible.builtin.dnf:
name: '@Development tools'
state: present
- name: Autoremove unneeded packages installed as dependencies
ansible.builtin.dnf:
autoremove: yes
- name: Uninstall httpd but keep its dependencies
ansible.builtin.dnf:
name: httpd
state: absent
autoremove: no
- name: Install a modularity appstream with defined stream and profile
ansible.builtin.dnf:
name: '@postgresql:9.6/client'
state: present
- name: Install a modularity appstream with defined stream
ansible.builtin.dnf:
name: '@postgresql:9.6'
state: present
- name: Install a modularity appstream with defined profile
ansible.builtin.dnf:
name: '@postgresql/client'
state: present
'''
import os
import re
import sys
from ansible.module_utils.common.text.converters import to_native, to_text
from ansible.module_utils.urls import fetch_file
from ansible.module_utils.compat.version import LooseVersion
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.common.respawn import has_respawned, probe_interpreters_for_module, respawn_module
from ansible.module_utils.yumdnf import YumDnf, yumdnf_argument_spec
# NOTE dnf Python bindings import is postponed, see DnfModule._ensure_dnf(),
# because we need AnsibleModule object to use get_best_parsable_locale()
# to set proper locale before importing dnf to be able to scrape
# the output in some cases (FIXME?).
dnf = None
class DnfModule(YumDnf):
"""
DNF Ansible module back-end implementation
"""
def __init__(self, module):
# This populates instance vars for all argument spec params
super(DnfModule, self).__init__(module)
self._ensure_dnf()
self.lockfile = "/var/cache/dnf/*_lock.pid"
self.pkg_mgr_name = "dnf"
try:
self.with_modules = dnf.base.WITH_MODULES
except AttributeError:
self.with_modules = False
# DNF specific args that are not part of YumDnf
self.allowerasing = self.module.params['allowerasing']
self.nobest = self.module.params['nobest']
def is_lockfile_pid_valid(self):
# FIXME? it looks like DNF takes care of invalid lock files itself?
# https://github.com/ansible/ansible/issues/57189
return True
def _sanitize_dnf_error_msg_install(self, spec, error):
"""
For unhandled dnf.exceptions.Error scenarios, there are certain error
messages we want to filter in an install scenario. Do that here.
"""
if (
to_text("no package matched") in to_text(error) or
to_text("No match for argument:") in to_text(error)
):
return "No package {0} available.".format(spec)
return error
def _sanitize_dnf_error_msg_remove(self, spec, error):
"""
For unhandled dnf.exceptions.Error scenarios, there are certain error
messages we want to ignore in a removal scenario as known benign
failures. Do that here.
"""
if (
'no package matched' in to_native(error) or
'No match for argument:' in to_native(error)
):
return (False, "{0} is not installed".format(spec))
# Return value is tuple of:
# ("Is this actually a failure?", "Error Message")
return (True, error)
def _package_dict(self, package):
"""Return a dictionary of information for the package."""
# NOTE: This no longer contains the 'dnfstate' field because it is
# already known based on the query type.
result = {
'name': package.name,
'arch': package.arch,
'epoch': str(package.epoch),
'release': package.release,
'version': package.version,
'repo': package.repoid}
# envra format for alignment with the yum module
result['envra'] = '{epoch}:{name}-{version}-{release}.{arch}'.format(**result)
# keep nevra key for backwards compat as it was previously
# defined with a value in envra format
result['nevra'] = result['envra']
if package.installtime == 0:
result['yumstate'] = 'available'
else:
result['yumstate'] = 'installed'
return result
def _split_package_arch(self, packagename):
# This list was auto generated on a Fedora 28 system with the following one-liner
# printf '[ '; for arch in $(ls /usr/lib/rpm/platform); do printf '"%s", ' ${arch%-linux}; done; printf ']\n'
redhat_rpm_arches = [
"aarch64", "alphaev56", "alphaev5", "alphaev67", "alphaev6", "alpha",
"alphapca56", "amd64", "armv3l", "armv4b", "armv4l", "armv5tejl", "armv5tel",
"armv5tl", "armv6hl", "armv6l", "armv7hl", "armv7hnl", "armv7l", "athlon",
"geode", "i386", "i486", "i586", "i686", "ia32e", "ia64", "m68k", "mips64el",
"mips64", "mips64r6el", "mips64r6", "mipsel", "mips", "mipsr6el", "mipsr6",
"noarch", "pentium3", "pentium4", "ppc32dy4", "ppc64iseries", "ppc64le", "ppc64",
"ppc64p7", "ppc64pseries", "ppc8260", "ppc8560", "ppciseries", "ppc", "ppcpseries",
"riscv64", "s390", "s390x", "sh3", "sh4a", "sh4", "sh", "sparc64", "sparc64v",
"sparc", "sparcv8", "sparcv9", "sparcv9v", "x86_64"
]
name, delimiter, arch = packagename.rpartition('.')
if name and arch and arch in redhat_rpm_arches:
return name, arch
return packagename, None
def _packagename_dict(self, packagename):
"""
Return a dictionary of information for a package name string or None
if the package name doesn't contain at least all NVR elements
"""
if packagename[-4:] == '.rpm':
packagename = packagename[:-4]
rpm_nevr_re = re.compile(r'(\S+)-(?:(\d*):)?(.*)-(~?\w+[\w.+]*)')
try:
arch = None
nevr, arch = self._split_package_arch(packagename)
if arch:
packagename = nevr
rpm_nevr_match = rpm_nevr_re.match(packagename)
if rpm_nevr_match:
                name, epoch, version, release = rpm_nevr_match.groups()
if not version or not version.split('.')[0].isdigit():
return None
else:
return None
except AttributeError as e:
self.module.fail_json(
msg='Error attempting to parse package: %s, %s' % (packagename, to_native(e)),
rc=1,
results=[]
)
if not epoch:
epoch = "0"
if ':' in name:
epoch_name = name.split(":")
epoch = epoch_name[0]
name = ''.join(epoch_name[1:])
result = {
'name': name,
'epoch': epoch,
'release': release,
'version': version,
}
return result
# Original implementation from yum.rpmUtils.miscutils (GPLv2+)
# http://yum.baseurl.org/gitweb?p=yum.git;a=blob;f=rpmUtils/miscutils.py
def _compare_evr(self, e1, v1, r1, e2, v2, r2):
# return 1: a is newer than b
# 0: a and b are the same version
# -1: b is newer than a
if e1 is None:
e1 = '0'
else:
e1 = str(e1)
v1 = str(v1)
r1 = str(r1)
if e2 is None:
e2 = '0'
else:
e2 = str(e2)
v2 = str(v2)
r2 = str(r2)
rc = dnf.rpm.rpm.labelCompare((e1, v1, r1), (e2, v2, r2))
return rc
def _ensure_dnf(self):
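        # Import the dnf bindings lazily. If they are missing under the current
        # interpreter, respawn the module under a well-known system Python that
        # provides them; fail only when no candidate interpreter works.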
locale = get_best_parsable_locale(self.module)
os.environ['LC_ALL'] = os.environ['LC_MESSAGES'] = locale
os.environ['LANGUAGE'] = os.environ['LANG'] = locale
global dnf
try:
import dnf
import dnf.cli
import dnf.const
import dnf.exceptions
import dnf.package
import dnf.subject
import dnf.util
HAS_DNF = True
except ImportError:
HAS_DNF = False
if HAS_DNF:
return
system_interpreters = ['/usr/libexec/platform-python',
'/usr/bin/python3',
'/usr/bin/python2',
'/usr/bin/python']
if not has_respawned():
# probe well-known system Python locations for accessible bindings, favoring py3
interpreter = probe_interpreters_for_module(system_interpreters, 'dnf')
if interpreter:
# respawn under the interpreter where the bindings should be found
respawn_module(interpreter)
# end of the line for this module, the process will exit here once the respawned module completes
# done all we can do, something is just broken (auto-install isn't useful anymore with respawn, so it was removed)
self.module.fail_json(
msg="Could not import the dnf python module using {0} ({1}). "
"Please install `python3-dnf` or `python2-dnf` package or ensure you have specified the "
"correct ansible_python_interpreter. (attempted {2})"
.format(sys.executable, sys.version.replace('\n', ''), system_interpreters),
results=[]
)
def _configure_base(self, base, conf_file, disable_gpg_check, installroot='/', sslverify=True):
"""Configure the dnf Base object."""
conf = base.conf
# Change the configuration file path if provided, this must be done before conf.read() is called
if conf_file:
# Fail if we can't read the configuration file.
if not os.access(conf_file, os.R_OK):
self.module.fail_json(
msg="cannot read configuration file", conf_file=conf_file,
results=[],
)
else:
conf.config_file_path = conf_file
# Read the configuration file
conf.read()
# Turn off debug messages in the output
conf.debuglevel = 0
# Set whether to check gpg signatures
conf.gpgcheck = not disable_gpg_check
conf.localpkg_gpgcheck = not disable_gpg_check
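        # NOTE: conf.gpgcheck/localpkg_gpgcheck govern package signature checks.
        # Verification of repository *metadata* (repomd.xml) is controlled by
        # the per-repo repo_gpgcheck setting instead, which is presumably why
        # metadata signature errors can still surface with disable_gpg_check.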
# Don't prompt for user confirmations
conf.assumeyes = True
# Set certificate validation
conf.sslverify = sslverify
# Set installroot
conf.installroot = installroot
# Load substitutions from the filesystem
conf.substitutions.update_from_etc(installroot)
# Handle different DNF versions immutable mutable datatypes and
# dnf v1/v2/v3
#
# In DNF < 3.0 are lists, and modifying them works
# In DNF >= 3.0 < 3.6 are lists, but modifying them doesn't work
# In DNF >= 3.6 have been turned into tuples, to communicate that modifying them doesn't work
#
# https://www.happyassassin.net/2018/06/27/adams-debugging-adventures-the-immutable-mutable-object/
#
# Set excludes
if self.exclude:
_excludes = list(conf.exclude)
_excludes.extend(self.exclude)
conf.exclude = _excludes
# Set disable_excludes
if self.disable_excludes:
_disable_excludes = list(conf.disable_excludes)
if self.disable_excludes not in _disable_excludes:
_disable_excludes.append(self.disable_excludes)
conf.disable_excludes = _disable_excludes
# Set releasever
if self.releasever is not None:
conf.substitutions['releasever'] = self.releasever
if conf.substitutions.get('releasever') is None:
self.module.warn(
'Unable to detect release version (use "releasever" option to specify release version)'
)
# values of conf.substitutions are expected to be strings
# setting this to an empty string instead of None appears to mimic the DNF CLI behavior
conf.substitutions['releasever'] = ''
# Set skip_broken (in dnf this is strict=0)
if self.skip_broken:
conf.strict = 0
# Set best
if self.nobest:
conf.best = 0
if self.download_only:
conf.downloadonly = True
if self.download_dir:
conf.destdir = self.download_dir
if self.cacheonly:
conf.cacheonly = True
# Default in dnf upstream is true
conf.clean_requirements_on_remove = self.autoremove
# Default in dnf (and module default) is True
conf.install_weak_deps = self.install_weak_deps
def _specify_repositories(self, base, disablerepo, enablerepo):
"""Enable and disable repositories matching the provided patterns."""
base.read_all_repos()
repos = base.repos
# Disable repositories
for repo_pattern in disablerepo:
if repo_pattern:
for repo in repos.get_matching(repo_pattern):
repo.disable()
# Enable repositories
for repo_pattern in enablerepo:
if repo_pattern:
for repo in repos.get_matching(repo_pattern):
repo.enable()
def _base(self, conf_file, disable_gpg_check, disablerepo, enablerepo, installroot, sslverify):
"""Return a fully configured dnf Base object."""
base = dnf.Base()
self._configure_base(base, conf_file, disable_gpg_check, installroot, sslverify)
try:
# this method has been supported in dnf-4.2.17-6 or later
# https://bugzilla.redhat.com/show_bug.cgi?id=1788212
base.setup_loggers()
except AttributeError:
pass
try:
base.init_plugins(set(self.disable_plugin), set(self.enable_plugin))
base.pre_configure_plugins()
except AttributeError:
pass # older versions of dnf didn't require this and don't have these methods
self._specify_repositories(base, disablerepo, enablerepo)
try:
base.configure_plugins()
except AttributeError:
pass # older versions of dnf didn't require this and don't have these methods
try:
if self.update_cache:
try:
base.update_cache()
except dnf.exceptions.RepoError as e:
self.module.fail_json(
msg="{0}".format(to_text(e)),
results=[],
rc=1
)
base.fill_sack(load_system_repo='auto')
except dnf.exceptions.RepoError as e:
self.module.fail_json(
msg="{0}".format(to_text(e)),
results=[],
rc=1
)
add_security_filters = getattr(base, "add_security_filters", None)
if callable(add_security_filters):
filters = {}
if self.bugfix:
filters.setdefault('types', []).append('bugfix')
if self.security:
filters.setdefault('types', []).append('security')
if filters:
add_security_filters('eq', **filters)
else:
filters = []
if self.bugfix:
key = {'advisory_type__eq': 'bugfix'}
filters.append(base.sack.query().upgrades().filter(**key))
if self.security:
key = {'advisory_type__eq': 'security'}
filters.append(base.sack.query().upgrades().filter(**key))
if filters:
base._update_security_filters = filters
return base
def list_items(self, command):
"""List package info based on the command."""
# Rename updates to upgrades
if command == 'updates':
command = 'upgrades'
# Return the corresponding packages
if command in ['installed', 'upgrades', 'available']:
results = [
self._package_dict(package)
for package in getattr(self.base.sack.query(), command)()]
# Return the enabled repository ids
elif command in ['repos', 'repositories']:
results = [
{'repoid': repo.id, 'state': 'enabled'}
for repo in self.base.repos.iter_enabled()]
# Return any matching packages
else:
packages = dnf.subject.Subject(command).get_best_query(self.base.sack)
results = [self._package_dict(package) for package in packages]
self.module.exit_json(msg="", results=results)
def _is_installed(self, pkg):
installed = self.base.sack.query().installed()
package_spec = {}
name, arch = self._split_package_arch(pkg)
if arch:
package_spec['arch'] = arch
package_details = self._packagename_dict(pkg)
if package_details:
package_details['epoch'] = int(package_details['epoch'])
package_spec.update(package_details)
else:
package_spec['name'] = name
return bool(installed.filter(**package_spec))
def _is_newer_version_installed(self, pkg_name):
candidate_pkg = self._packagename_dict(pkg_name)
if not candidate_pkg:
# The user didn't provide a versioned rpm, so version checking is
# not required
return False
installed = self.base.sack.query().installed()
installed_pkg = installed.filter(name=candidate_pkg['name']).run()
if installed_pkg:
installed_pkg = installed_pkg[0]
# this looks weird but one is a dict and the other is a dnf.Package
evr_cmp = self._compare_evr(
installed_pkg.epoch, installed_pkg.version, installed_pkg.release,
candidate_pkg['epoch'], candidate_pkg['version'], candidate_pkg['release'],
)
return evr_cmp == 1
else:
return False
def _mark_package_install(self, pkg_spec, upgrade=False):
"""Mark the package for install."""
is_newer_version_installed = self._is_newer_version_installed(pkg_spec)
is_installed = self._is_installed(pkg_spec)
try:
if is_newer_version_installed:
if self.allow_downgrade:
# dnf only does allow_downgrade, we have to handle this ourselves
# because it allows a possibility for non-idempotent transactions
# on a system's package set (pending the yum repo has many old
# NVRs indexed)
if upgrade:
if is_installed: # Case 1
# TODO: Is this case reachable?
#
# _is_installed() demands a name (*not* NVR) or else is always False
# (wildcards are treated literally).
#
# Meanwhile, _is_newer_version_installed() demands something versioned
# or else is always false.
#
# I fail to see how they can both be true at the same time for any
# given pkg_spec. -re
self.base.upgrade(pkg_spec)
else: # Case 2
self.base.install(pkg_spec, strict=self.base.conf.strict)
else: # Case 3
self.base.install(pkg_spec, strict=self.base.conf.strict)
else: # Case 4, Nothing to do, report back
pass
elif is_installed: # A potentially older (or same) version is installed
if upgrade: # Case 5
self.base.upgrade(pkg_spec)
else: # Case 6, Nothing to do, report back
pass
else: # Case 7, The package is not installed, simply install it
self.base.install(pkg_spec, strict=self.base.conf.strict)
return {'failed': False, 'msg': '', 'failure': '', 'rc': 0}
except dnf.exceptions.MarkingError as e:
return {
'failed': True,
'msg': "No package {0} available.".format(pkg_spec),
'failure': " ".join((pkg_spec, to_native(e))),
'rc': 1,
"results": []
}
except dnf.exceptions.DepsolveError as e:
return {
'failed': True,
'msg': "Depsolve Error occurred for package {0}.".format(pkg_spec),
'failure': " ".join((pkg_spec, to_native(e))),
'rc': 1,
"results": []
}
except dnf.exceptions.Error as e:
if to_text("already installed") in to_text(e):
return {'failed': False, 'msg': '', 'failure': ''}
else:
return {
'failed': True,
'msg': "Unknown Error occurred for package {0}.".format(pkg_spec),
'failure': " ".join((pkg_spec, to_native(e))),
'rc': 1,
"results": []
}
def _whatprovides(self, filepath):
self.base.read_all_repos()
available = self.base.sack.query().available()
# Search in file
files_filter = available.filter(file=filepath)
# And Search in provides
pkg_spec = files_filter.union(available.filter(provides=filepath)).run()
if pkg_spec:
return pkg_spec[0].name
def _parse_spec_group_file(self):
pkg_specs, grp_specs, module_specs, filenames = [], [], [], []
already_loaded_comps = False # Only load this if necessary, it's slow
for name in self.names:
if '://' in name:
name = fetch_file(self.module, name)
filenames.append(name)
elif name.endswith(".rpm"):
filenames.append(name)
elif name.startswith('/'):
# like "dnf install /usr/bin/vi"
pkg_spec = self._whatprovides(name)
if pkg_spec:
pkg_specs.append(pkg_spec)
continue
elif name.startswith("@") or ('/' in name):
if not already_loaded_comps:
self.base.read_comps()
already_loaded_comps = True
grp_env_mdl_candidate = name[1:].strip()
if self.with_modules:
mdl = self.module_base._get_modules(grp_env_mdl_candidate)
if mdl[0]:
module_specs.append(grp_env_mdl_candidate)
else:
grp_specs.append(grp_env_mdl_candidate)
else:
grp_specs.append(grp_env_mdl_candidate)
else:
pkg_specs.append(name)
return pkg_specs, grp_specs, module_specs, filenames
def _update_only(self, pkgs):
not_installed = []
for pkg in pkgs:
if self._is_installed(
self._package_dict(pkg)["nevra"] if isinstance(pkg, dnf.package.Package) else pkg
):
try:
if isinstance(pkg, dnf.package.Package):
self.base.package_upgrade(pkg)
else:
self.base.upgrade(pkg)
except Exception as e:
self.module.fail_json(
msg="Error occurred attempting update_only operation: {0}".format(to_native(e)),
results=[],
rc=1,
)
else:
not_installed.append(pkg)
return not_installed
def _install_remote_rpms(self, filenames):
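        # Register local/remote rpm files with the sack. dnf >= 2 exposes the
        # batch add_remote_rpms() API; older releases only offer the singular
        # add_remote_rpm(). The marked packages are then installed, or upgraded
        # when update_only is set.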
if int(dnf.__version__.split(".")[0]) >= 2:
pkgs = list(sorted(self.base.add_remote_rpms(list(filenames)), reverse=True))
else:
pkgs = []
try:
for filename in filenames:
pkgs.append(self.base.add_remote_rpm(filename))
except IOError as e:
if to_text("Can not load RPM file") in to_text(e):
self.module.fail_json(
msg="Error occurred attempting remote rpm install of package: {0}. {1}".format(filename, to_native(e)),
results=[],
rc=1,
)
if self.update_only:
self._update_only(pkgs)
else:
for pkg in pkgs:
try:
if self._is_newer_version_installed(self._package_dict(pkg)['nevra']):
if self.allow_downgrade:
self.base.package_install(pkg, strict=self.base.conf.strict)
else:
self.base.package_install(pkg, strict=self.base.conf.strict)
except Exception as e:
self.module.fail_json(
msg="Error occurred attempting remote rpm operation: {0}".format(to_native(e)),
results=[],
rc=1,
)
def _is_module_installed(self, module_spec):
if self.with_modules:
module_spec = module_spec.strip()
module_list, nsv = self.module_base._get_modules(module_spec)
enabled_streams = self.base._moduleContainer.getEnabledStream(nsv.name)
if enabled_streams:
if nsv.stream:
if nsv.stream in enabled_streams:
return True # The provided stream was found
else:
return False # The provided stream was not found
else:
return True # No stream provided, but module found
return False # seems like a sane default
def ensure(self):
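        # Core state machine: treat name='*' with state=latest as a full
        # upgrade; otherwise split the requested names into packages, groups,
        # environments, modules and rpm files, apply them according to
        # self.state, then resolve and run a single transaction at the end.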
response = {
'msg': "",
'changed': False,
'results': [],
'rc': 0
}
# Accumulate failures. Package management modules install what they can
# and fail with a message about what they can't.
failure_response = {
'msg': "",
'failures': [],
'results': [],
'rc': 1
}
# Autoremove is called alone
# Jump to remove path where base.autoremove() is run
if not self.names and self.autoremove:
self.names = []
self.state = 'absent'
if self.names == ['*'] and self.state == 'latest':
try:
self.base.upgrade_all()
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occurred attempting to upgrade all packages"
self.module.fail_json(**failure_response)
else:
pkg_specs, group_specs, module_specs, filenames = self._parse_spec_group_file()
pkg_specs = [p.strip() for p in pkg_specs]
filenames = [f.strip() for f in filenames]
groups = []
environments = []
for group_spec in (g.strip() for g in group_specs):
group = self.base.comps.group_by_pattern(group_spec)
if group:
groups.append(group.id)
else:
environment = self.base.comps.environment_by_pattern(group_spec)
if environment:
environments.append(environment.id)
else:
self.module.fail_json(
msg="No group {0} available.".format(group_spec),
results=[],
)
if self.state in ['installed', 'present']:
# Install files.
self._install_remote_rpms(filenames)
for filename in filenames:
response['results'].append("Installed {0}".format(filename))
# Install modules
if module_specs and self.with_modules:
for module in module_specs:
try:
if not self._is_module_installed(module):
response['results'].append("Module {0} installed.".format(module))
self.module_base.install([module])
self.module_base.enable([module])
except dnf.exceptions.MarkingErrors as e:
failure_response['failures'].append(' '.join((module, to_native(e))))
# Install groups.
for group in groups:
try:
group_pkg_count_installed = self.base.group_install(group, dnf.const.GROUP_PACKAGE_TYPES)
if group_pkg_count_installed == 0:
response['results'].append("Group {0} already installed.".format(group))
else:
response['results'].append("Group {0} installed.".format(group))
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occurred attempting to install group: {0}".format(group)
self.module.fail_json(**failure_response)
except dnf.exceptions.Error as e:
# In dnf 2.0 if all the mandatory packages in a group do
# not install, an error is raised. We want to capture
# this but still install as much as possible.
failure_response['failures'].append(" ".join((group, to_native(e))))
for environment in environments:
try:
self.base.environment_install(environment, dnf.const.GROUP_PACKAGE_TYPES)
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occurred attempting to install environment: {0}".format(environment)
self.module.fail_json(**failure_response)
except dnf.exceptions.Error as e:
failure_response['failures'].append(" ".join((environment, to_native(e))))
if module_specs and not self.with_modules:
# This means that the group or env wasn't found in comps
self.module.fail_json(
msg="No group {0} available.".format(module_specs[0]),
results=[],
)
# Install packages.
if self.update_only:
not_installed = self._update_only(pkg_specs)
for spec in not_installed:
response['results'].append("Packages providing %s not installed due to update_only specified" % spec)
else:
for pkg_spec in pkg_specs:
install_result = self._mark_package_install(pkg_spec)
if install_result['failed']:
if install_result['msg']:
failure_response['msg'] += install_result['msg']
failure_response['failures'].append(self._sanitize_dnf_error_msg_install(pkg_spec, install_result['failure']))
else:
if install_result['msg']:
response['results'].append(install_result['msg'])
elif self.state == 'latest':
# "latest" is same as "installed" for filenames.
self._install_remote_rpms(filenames)
for filename in filenames:
response['results'].append("Installed {0}".format(filename))
# Upgrade modules
if module_specs and self.with_modules:
for module in module_specs:
try:
if self._is_module_installed(module):
response['results'].append("Module {0} upgraded.".format(module))
self.module_base.upgrade([module])
except dnf.exceptions.MarkingErrors as e:
failure_response['failures'].append(' '.join((module, to_native(e))))
for group in groups:
try:
try:
self.base.group_upgrade(group)
response['results'].append("Group {0} upgraded.".format(group))
except dnf.exceptions.CompsError:
if not self.update_only:
# If not already installed, try to install.
group_pkg_count_installed = self.base.group_install(group, dnf.const.GROUP_PACKAGE_TYPES)
if group_pkg_count_installed == 0:
response['results'].append("Group {0} already installed.".format(group))
else:
response['results'].append("Group {0} installed.".format(group))
except dnf.exceptions.Error as e:
failure_response['failures'].append(" ".join((group, to_native(e))))
for environment in environments:
try:
try:
self.base.environment_upgrade(environment)
except dnf.exceptions.CompsError:
# If not already installed, try to install.
self.base.environment_install(environment, dnf.const.GROUP_PACKAGE_TYPES)
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occurred attempting to install environment: {0}".format(environment)
except dnf.exceptions.Error as e:
failure_response['failures'].append(" ".join((environment, to_native(e))))
if self.update_only:
not_installed = self._update_only(pkg_specs)
for spec in not_installed:
response['results'].append("Packages providing %s not installed due to update_only specified" % spec)
else:
for pkg_spec in pkg_specs:
# Previously we forced base.conf.best=True here.
# However in 2.11+ there is a self.nobest option, so defer to that.
# Note, however, that just because nobest isn't set, doesn't mean that
# base.conf.best is actually true. We only force it false in
# _configure_base(), we never set it to true, and it can default to false.
# Thus, we still need to explicitly set it here.
self.base.conf.best = not self.nobest
install_result = self._mark_package_install(pkg_spec, upgrade=True)
if install_result['failed']:
if install_result['msg']:
failure_response['msg'] += install_result['msg']
failure_response['failures'].append(self._sanitize_dnf_error_msg_install(pkg_spec, install_result['failure']))
else:
if install_result['msg']:
response['results'].append(install_result['msg'])
else:
# state == absent
if filenames:
self.module.fail_json(
msg="Cannot remove paths -- please specify package name.",
results=[],
)
# Remove modules
if module_specs and self.with_modules:
for module in module_specs:
try:
if self._is_module_installed(module):
response['results'].append("Module {0} removed.".format(module))
self.module_base.remove([module])
self.module_base.disable([module])
self.module_base.reset([module])
except dnf.exceptions.MarkingErrors as e:
failure_response['failures'].append(' '.join((module, to_native(e))))
for group in groups:
try:
self.base.group_remove(group)
except dnf.exceptions.CompsError:
# Group is already uninstalled.
pass
except AttributeError:
# Group either isn't installed or wasn't marked installed at install time
# because of DNF bug
#
# This is necessary until the upstream dnf API bug is fixed where installing
# a group via the dnf API doesn't actually mark the group as installed
# https://bugzilla.redhat.com/show_bug.cgi?id=1620324
pass
for environment in environments:
try:
self.base.environment_remove(environment)
except dnf.exceptions.CompsError:
# Environment is already uninstalled.
pass
installed = self.base.sack.query().installed()
for pkg_spec in pkg_specs:
# short-circuit installed check for wildcard matching
if '*' in pkg_spec:
try:
self.base.remove(pkg_spec)
except dnf.exceptions.MarkingError as e:
is_failure, handled_remove_error = self._sanitize_dnf_error_msg_remove(pkg_spec, to_native(e))
if is_failure:
failure_response['failures'].append('{0} - {1}'.format(pkg_spec, to_native(e)))
else:
response['results'].append(handled_remove_error)
continue
installed_pkg = dnf.subject.Subject(pkg_spec).get_best_query(
sack=self.base.sack).installed().run()
for pkg in installed_pkg:
self.base.remove(str(pkg))
# Like the dnf CLI we want to allow recursive removal of dependent
# packages
self.allowerasing = True
if self.autoremove:
self.base.autoremove()
try:
# NOTE for people who go down the rabbit hole of figuring out why
# resolve() throws DepsolveError here on dep conflict, but not when
# called from the CLI: It's controlled by conf.best. When best is
# set, Hawkey will fail the goal, and resolve() in dnf.base.Base
# will throw. Otherwise if it's not set, the update (install) will
# be (almost silently) removed from the goal, and Hawkey will report
# success. Note that in this case, similar to the CLI, skip_broken
# does nothing to help here, so we don't take it into account at
# all.
if not self.base.resolve(allow_erasing=self.allowerasing):
if failure_response['failures']:
failure_response['msg'] = 'Failed to install some of the specified packages'
self.module.fail_json(**failure_response)
response['msg'] = "Nothing to do"
self.module.exit_json(**response)
else:
response['changed'] = True
# If packages got installed/removed, add them to the results.
# We do this early so we can use it for both check_mode and not.
if self.download_only:
install_action = 'Downloaded'
else:
install_action = 'Installed'
for package in self.base.transaction.install_set:
response['results'].append("{0}: {1}".format(install_action, package))
for package in self.base.transaction.remove_set:
response['results'].append("Removed: {0}".format(package))
if failure_response['failures']:
failure_response['msg'] = 'Failed to install some of the specified packages'
self.module.fail_json(**failure_response)
if self.module.check_mode:
response['msg'] = "Check mode: No changes made, but would have if not in check mode"
self.module.exit_json(**response)
try:
if self.download_only and self.download_dir and self.base.conf.destdir:
dnf.util.ensure_dir(self.base.conf.destdir)
self.base.repos.all().pkgdir = self.base.conf.destdir
self.base.download_packages(self.base.transaction.install_set)
except dnf.exceptions.DownloadError as e:
self.module.fail_json(
msg="Failed to download packages: {0}".format(to_text(e)),
results=[],
)
# Validate GPG. This is NOT done in dnf.Base (it's done in the
# upstream CLI subclass of dnf.Base)
if not self.disable_gpg_check:
for package in self.base.transaction.install_set:
fail = False
gpgres, gpgerr = self.base._sig_check_pkg(package)
if gpgres == 0: # validated successfully
continue
elif gpgres == 1: # validation failed, install cert?
try:
self.base._get_key_for_package(package)
except dnf.exceptions.Error as e:
fail = True
else: # fatal error
fail = True
if fail:
msg = 'Failed to validate GPG signature for {0}: {1}'.format(package, gpgerr)
self.module.fail_json(msg)
if self.download_only:
# No further work left to do, and the results were already updated above.
# Just return them.
self.module.exit_json(**response)
else:
tid = self.base.do_transaction()
if tid is not None:
transaction = self.base.history.old([tid])[0]
if transaction.return_code:
failure_response['failures'].append(transaction.output())
if failure_response['failures']:
failure_response['msg'] = 'Failed to install some of the specified packages'
self.module.fail_json(**failure_response)
self.module.exit_json(**response)
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occurred: {0}".format(to_native(e))
self.module.fail_json(**failure_response)
except dnf.exceptions.Error as e:
if to_text("already installed") in to_text(e):
response['changed'] = False
response['results'].append("Package already installed: {0}".format(to_native(e)))
self.module.exit_json(**response)
else:
failure_response['msg'] = "Unknown Error occurred: {0}".format(to_native(e))
self.module.fail_json(**failure_response)
def run(self):
"""The main function."""
# Check if autoremove is called correctly
if self.autoremove:
if LooseVersion(dnf.__version__) < LooseVersion('2.0.1'):
self.module.fail_json(
msg="Autoremove requires dnf>=2.0.1. Current dnf version is %s" % dnf.__version__,
results=[],
)
# Check if download_dir is called correctly
if self.download_dir:
if LooseVersion(dnf.__version__) < LooseVersion('2.6.2'):
self.module.fail_json(
msg="download_dir requires dnf>=2.6.2. Current dnf version is %s" % dnf.__version__,
results=[],
)
if self.update_cache and not self.names and not self.list:
self.base = self._base(
self.conf_file, self.disable_gpg_check, self.disablerepo,
self.enablerepo, self.installroot, self.sslverify
)
self.module.exit_json(
msg="Cache updated",
changed=False,
results=[],
rc=0
)
# Set state as installed by default
# This is not set in AnsibleModule() because the following shouldn't happen
# - dnf: autoremove=yes state=installed
if self.state is None:
self.state = 'installed'
if self.list:
self.base = self._base(
self.conf_file, self.disable_gpg_check, self.disablerepo,
self.enablerepo, self.installroot, self.sslverify
)
self.list_items(self.list)
else:
# Note: constructing the base takes a long time, so we want to check for
# obvious failures (like not running as root) before doing it.
if not self.download_only and not dnf.util.am_i_root():
self.module.fail_json(
msg="This command has to be run under the root user.",
results=[],
)
self.base = self._base(
self.conf_file, self.disable_gpg_check, self.disablerepo,
self.enablerepo, self.installroot, self.sslverify
)
if self.with_modules:
self.module_base = dnf.module.module_base.ModuleBase(self.base)
self.ensure()
def main():
# state=installed name=pkgspec
# state=removed name=pkgspec
# state=latest name=pkgspec
#
# informational commands:
# list=installed
# list=updates
# list=available
# list=repos
# list=pkgspec
# Extend yumdnf_argument_spec with dnf-specific features that will never be
# backported to yum because yum is now in "maintenance mode" upstream
yumdnf_argument_spec['argument_spec']['allowerasing'] = dict(default=False, type='bool')
yumdnf_argument_spec['argument_spec']['nobest'] = dict(default=False, type='bool')
yumdnf_argument_spec['argument_spec']['use_backend'] = dict(default='auto', choices=['auto', 'dnf4', 'dnf5'])
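# For illustration only: a playbook task exercising the dnf-specific options
# registered above might look like this (values are hypothetical):
#   - ansible.builtin.dnf:
#       name: nginx
#       state: latest
#       nobest: true
#       allowerasing: true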
module = AnsibleModule(
**yumdnf_argument_spec
)
module_implementation = DnfModule(module)
try:
module_implementation.run()
except dnf.exceptions.RepoError as de:
module.fail_json(
msg="Failed to synchronize repodata: {0}".format(to_native(de)),
rc=1,
results=[],
changed=False
)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,474 |
Role parameter: change in variable precedence
|
### Summary
Hi;
Since Ansible 8.0.0 (I think; I didn't check every ansible-core version), it seems that role parameters no longer take precedence over already defined facts.
According to the [documentation](https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_variables.html#understanding-variable-precedence), role (and include_role) params (line 20) should still take precedence over set_facts / registered vars (line 19). I also checked the [Ansible 8.x changelogs](https://github.com/ansible-community/ansible-build-data/blob/main/8/CHANGELOG-v8.rst) but didn't see anything about that, except maybe [this bug fix](https://github.com/ansible-community/ansible-build-data/blob/a8cf3895cd98246316ab6172ec684935e0013b45/8/CHANGELOG-v8.rst#L3397); I'm not sure what `Also adjusted the precedence to act the same as inline params` means or what the expected impacts are. But if this is an intended new behavior, I feel it should be detailed under major changes, not in the bug fixes section.
### Issue Type
Bug Report
### Component Name
role
### Ansible Version
```console
ansible [core 2.15.2]
config file = /home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg
configured module search path = ['/home/courgette/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/courgette/.virtualenvs/deploy/lib/python3.11/site-packages/ansible
ansible collection location = /home/courgette/.ansible/collections:/usr/share/ansible/collections
executable location = /home/courgette/.virtualenvs/deploy/bin/ansible
python version = 3.11.3 (main, Jun 5 2023, 09:32:32) [GCC 13.1.1 20230429] (/home/courgette/.virtualenvs/deploy/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
CONFIG_FILE() = /home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg
DEFAULT_FILTER_PLUGIN_PATH(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = ['/home/courgette/Documents/Bimdata/dev/deployment/ansible_plugins/filter_plugins']
DEFAULT_TIMEOUT(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = 60
DEFAULT_VAULT_PASSWORD_FILE(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = /home/courgette/Documents/Bimdata/dev/deployment/.get-vault-pass.sh
EDITOR(env: EDITOR) = vim
PAGER(env: PAGER) = less
CONNECTION:
==========
paramiko_ssh:
____________
timeout(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = 60
ssh:
___
pipelining(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = True
timeout(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = 60
```
### OS / Environment
Tested on Archlinux and also in python:3.11 docker container.
### Steps to Reproduce
Create `roles/test_set/tasks/main.yml` with:
```
---
- name: "Set test fact."
ansible.builtin.set_fact:
test_set_one: "set by test_set"
```
Create `roles/test_debug/tasks/main.yml` with:
```
---
- name: "Debug the variable."
ansible.builtin.debug:
var: test_set_one
```
Create `test.yml` with:
```
---
- name: Test
hosts: localhost
gather_facts: false
become: false
roles:
- test_set
- test_debug
- role: test_debug
test_set_one: "Set as role parameter"
```
Run: `ansible-playbook test.yml`.
### Expected Results
```
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Test] *********************************************************************************************************************************************************************************************************
TASK [test_set : Set test fact.] ************************************************************************************************************************************************************************************
ok: [localhost]
TASK [test_debug : Debug the variable.] *****************************************************************************************************************************************************************************
ok: [localhost] => {
"test_set_one": "set by test_set"
}
TASK [test_debug : Debug the variable.] *****************************************************************************************************************************************************************************
ok: [localhost] => {
"test_set_one": "Set as role parameter"
}
PLAY RECAP **********************************************************************************************************************************************************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
This is the result with Ansible 7.x; here is the corresponding version used to obtain it:
```
ansible [core 2.14.8]
config file = /home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg
configured module search path = ['/home/courgette/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/courgette/.virtualenvs/deploy/lib/python3.11/site-packages/ansible
ansible collection location = /home/courgette/.ansible/collections:/usr/share/ansible/collections
executable location = /home/courgette/.virtualenvs/deploy/bin/ansible
python version = 3.11.3 (main, Jun 5 2023, 09:32:32) [GCC 13.1.1 20230429] (/home/courgette/.virtualenvs/deploy/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Actual Results
```console
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Test] *********************************************************************************************************************************************************************************************************
TASK [test_set : Set test fact.] ************************************************************************************************************************************************************************************
ok: [localhost]
TASK [test_debug : Debug the variable.] *****************************************************************************************************************************************************************************
ok: [localhost] => {
"test_set_one": "set by test_set"
}
TASK [test_debug : Debug the variable.] *****************************************************************************************************************************************************************************
ok: [localhost] => {
"test_set_one": "set by test_set"
}
PLAY RECAP **********************************************************************************************************************************************************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81474
|
https://github.com/ansible/ansible/pull/82106
|
5ac62473b09405786ca08e00af4da6d5b3a8103d
|
20a54eb236a4f77402daa0d7cdaede358587c821
| 2023-08-09T09:06:12Z |
python
| 2023-11-06T14:18:35Z |
changelogs/fragments/restore_role_param_precedence.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,474 |
Role parameter: change in variable precedence
|
### Summary
Hi;
Since Ansible 8.0.0 (I think; I didn't check every ansible-core version), it seems that role parameters no longer take precedence over already defined facts.
According to the [documentation](https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_variables.html#understanding-variable-precedence), role (and include_role) params (line 20) should still take precedence over set_facts / registered vars (line 19). I also checked the [Ansible 8.x changelogs](https://github.com/ansible-community/ansible-build-data/blob/main/8/CHANGELOG-v8.rst) but didn't see anything about that, except maybe [this bug fix](https://github.com/ansible-community/ansible-build-data/blob/a8cf3895cd98246316ab6172ec684935e0013b45/8/CHANGELOG-v8.rst#L3397); I'm not sure what `Also adjusted the precedence to act the same as inline params` means or what the expected impacts are. But if this is an intended new behavior, I feel it should be detailed under major changes, not in the bug fixes section.
### Issue Type
Bug Report
### Component Name
role
### Ansible Version
```console
ansible [core 2.15.2]
config file = /home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg
configured module search path = ['/home/courgette/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/courgette/.virtualenvs/deploy/lib/python3.11/site-packages/ansible
ansible collection location = /home/courgette/.ansible/collections:/usr/share/ansible/collections
executable location = /home/courgette/.virtualenvs/deploy/bin/ansible
python version = 3.11.3 (main, Jun 5 2023, 09:32:32) [GCC 13.1.1 20230429] (/home/courgette/.virtualenvs/deploy/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
CONFIG_FILE() = /home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg
DEFAULT_FILTER_PLUGIN_PATH(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = ['/home/courgette/Documents/Bimdata/dev/deployment/ansible_plugins/filter_plugins']
DEFAULT_TIMEOUT(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = 60
DEFAULT_VAULT_PASSWORD_FILE(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = /home/courgette/Documents/Bimdata/dev/deployment/.get-vault-pass.sh
EDITOR(env: EDITOR) = vim
PAGER(env: PAGER) = less
CONNECTION:
==========
paramiko_ssh:
____________
timeout(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = 60
ssh:
___
pipelining(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = True
timeout(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = 60
```
### OS / Environment
Tested on Archlinux and also in python:3.11 docker container.
### Steps to Reproduce
Create `roles/test_set/tasks/main.yml` with:
```
---
- name: "Set test fact."
ansible.builtin.set_fact:
test_set_one: "set by test_set"
```
Create `roles/test_debug/tasks/main.yml` with:
```
---
- name: "Debug the variable."
ansible.builtin.debug:
var: test_set_one
```
Create `test.yml` with:
```
---
- name: Test
hosts: localhost
gather_facts: false
become: false
roles:
- test_set
- test_debug
- role: test_debug
test_set_one: "Set as role parameter"
```
Run: `ansible-playbook test.yml`.
### Expected Results
```
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Test] *********************************************************************************************************************************************************************************************************
TASK [test_set : Set test fact.] ************************************************************************************************************************************************************************************
ok: [localhost]
TASK [test_debug : Debug the variable.] *****************************************************************************************************************************************************************************
ok: [localhost] => {
"test_set_one": "set by test_set"
}
TASK [test_debug : Debug the variable.] *****************************************************************************************************************************************************************************
ok: [localhost] => {
"test_set_one": "Set as role parameter"
}
PLAY RECAP **********************************************************************************************************************************************************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
This is the result with Ansible 7.x; here is the corresponding version used to obtain it:
```
ansible [core 2.14.8]
config file = /home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg
configured module search path = ['/home/courgette/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/courgette/.virtualenvs/deploy/lib/python3.11/site-packages/ansible
ansible collection location = /home/courgette/.ansible/collections:/usr/share/ansible/collections
executable location = /home/courgette/.virtualenvs/deploy/bin/ansible
python version = 3.11.3 (main, Jun 5 2023, 09:32:32) [GCC 13.1.1 20230429] (/home/courgette/.virtualenvs/deploy/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Actual Results
```console
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Test] *********************************************************************************************************************************************************************************************************
TASK [test_set : Set test fact.] ************************************************************************************************************************************************************************************
ok: [localhost]
TASK [test_debug : Debug the variable.] *****************************************************************************************************************************************************************************
ok: [localhost] => {
"test_set_one": "set by test_set"
}
TASK [test_debug : Debug the variable.] *****************************************************************************************************************************************************************************
ok: [localhost] => {
"test_set_one": "set by test_set"
}
PLAY RECAP **********************************************************************************************************************************************************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81474
|
https://github.com/ansible/ansible/pull/82106
|
5ac62473b09405786ca08e00af4da6d5b3a8103d
|
20a54eb236a4f77402daa0d7cdaede358587c821
| 2023-08-09T09:06:12Z |
python
| 2023-11-06T14:18:35Z |
lib/ansible/vars/manager.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import annotations
import os
import sys
from collections import defaultdict
from collections.abc import Mapping, MutableMapping, Sequence
from hashlib import sha1
from jinja2.exceptions import UndefinedError
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleParserError, AnsibleUndefinedVariable, AnsibleFileNotFound, AnsibleAssertionError, AnsibleTemplateError
from ansible.inventory.host import Host
from ansible.inventory.helpers import sort_groups, get_group_vars
from ansible.module_utils.common.text.converters import to_text
from ansible.module_utils.six import text_type, string_types
from ansible.plugins.loader import lookup_loader
from ansible.vars.fact_cache import FactCache
from ansible.template import Templar
from ansible.utils.display import Display
from ansible.utils.listify import listify_lookup_plugin_terms
from ansible.utils.vars import combine_vars, load_extra_vars, load_options_vars
from ansible.utils.unsafe_proxy import wrap_var
from ansible.vars.clean import namespace_facts, clean_facts
from ansible.vars.plugins import get_vars_from_inventory_sources, get_vars_from_path
display = Display()
def preprocess_vars(a):
'''
Ensures that vars contained in the parameter passed in are
returned as a list of dictionaries, so that, for instance,
vars loaded from a file conform to an expected shape.
'''
if a is None:
return None
elif not isinstance(a, list):
data = [a]
else:
data = a
for item in data:
if not isinstance(item, MutableMapping):
raise AnsibleError("variable files must contain either a dictionary of variables, or a list of dictionaries. Got: %s (%s)" % (a, type(a)))
return data
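# Illustrative examples of the normalization performed above:
#   preprocess_vars(None)           -> None
#   preprocess_vars({'a': 1})       -> [{'a': 1}]
#   preprocess_vars([{'a': 1}, {}]) -> [{'a': 1}, {}]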
class VariableManager:
_ALLOWED = frozenset(['plugins_by_group', 'groups_plugins_play', 'groups_plugins_inventory', 'groups_inventory',
'all_plugins_play', 'all_plugins_inventory', 'all_inventory'])
def __init__(self, loader=None, inventory=None, version_info=None):
self._nonpersistent_fact_cache = defaultdict(dict)
self._vars_cache = defaultdict(dict)
self._extra_vars = defaultdict(dict)
self._host_vars_files = defaultdict(dict)
self._group_vars_files = defaultdict(dict)
self._inventory = inventory
self._loader = loader
self._hostvars = None
self._omit_token = '__omit_place_holder__%s' % sha1(os.urandom(64)).hexdigest()
self._options_vars = load_options_vars(version_info)
# If the basedir is specified as the empty string then it results in cwd being used.
# This is not a safe location to load vars from.
basedir = self._options_vars.get('basedir', False)
self.safe_basedir = bool(basedir is False or basedir)
# load extra vars
self._extra_vars = load_extra_vars(loader=self._loader)
# load fact cache
try:
self._fact_cache = FactCache()
except AnsibleError as e:
# bad cache plugin is not fatal error
# fallback to a dict as in memory cache
display.warning(to_text(e))
self._fact_cache = {}
def __getstate__(self):
data = dict(
fact_cache=self._fact_cache,
np_fact_cache=self._nonpersistent_fact_cache,
vars_cache=self._vars_cache,
extra_vars=self._extra_vars,
host_vars_files=self._host_vars_files,
group_vars_files=self._group_vars_files,
omit_token=self._omit_token,
options_vars=self._options_vars,
inventory=self._inventory,
safe_basedir=self.safe_basedir,
)
return data
def __setstate__(self, data):
self._fact_cache = data.get('fact_cache', defaultdict(dict))
self._nonpersistent_fact_cache = data.get('np_fact_cache', defaultdict(dict))
self._vars_cache = data.get('vars_cache', defaultdict(dict))
self._extra_vars = data.get('extra_vars', dict())
self._host_vars_files = data.get('host_vars_files', defaultdict(dict))
self._group_vars_files = data.get('group_vars_files', defaultdict(dict))
self._omit_token = data.get('omit_token', '__omit_place_holder__%s' % sha1(os.urandom(64)).hexdigest())
self._inventory = data.get('inventory', None)
self._options_vars = data.get('options_vars', dict())
self.safe_basedir = data.get('safe_basedir', False)
self._loader = None
self._hostvars = None
@property
def extra_vars(self):
return self._extra_vars
def set_inventory(self, inventory):
self._inventory = inventory
def get_vars(self, play=None, host=None, task=None, include_hostvars=True, include_delegate_to=False, use_cache=True,
_hosts=None, _hosts_all=None, stage='task'):
'''
Returns the variables, with optional "context" given via the parameters
for the play, host, and task (which could possibly result in different
sets of variables being returned due to the additional context).
The order of precedence is:
- play->roles->get_default_vars (if there is a play context)
- group_vars_files[host] (if there is a host context)
- host_vars_files[host] (if there is a host context)
- host->get_vars (if there is a host context)
- fact_cache[host] (if there is a host context)
- play vars (if there is a play context)
- play vars_files (if there's no host context, ignore
file names that cannot be templated)
- task->get_vars (if there is a task context)
- vars_cache[host] (if there is a host context)
- extra vars
``_hosts`` and ``_hosts_all`` should be considered private args, with only internal trusted callers relying
on the functionality they provide. These arguments may be removed at a later date without a deprecation
period and without warning.
'''
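# Note on the precedence list above: each source is merged into all_vars via
# combine_vars(), so sources merged later overwrite earlier ones on
# conflicting keys.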
display.debug("in VariableManager get_vars()")
all_vars = dict()
magic_variables = self._get_magic_variables(
play=play,
host=host,
task=task,
include_hostvars=include_hostvars,
_hosts=_hosts,
_hosts_all=_hosts_all,
)
_vars_sources = {}
def _combine_and_track(data, new_data, source):
'''
Wrapper function to update var sources dict and call combine_vars()
See notes in the VarsWithSources docstring for caveats and limitations of the source tracking
'''
if new_data == {}:
return data
if C.DEFAULT_DEBUG:
# Populate var sources dict
for key in new_data:
_vars_sources[key] = source
return combine_vars(data, new_data)
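# For example (hypothetical values): _combine_and_track({'a': 1}, {'b': 2}, "play vars")
# returns {'a': 1, 'b': 2} and, when DEFAULT_DEBUG is enabled, records
# _vars_sources['b'] = "play vars".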
# default for all cases
basedirs = []
if self.safe_basedir: # avoid adhoc/console loading cwd
basedirs = [self._loader.get_basedir()]
if play:
for role in play.get_roles():
# Merge these when the role is static, or dynamic and already completed for
# this host, and the role is public -- either explicitly via role.public, or
# implicitly when role.public is unset and C.DEFAULT_PRIVATE_ROLE_VARS is off.
role_is_static_or_completed = role.static or role._completed.get(host.name, False)
if role.public and role_is_static_or_completed or \
role.public is None and not C.DEFAULT_PRIVATE_ROLE_VARS and role_is_static_or_completed:
all_vars = _combine_and_track(all_vars, role.get_default_vars(), "role '%s' defaults" % role.name)
if task:
# set basedirs
if C.PLAYBOOK_VARS_ROOT == 'all': # should be default
basedirs = task.get_search_path()
elif C.PLAYBOOK_VARS_ROOT in ('bottom', 'playbook_dir'): # only option in 2.4.0
basedirs = [task.get_search_path()[0]]
elif C.PLAYBOOK_VARS_ROOT != 'top':
# preserves default basedirs, only option pre 2.3
raise AnsibleError('Unknown playbook vars logic: %s' % C.PLAYBOOK_VARS_ROOT)
# if we have a task in this context, and that task has a role, make
# sure it sees its defaults above any other roles, as we previously
# (v1) made sure each task had a copy of its roles default vars
# TODO: investigate why we need play or include_role check?
if task._role is not None and (play or task.action in C._ACTION_INCLUDE_ROLE):
all_vars = _combine_and_track(all_vars, task._role.get_default_vars(dep_chain=task.get_dep_chain()),
"role '%s' defaults" % task._role.name)
if host:
# THE 'all' group and the rest of groups for a host, used below
all_group = self._inventory.groups.get('all')
host_groups = sort_groups([g for g in host.get_groups() if g.name not in ['all']])
def _get_plugin_vars(plugin, path, entities):
data = {}
try:
data = plugin.get_vars(self._loader, path, entities)
except AttributeError:
try:
for entity in entities:
if isinstance(entity, Host):
data |= plugin.get_host_vars(entity.name)
else:
data |= plugin.get_group_vars(entity.name)
except AttributeError:
if hasattr(plugin, 'run'):
raise AnsibleError("Cannot use v1 type vars plugin %s from %s" % (plugin._load_name, plugin._original_path))
else:
raise AnsibleError("Invalid vars plugin %s from %s" % (plugin._load_name, plugin._original_path))
return data
# internal functions that actually do the work
def _plugins_inventory(entities):
''' merges all entities by inventory source '''
return get_vars_from_inventory_sources(self._loader, self._inventory._sources, entities, stage)
def _plugins_play(entities):
''' merges all entities adjacent to play '''
data = {}
for path in basedirs:
data = _combine_and_track(data, get_vars_from_path(self._loader, path, entities, stage), "path '%s'" % path)
return data
# configurable functions that are sortable via config, remember to add to _ALLOWED if expanding this list
def all_inventory():
return all_group.get_vars()
def all_plugins_inventory():
return _plugins_inventory([all_group])
def all_plugins_play():
return _plugins_play([all_group])
def groups_inventory():
''' gets group vars from inventory '''
return get_group_vars(host_groups)
def groups_plugins_inventory():
''' gets plugin sources from inventory for groups '''
return _plugins_inventory(host_groups)
def groups_plugins_play():
''' gets plugin sources from play for groups '''
return _plugins_play(host_groups)
def plugins_by_groups():
'''
merges all plugin sources by group,
This should be used instead, NOT in combination with the other groups_plugins* functions
'''
data = {}
for group in host_groups:
data[group] = _combine_and_track(data[group], _plugins_inventory(group), "inventory group_vars for '%s'" % group)
data[group] = _combine_and_track(data[group], _plugins_play(group), "playbook group_vars for '%s'" % group)
return data
# Merge groups as per precedence config
# only allow to call the functions we want exposed
for entry in C.VARIABLE_PRECEDENCE:
if entry in self._ALLOWED:
display.debug('Calling %s to load vars for %s' % (entry, host.name))
all_vars = _combine_and_track(all_vars, locals()[entry](), "group vars, precedence entry '%s'" % entry)
else:
display.warning('Ignoring unknown variable precedence entry: %s' % (entry))
# host vars, from inventory, inventory adjacent and play adjacent via plugins
all_vars = _combine_and_track(all_vars, host.get_vars(), "host vars for '%s'" % host)
all_vars = _combine_and_track(all_vars, _plugins_inventory([host]), "inventory host_vars for '%s'" % host)
all_vars = _combine_and_track(all_vars, _plugins_play([host]), "playbook host_vars for '%s'" % host)
# finally, the facts caches for this host, if it exists
# TODO: cleaning of facts should eventually become part of taskresults instead of vars
try:
facts = wrap_var(self._fact_cache.get(host.name, {}))
all_vars |= namespace_facts(facts)
# push facts to main namespace
if C.INJECT_FACTS_AS_VARS:
all_vars = _combine_and_track(all_vars, wrap_var(clean_facts(facts)), "facts")
else:
# always 'promote' ansible_local
all_vars = _combine_and_track(all_vars, wrap_var({'ansible_local': facts.get('ansible_local', {})}), "facts")
except KeyError:
pass
if play:
all_vars = _combine_and_track(all_vars, play.get_vars(), "play vars")
vars_files = play.get_vars_files()
try:
for vars_file_item in vars_files:
# create a set of temporary vars here, which incorporate the extra
# and magic vars so we can properly template the vars_files entries
# NOTE: this makes them depend on host vars/facts so things like
# ansible_facts['os_distribution'] can be used, ala include_vars.
# Consider DEPRECATING this in the future, since we have include_vars ...
temp_vars = combine_vars(all_vars, self._extra_vars)
temp_vars = combine_vars(temp_vars, magic_variables)
templar = Templar(loader=self._loader, variables=temp_vars)
# we assume each item in the list is itself a list, as we
# support "conditional includes" for vars_files, which mimics
# the with_first_found mechanism.
vars_file_list = vars_file_item
if not isinstance(vars_file_list, list):
vars_file_list = [vars_file_list]
# now we iterate through the (potential) files, and break out
# as soon as we read one from the list. If none are found, we
# raise an error, which is silently ignored at this point.
try:
for vars_file in vars_file_list:
vars_file = templar.template(vars_file)
if not (isinstance(vars_file, Sequence)):
raise AnsibleError(
"Invalid vars_files entry found: %r\n"
"vars_files entries should be either a string type or "
"a list of string types after template expansion" % vars_file
)
try:
play_search_stack = play.get_search_path()
found_file = real_file = self._loader.path_dwim_relative_stack(play_search_stack, 'vars', vars_file)
data = preprocess_vars(self._loader.load_from_file(found_file, unsafe=True, cache=False))
if data is not None:
for item in data:
all_vars = _combine_and_track(all_vars, item, "play vars_files from '%s'" % vars_file)
break
except AnsibleFileNotFound:
# we continue on loader failures
continue
except AnsibleParserError:
raise
else:
# if include_delegate_to is set to False or we don't have a host, we ignore the missing
# vars file here because we're working on a delegated host or require host vars, see NOTE above
if include_delegate_to and host:
raise AnsibleFileNotFound("vars file %s was not found" % vars_file_item)
except (UndefinedError, AnsibleUndefinedVariable):
if host is not None and self._fact_cache.get(host.name, dict()).get('module_setup') and task is not None:
raise AnsibleUndefinedVariable("an undefined variable was found when attempting to template the vars_files item '%s'"
% vars_file_item, obj=vars_file_item)
else:
# we do not have a full context here, and the missing variable could be because of that
# so just show a warning and continue
display.vvv("skipping vars_file '%s' due to an undefined variable" % vars_file_item)
continue
display.vvv("Read vars_file '%s'" % vars_file_item)
except TypeError:
raise AnsibleParserError("Error while reading vars files - please supply a list of file names. "
"Got '%s' of type %s" % (vars_files, type(vars_files)))
# We now merge in all exported vars from all roles in the play,
# unless the user has disabled this
# Merge these when the role is static, or dynamic and already completed for
# this host, and the role is public -- either explicitly via role.public, or
# implicitly when role.public is unset and C.DEFAULT_PRIVATE_ROLE_VARS is off.
for role in play.get_roles():
role_is_static_or_completed = role.static or role._completed.get(host.name, False)
if role.public and role_is_static_or_completed or \
role.public is None and not C.DEFAULT_PRIVATE_ROLE_VARS and role_is_static_or_completed:
all_vars = _combine_and_track(all_vars, role.get_vars(include_params=False, only_exports=True), "role '%s' exported vars" % role.name)
# next, we merge in the vars from the role, which will specifically
# follow the role dependency chain, and then we merge in the tasks
# vars (which will look at parent blocks/task includes)
if task:
if task._role:
all_vars = _combine_and_track(all_vars, task._role.get_vars(task.get_dep_chain(), include_params=True, only_exports=False),
"role '%s' all vars" % task._role.name)
all_vars = _combine_and_track(all_vars, task.get_vars(), "task vars")
# next, we merge in the vars cache (include vars) and nonpersistent
# facts cache (set_fact/register), in that order
if host:
# include_vars non-persistent cache
all_vars = _combine_and_track(all_vars, self._vars_cache.get(host.get_name(), dict()), "include_vars")
# fact non-persistent cache
all_vars = _combine_and_track(all_vars, self._nonpersistent_fact_cache.get(host.name, dict()), "set_fact")
# next, we merge in role params and task include params
if task:
# special case for include tasks, where the include params
# may be specified in the vars field for the task, which should
# have higher precedence than the vars/np facts above
all_vars = _combine_and_track(all_vars, task.get_include_params(), "include params")
# extra vars
all_vars = _combine_and_track(all_vars, self._extra_vars, "extra vars")
# magic variables
all_vars = _combine_and_track(all_vars, magic_variables, "magic vars")
# special case for the 'environment' magic variable, as someone
# may have set it as a variable and we don't want to stomp on it
if task:
all_vars['environment'] = task.environment
# 'vars' magic var
if task or play:
# has to be copy, otherwise recursive ref
all_vars['vars'] = all_vars.copy()
# if we have a host and task and we're delegating to another host,
# figure out the variables for that host now so we don't have to rely on host vars later
if task and host and task.delegate_to is not None and include_delegate_to:
all_vars['ansible_delegated_vars'], all_vars['_ansible_loop_cache'] = self._get_delegated_vars(play, task, all_vars)
display.debug("done with get_vars()")
if C.DEFAULT_DEBUG:
# Use VarsWithSources wrapper class to display var sources
return VarsWithSources.new_vars_with_sources(all_vars, _vars_sources)
else:
return all_vars
def _get_magic_variables(self, play, host, task, include_hostvars, _hosts=None, _hosts_all=None):
'''
Returns a dictionary of so-called "magic" variables in Ansible,
which are special variables we set internally for use.
'''
variables = {}
variables['playbook_dir'] = os.path.abspath(self._loader.get_basedir())
variables['ansible_playbook_python'] = sys.executable
variables['ansible_config_file'] = C.CONFIG_FILE
if play:
# This is a list of all role names of all dependencies for all roles for this play
dependency_role_names = list({d.get_name() for r in play.roles for d in r.get_all_dependencies()})
# This is a list of all role names of all roles for this play
play_role_names = [r.get_name() for r in play.roles]
# ansible_role_names includes all role names, dependent or directly referenced by the play
variables['ansible_role_names'] = list(set(dependency_role_names + play_role_names))
# ansible_play_role_names includes the names of all roles directly referenced by this play
# roles that are implicitly referenced via dependencies are not listed.
variables['ansible_play_role_names'] = play_role_names
# ansible_dependent_role_names includes the names of all roles that are referenced via dependencies
# dependencies that are also explicitly named as roles are included in this list
variables['ansible_dependent_role_names'] = dependency_role_names
# DEPRECATED: role_names should be deprecated in favor of ansible_role_names or ansible_play_role_names
variables['role_names'] = variables['ansible_play_role_names']
variables['ansible_play_name'] = play.get_name()
if task:
if task._role:
variables['role_name'] = task._role.get_name(include_role_fqcn=False)
variables['role_path'] = task._role._role_path
variables['role_uuid'] = text_type(task._role._uuid)
variables['ansible_collection_name'] = task._role._role_collection
variables['ansible_role_name'] = task._role.get_name()
if self._inventory is not None:
variables['groups'] = self._inventory.get_groups_dict()
if play:
templar = Templar(loader=self._loader)
if not play.finalized and templar.is_template(play.hosts):
pattern = 'all'
else:
pattern = play.hosts or 'all'
# add the list of hosts in the play, as adjusted for limit/filters
if not _hosts_all:
_hosts_all = [h.name for h in self._inventory.get_hosts(pattern=pattern, ignore_restrictions=True)]
if not _hosts:
_hosts = [h.name for h in self._inventory.get_hosts()]
variables['ansible_play_hosts_all'] = _hosts_all[:]
variables['ansible_play_hosts'] = [x for x in variables['ansible_play_hosts_all'] if x not in play._removed_hosts]
variables['ansible_play_batch'] = [x for x in _hosts if x not in play._removed_hosts]
# DEPRECATED: play_hosts should be deprecated in favor of ansible_play_batch,
# however this would take work in the templating engine, so for now we'll add both
variables['play_hosts'] = variables['ansible_play_batch']
# the 'omit' value allows params to be left out if the variable they are based on is undefined
variables['omit'] = self._omit_token
# Set options vars
for option, option_value in self._options_vars.items():
variables[option] = option_value
if self._hostvars is not None and include_hostvars:
variables['hostvars'] = self._hostvars
return variables
def get_delegated_vars_and_hostname(self, templar, task, variables):
"""Get the delegated_vars for an individual task invocation, which may be be in the context
of an individual loop iteration.
Not used directly be VariableManager, but used primarily within TaskExecutor
"""
delegated_vars = {}
delegated_host_name = None
if task.delegate_to:
delegated_host_name = templar.template(task.delegate_to, fail_on_undefined=False)
delegated_host = self._inventory.get_host(delegated_host_name)
if delegated_host is None:
for h in self._inventory.get_hosts(ignore_limits=True, ignore_restrictions=True):
# check if the address matches, or if both the delegated_to host
# and the current host are in the list of localhost aliases
if h.address == delegated_host_name:
delegated_host = h
break
else:
delegated_host = Host(name=delegated_host_name)
delegated_vars['ansible_delegated_vars'] = {
delegated_host_name: self.get_vars(
play=task.get_play(),
host=delegated_host,
task=task,
include_delegate_to=False,
include_hostvars=True,
)
}
delegated_vars['ansible_delegated_vars'][delegated_host_name]['inventory_hostname'] = variables.get('inventory_hostname')
return delegated_vars, delegated_host_name
def _get_delegated_vars(self, play, task, existing_variables):
# This method has a lot of code copied from ``TaskExecutor._get_loop_items``
# if this is failing, and ``TaskExecutor._get_loop_items`` is not
# then more will have to be copied here.
# TODO: dedupe code here and with ``TaskExecutor._get_loop_items``
# this may be possible once we move pre-processing pre fork
if not hasattr(task, 'loop'):
# This "task" is not a Task, so we need to skip it
return {}, None
display.deprecated(
'Getting delegated variables via get_vars is no longer used, and is handled within the TaskExecutor.',
version='2.18',
)
# we unfortunately need to template the delegate_to field here,
# as we're fetching vars before post_validate has been called on
# the task that has been passed in
vars_copy = existing_variables.copy()
# get search path for this task to pass to lookup plugins
vars_copy['ansible_search_path'] = task.get_search_path()
# ensure basedir is always in (dwim already searches here but we need to display it)
if self._loader.get_basedir() not in vars_copy['ansible_search_path']:
vars_copy['ansible_search_path'].append(self._loader.get_basedir())
templar = Templar(loader=self._loader, variables=vars_copy)
items = []
has_loop = True
if task.loop_with is not None:
if task.loop_with in lookup_loader:
fail = True
if task.loop_with == 'first_found':
# first_found loops are special. If the item is undefined then we want to fall through to the next
fail = False
try:
loop_terms = listify_lookup_plugin_terms(terms=task.loop, templar=templar, fail_on_undefined=fail, convert_bare=False)
if not fail:
loop_terms = [t for t in loop_terms if not templar.is_template(t)]
mylookup = lookup_loader.get(task.loop_with, loader=self._loader, templar=templar)
# give lookup task 'context' for subdir (mostly needed for first_found)
for subdir in ['template', 'var', 'file']: # TODO: move this to constants?
if subdir in task.action:
break
setattr(mylookup, '_subdir', subdir + 's')
items = wrap_var(mylookup.run(terms=loop_terms, variables=vars_copy))
except AnsibleTemplateError:
# This task will be skipped later due to this, so we just setup
# a dummy array for the later code so it doesn't fail
items = [None]
else:
raise AnsibleError("Failed to find the lookup named '%s' in the available lookup plugins" % task.loop_with)
elif task.loop is not None:
try:
items = templar.template(task.loop)
except AnsibleTemplateError:
# This task will be skipped later due to this, so we just setup
# a dummy array for the later code so it doesn't fail
items = [None]
else:
has_loop = False
items = [None]
# since host can change per loop, we keep dict per host name resolved
delegated_host_vars = dict()
item_var = getattr(task.loop_control, 'loop_var', 'item')
cache_items = False
for item in items:
# update the variables with the item value for templating, in case we need it
if item is not None:
vars_copy[item_var] = item
templar.available_variables = vars_copy
delegated_host_name = templar.template(task.delegate_to, fail_on_undefined=False)
if delegated_host_name != task.delegate_to:
cache_items = True
if delegated_host_name is None:
raise AnsibleError(message="Undefined delegate_to host for task:", obj=task._ds)
if not isinstance(delegated_host_name, string_types):
raise AnsibleError(message="the field 'delegate_to' has an invalid type (%s), and could not be"
" converted to a string type." % type(delegated_host_name), obj=task._ds)
if delegated_host_name in delegated_host_vars:
# no need to repeat ourselves, as the delegate_to value
# does not appear to be tied to the loop item variable
continue
# now try to find the delegated-to host in inventory, or failing that,
# create a new host on the fly so we can fetch variables for it
delegated_host = None
if self._inventory is not None:
delegated_host = self._inventory.get_host(delegated_host_name)
# try looking it up based on the address field, and finally
# fall back to creating a host on the fly to use for the var lookup
if delegated_host is None:
for h in self._inventory.get_hosts(ignore_limits=True, ignore_restrictions=True):
# check if the address matches, or if both the delegated_to host
# and the current host are in the list of localhost aliases
if h.address == delegated_host_name:
delegated_host = h
break
else:
delegated_host = Host(name=delegated_host_name)
else:
delegated_host = Host(name=delegated_host_name)
# now we go fetch the vars for the delegated-to host and save them in our
# master dictionary of variables to be used later in the TaskExecutor/PlayContext
delegated_host_vars[delegated_host_name] = self.get_vars(
play=play,
host=delegated_host,
task=task,
include_delegate_to=False,
include_hostvars=True,
)
delegated_host_vars[delegated_host_name]['inventory_hostname'] = vars_copy.get('inventory_hostname')
_ansible_loop_cache = None
if has_loop and cache_items:
# delegate_to templating produced a change, so we will cache the templated items
# in a special private hostvar
# this ensures that delegate_to+loop doesn't produce different results than TaskExecutor
# which may reprocess the loop
_ansible_loop_cache = items
return delegated_host_vars, _ansible_loop_cache
def clear_facts(self, hostname):
'''
Clears the facts for a host
'''
self._fact_cache.pop(hostname, None)
def set_host_facts(self, host, facts):
'''
Sets or updates the given facts for a host in the fact cache.
'''
if not isinstance(facts, Mapping):
raise AnsibleAssertionError("the type of 'facts' to set for host_facts should be a Mapping but is a %s" % type(facts))
try:
host_cache = self._fact_cache[host]
except KeyError:
# We get to set this as new
host_cache = facts
else:
if not isinstance(host_cache, MutableMapping):
raise TypeError('The object retrieved for {0} must be a MutableMapping but was'
' a {1}'.format(host, type(host_cache)))
# Update the existing facts
host_cache |= facts
# Save the facts back to the backing store
self._fact_cache[host] = host_cache
def set_nonpersistent_facts(self, host, facts):
'''
Sets or updates the given facts for a host in the fact cache.
'''
if not isinstance(facts, Mapping):
raise AnsibleAssertionError("the type of 'facts' to set for nonpersistent_facts should be a Mapping but is a %s" % type(facts))
try:
self._nonpersistent_fact_cache[host] |= facts
except KeyError:
self._nonpersistent_fact_cache[host] = facts
def set_host_variable(self, host, varname, value):
'''
Sets a value in the vars_cache for a host.
'''
if host not in self._vars_cache:
self._vars_cache[host] = dict()
if varname in self._vars_cache[host] and isinstance(self._vars_cache[host][varname], MutableMapping) and isinstance(value, MutableMapping):
self._vars_cache[host] = combine_vars(self._vars_cache[host], {varname: value})
else:
self._vars_cache[host][varname] = value
class VarsWithSources(MutableMapping):
'''
Dict-like class for vars that also provides source information for each var
This class can only store the source for top-level vars. It does no tracking
on its own; it just shows a debug message with the source information it was
given when a particular var is accessed.
'''
def __init__(self, *args, **kwargs):
''' Dict-compatible constructor '''
self.data = dict(*args, **kwargs)
self.sources = {}
@classmethod
def new_vars_with_sources(cls, data, sources):
''' Alternate constructor method to instantiate class with sources '''
v = cls(data)
v.sources = sources
return v
def get_source(self, key):
return self.sources.get(key, None)
def __getitem__(self, key):
val = self.data[key]
# See notes in the VarsWithSources docstring for caveats and limitations of the source tracking
display.debug("variable '%s' from source: %s" % (key, self.sources.get(key, "unknown")))
return val
def __setitem__(self, key, value):
self.data[key] = value
def __delitem__(self, key):
del self.data[key]
def __iter__(self):
return iter(self.data)
def __len__(self):
return len(self.data)
# Prevent duplicate debug messages by defining our own __contains__ pointing at the underlying dict
def __contains__(self, key):
return self.data.__contains__(key)
def copy(self):
return VarsWithSources.new_vars_with_sources(self.data.copy(), self.sources.copy())
def __or__(self, other):
if isinstance(other, MutableMapping):
c = self.data.copy()
c.update(other)
return c
return NotImplemented
def __ror__(self, other):
if isinstance(other, MutableMapping):
c = self.__class__()
c.update(other)
c.update(self.data)
return c
return NotImplemented
def __ior__(self, other):
self.data.update(other)
return self.data
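# Minimal usage sketch (hypothetical values), showing the source tracking:
#   v = VarsWithSources.new_vars_with_sources({'x': 1}, {'x': 'play vars'})
#   v.get_source('x')  -> 'play vars'
#   v['x']             -> 1 (and emits a debug message naming the source)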
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,474 |
Role parameter: change in variable precedence
|
### Summary
Hi;
Since Ansible 8.0.0 (I think; I didn't check every ansible-core version), it seems that role parameters no longer take precedence over already defined facts.
According to the [documentation](https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_variables.html#understanding-variable-precedence), role (and include_role) params (line 20) should still take precedence over set_facts / registered vars (line 19). I also checked the [Ansible 8.x changelogs](https://github.com/ansible-community/ansible-build-data/blob/main/8/CHANGELOG-v8.rst) but didn't see anything about that, except maybe [this bug fix](https://github.com/ansible-community/ansible-build-data/blob/a8cf3895cd98246316ab6172ec684935e0013b45/8/CHANGELOG-v8.rst#L3397); I'm not sure what `Also adjusted the precedence to act the same as inline params` means or what the expected impacts are. But if this is an intended new behavior, I feel it should be detailed under major changes, not in the bug fixes section.
### Issue Type
Bug Report
### Component Name
role
### Ansible Version
```console
ansible [core 2.15.2]
config file = /home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg
configured module search path = ['/home/courgette/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/courgette/.virtualenvs/deploy/lib/python3.11/site-packages/ansible
ansible collection location = /home/courgette/.ansible/collections:/usr/share/ansible/collections
executable location = /home/courgette/.virtualenvs/deploy/bin/ansible
python version = 3.11.3 (main, Jun 5 2023, 09:32:32) [GCC 13.1.1 20230429] (/home/courgette/.virtualenvs/deploy/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
CONFIG_FILE() = /home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg
DEFAULT_FILTER_PLUGIN_PATH(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = ['/home/courgette/Documents/Bimdata/dev/deployment/ansible_plugins/filter_plugins']
DEFAULT_TIMEOUT(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = 60
DEFAULT_VAULT_PASSWORD_FILE(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = /home/courgette/Documents/Bimdata/dev/deployment/.get-vault-pass.sh
EDITOR(env: EDITOR) = vim
PAGER(env: PAGER) = less
CONNECTION:
==========
paramiko_ssh:
____________
timeout(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = 60
ssh:
___
pipelining(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = True
timeout(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = 60
```
### OS / Environment
Tested on Archlinux and also in python:3.11 docker container.
### Steps to Reproduce
Create `roles/test_set/tasks/main.yml` with:
```
---
- name: "Set test fact."
ansible.builtin.set_fact:
test_set_one: "set by test_set"
```
Create `roles/test_debug/tasks/main.yml` with:
```
---
- name: "Debug the variable."
ansible.builtin.debug:
var: test_set_one
```
Create `test.yml` with:
```
---
- name: Test
hosts: localhost
gather_facts: false
become: false
roles:
- test_set
- test_debug
- role: test_debug
test_set_one: "Set as role parameter"
```
Run: `ansible-playbook test.yml`.
### Expected Results
```
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Test] *********************************************************************************************************************************************************************************************************
TASK [test_set : Set test fact.] ************************************************************************************************************************************************************************************
ok: [localhost]
TASK [test_debug : Debug the variable.] *****************************************************************************************************************************************************************************
ok: [localhost] => {
"test_set_one": "set by test_set"
}
TASK [test_debug : Debug the variable.] *****************************************************************************************************************************************************************************
ok: [localhost] => {
"test_set_one": "Set as role parameter"
}
PLAY RECAP **********************************************************************************************************************************************************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
This is the result with Ansible 7.x; here is the corresponding version used to obtain it:
```
ansible [core 2.14.8]
config file = /home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg
configured module search path = ['/home/courgette/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/courgette/.virtualenvs/deploy/lib/python3.11/site-packages/ansible
ansible collection location = /home/courgette/.ansible/collections:/usr/share/ansible/collections
executable location = /home/courgette/.virtualenvs/deploy/bin/ansible
python version = 3.11.3 (main, Jun 5 2023, 09:32:32) [GCC 13.1.1 20230429] (/home/courgette/.virtualenvs/deploy/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Actual Results
```console
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Test] *********************************************************************************************************************************************************************************************************
TASK [test_set : Set test fact.] ************************************************************************************************************************************************************************************
ok: [localhost]
TASK [test_debug : Debug the variable.] *****************************************************************************************************************************************************************************
ok: [localhost] => {
"test_set_one": "set by test_set"
}
TASK [test_debug : Debug the variable.] *****************************************************************************************************************************************************************************
ok: [localhost] => {
"test_set_one": "set by test_set"
}
PLAY RECAP **********************************************************************************************************************************************************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
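A possible interim workaround (untested here, relying on the usual rule that extra vars outrank `set_fact`) is to pass the override on the command line:
```
ansible-playbook test.yml -e 'test_set_one="Set as role parameter"'
```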
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81474
|
https://github.com/ansible/ansible/pull/82106
|
5ac62473b09405786ca08e00af4da6d5b3a8103d
|
20a54eb236a4f77402daa0d7cdaede358587c821
| 2023-08-09T09:06:12Z |
python
| 2023-11-06T14:18:35Z |
test/integration/targets/var_precedence/test_var_precedence.yml
|
---
- hosts: testhost
vars:
ansible_hostname: "BAD!"
vars_var: "vars_var"
param_var: "BAD!"
vars_files_var: "BAD!"
extra_var_override_once_removed: "{{ extra_var_override }}"
from_inventory_once_removed: "{{ inven_var | default('BAD!') }}"
vars_files:
- vars/test_var_precedence.yml
roles:
- { role: test_var_precedence, param_var: "param_var" }
tasks:
- name: register a result
command: echo 'BAD!'
register: registered_var
- name: use set_fact to override the registered_var
set_fact: registered_var="this is from set_fact"
- debug: var=extra_var
- debug: var=extra_var_override_once_removed
- debug: var=vars_var
- debug: var=vars_files_var
- debug: var=vars_files_var_role
- debug: var=registered_var
- debug: var=from_inventory_once_removed
- assert:
that: item
with_items:
- 'extra_var == "extra_var"'
- 'extra_var_override == "extra_var_override"'
- 'extra_var_override_once_removed == "extra_var_override"'
- 'vars_var == "vars_var"'
- 'vars_files_var == "vars_files_var"'
- 'vars_files_var_role == "vars_files_var_role3"'
- 'registered_var == "this is from set_fact"'
- 'from_inventory_once_removed == "inventory_var"'
- hosts: inven_overridehosts
vars_files:
- "test_var_precedence.yml"
roles:
- role: test_var_precedence_inven_override
foo: bar
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,590 |
ansible.builtin.dnf module ignores skip_broken setting
|
### Summary
`ansible.builtin.dnf` module ignores the `skip_broken` setting
### Issue Type
Bug Report
### Component Name
ansible.builtin.dnf
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.4]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.11/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.2 (main, Feb 8 2023, 00:00:00) [GCC 13.0.1 20230208 (Red Hat 13.0.1-0)] (/usr/bin/python3)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /etc/ansible/ansible.cfg
```
### OS / Environment
Fedora release 38 (Thirty Eight)
### Steps to Reproduce
I try to install packages on a clean AlmaLinux 9.1 system with ansible.builtin.dnf:
```
- name: "Test dnf"
ansible.builtin.dnf:
name:
- epel-release
- python3-mysqlclient
state: present
skip_broken: true
```
### Expected Results
Ideally, the `epel-release` package should be installed and the `python3-mysqlclient` package skipped (it is absent in the standard repos)
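A quick way to confirm that outcome (a hypothetical verification task, not part of the original report) could be:
```
- name: Gather package facts
  ansible.builtin.package_facts:

- name: Assert epel-release installed and the unavailable package skipped
  ansible.builtin.assert:
    that:
      - "'epel-release' in ansible_facts.packages"
      - "'python3-mysqlclient' not in ansible_facts.packages"
```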
### Actual Results
```console
I got:
fatal: [orion-ng]: FAILED! => {"changed": false, "failures": ["No package python3-mysqlclient available."], "msg": "Failed to install some of the specified packages", "rc": 1, "results": []}
```
An alternative method works perfectly:
```
- name: "Test dnf"
ansible.builtin.command:
cmd: "dnf install --skip-broken -y epel-release python3-mysqlclient"
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80590
|
https://github.com/ansible/ansible/pull/80795
|
916a20fccd20140befb15ec6060153bbb1bb9eed
|
753866873113ba42e4f5772da86914a895add52e
| 2023-04-21T03:19:13Z |
python
| 2023-11-07T06:56:12Z |
changelogs/fragments/80590-dnf-skip_broken-unavailable-pkgs.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,590 |
ansible.builtin.dnf module ignores skip_broken setting
|
### Summary
`ansible.builtin.dnf` module ignores the `skip_broken` setting
### Issue Type
Bug Report
### Component Name
ansible.builtin.dnf
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.4]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.11/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.2 (main, Feb 8 2023, 00:00:00) [GCC 13.0.1 20230208 (Red Hat 13.0.1-0)] (/usr/bin/python3)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /etc/ansible/ansible.cfg
```
### OS / Environment
Fedora release 38 (Thirty Eight)
### Steps to Reproduce
I try to install packages on a clean AlmaLinux 9.1 system with ansible.builtin.dnf:
```
- name: "Test dnf"
ansible.builtin.dnf:
name:
- epel-release
- python3-mysqlclient
state: present
skip_broken: true
```
### Expected Results
Ideally, the `epel-release` package should be installed and the `python3-mysqlclient` package skipped (it is absent in the standard repos)
### Actual Results
```console
I got:
fatal: [orion-ng]: FAILED! => {"changed": false, "failures": ["No package python3-mysqlclient available."], "msg": "Failed to install some of the specified packages", "rc": 1, "results": []}
```
An alternative method works perfectly:
```
- name: "Test dnf"
ansible.builtin.command:
cmd: "dnf install --skip-broken -y epel-release python3-mysqlclient"
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80590
|
https://github.com/ansible/ansible/pull/80795
|
916a20fccd20140befb15ec6060153bbb1bb9eed
|
753866873113ba42e4f5772da86914a895add52e
| 2023-04-21T03:19:13Z |
python
| 2023-11-07T06:56:12Z |
lib/ansible/modules/dnf.py
|
# -*- coding: utf-8 -*-
# Copyright 2015 Cristian van Ee <cristian at cvee.org>
# Copyright 2015 Igor Gnatenko <[email protected]>
# Copyright 2018 Adam Miller <[email protected]>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import annotations
DOCUMENTATION = '''
---
module: dnf
version_added: 1.9
short_description: Manages packages with the I(dnf) package manager
description:
- Installs, upgrades, removes, and lists packages and groups with the I(dnf) package manager.
options:
use_backend:
description:
- By default, this module will select the backend based on the C(ansible_pkg_mgr) fact.
default: "auto"
choices: [ auto, dnf4, dnf5 ]
type: str
version_added: 2.15
name:
description:
- "A package name or package specifier with version, like C(name-1.0).
When using state=latest, this can be '*' which means run: dnf -y update.
You can also pass a url or a local path to an rpm file.
To operate on several packages this can accept a comma separated string of packages or a list of packages."
- Comparison operators for package version are valid here C(>), C(<), C(>=), C(<=). Example - C(name >= 1.0).
Spaces around the operator are required.
- You can also pass an absolute path for a binary which is provided by the package to install.
See examples for more information.
aliases:
- pkg
type: list
elements: str
default: []
list:
description:
- Various (non-idempotent) commands for usage with C(/usr/bin/ansible) and I(not) playbooks.
Use M(ansible.builtin.package_facts) instead of the O(list) argument as a best practice.
type: str
state:
description:
- Whether to install (V(present), V(latest)), or remove (V(absent)) a package.
- Default is V(None), however in effect the default action is V(present) unless the O(autoremove) option is
enabled for this module, then V(absent) is inferred.
choices: ['absent', 'present', 'installed', 'removed', 'latest']
type: str
enablerepo:
description:
- I(Repoid) of repositories to enable for the install/update operation.
These repos will not persist beyond the transaction.
When specifying multiple repos, separate them with a ",".
type: list
elements: str
default: []
disablerepo:
description:
- I(Repoid) of repositories to disable for the install/update operation.
These repos will not persist beyond the transaction.
When specifying multiple repos, separate them with a ",".
type: list
elements: str
default: []
conf_file:
description:
- The remote dnf configuration file to use for the transaction.
type: str
disable_gpg_check:
description:
- Whether to disable the GPG checking of signatures of packages being
installed. Has an effect only if O(state) is V(present) or V(latest).
- This setting affects packages installed from a repository as well as
"local" packages installed from the filesystem or a URL.
type: bool
default: 'no'
installroot:
description:
- Specifies an alternative installroot, relative to which all packages
will be installed.
version_added: "2.3"
default: "/"
type: str
releasever:
description:
- Specifies an alternative release from which all packages will be
installed.
version_added: "2.6"
type: str
autoremove:
description:
- If V(true), removes all "leaf" packages from the system that were originally
installed as dependencies of user-installed packages but which are no longer
required by any such package. Should be used alone or when O(state) is V(absent)
type: bool
default: "no"
version_added: "2.4"
exclude:
description:
- Package name(s) to exclude when state=present, or latest. This can be a
list or a comma separated string.
version_added: "2.7"
type: list
elements: str
default: []
skip_broken:
description:
- Skip all unavailable packages or packages with broken dependencies
without raising an error. Equivalent to passing the --skip-broken option.
type: bool
default: "no"
version_added: "2.7"
update_cache:
description:
- Force dnf to check if cache is out of date and redownload if needed.
Has an effect only if O(state) is V(present) or V(latest).
type: bool
default: "no"
aliases: [ expire-cache ]
version_added: "2.7"
update_only:
description:
- When using latest, only update installed packages. Do not install packages.
- Has an effect only if O(state) is V(latest)
default: "no"
type: bool
version_added: "2.7"
security:
description:
- If set to V(true), and O(state=latest) then only installs updates that have been marked security related.
- Note that, similar to C(dnf upgrade-minimal), this filter applies to dependencies as well.
type: bool
default: "no"
version_added: "2.7"
bugfix:
description:
- If set to V(true), and O(state=latest) then only installs updates that have been marked bugfix related.
- Note that, similar to C(dnf upgrade-minimal), this filter applies to dependencies as well.
default: "no"
type: bool
version_added: "2.7"
enable_plugin:
description:
- I(Plugin) name to enable for the install/update operation.
The enabled plugin will not persist beyond the transaction.
version_added: "2.7"
type: list
elements: str
default: []
disable_plugin:
description:
- I(Plugin) name to disable for the install/update operation.
The disabled plugins will not persist beyond the transaction.
version_added: "2.7"
type: list
default: []
elements: str
disable_excludes:
description:
- Disable the excludes defined in DNF config files.
- If set to V(all), disables all excludes.
- If set to V(main), disable excludes defined in [main] in dnf.conf.
- If set to V(repoid), disable excludes defined for given repo id.
version_added: "2.7"
type: str
validate_certs:
description:
- This only applies if using a https url as the source of the rpm. e.g. for localinstall. If set to V(false), the SSL certificates will not be validated.
- This should only be set to V(false) on personally controlled sites using self-signed certificates, as it avoids verifying the source site.
type: bool
default: "yes"
version_added: "2.7"
sslverify:
description:
- Disables SSL validation of the repository server for this transaction.
- This should be set to V(false) if one of the configured repositories is using an untrusted or self-signed certificate.
type: bool
default: "yes"
version_added: "2.13"
allow_downgrade:
description:
- Specify if the named package and version is allowed to downgrade
a maybe already installed higher version of that package.
Note that setting allow_downgrade=True can make this module
behave in a non-idempotent way. The task could end up with a set
of packages that does not match the complete list of specified
packages to install (because dependencies between the downgraded
package and others can cause changes to the packages which were
in the earlier transaction).
type: bool
default: "no"
version_added: "2.7"
install_repoquery:
description:
- This is effectively a no-op in DNF as it is not needed with DNF, but is an accepted parameter for feature
parity/compatibility with the M(ansible.builtin.yum) module.
type: bool
default: "yes"
version_added: "2.7"
download_only:
description:
- Only download the packages, do not install them.
default: "no"
type: bool
version_added: "2.7"
lock_timeout:
description:
- Amount of time to wait for the dnf lockfile to be freed.
required: false
default: 30
type: int
version_added: "2.8"
install_weak_deps:
description:
- Will also install all packages linked by a weak dependency relation.
type: bool
default: "yes"
version_added: "2.8"
download_dir:
description:
- Specifies an alternate directory to store packages.
- Has an effect only if O(download_only) is specified.
type: str
version_added: "2.8"
allowerasing:
description:
- If V(true) it allows erasing of installed packages to resolve dependencies.
required: false
type: bool
default: "no"
version_added: "2.10"
nobest:
description:
- Set best option to False, so that transactions are not limited to best candidates only.
required: false
type: bool
default: "no"
version_added: "2.11"
cacheonly:
description:
- Tells dnf to run entirely from system cache; does not download or update metadata.
type: bool
default: "no"
version_added: "2.12"
extends_documentation_fragment:
- action_common_attributes
- action_common_attributes.flow
attributes:
action:
details: In the case of dnf, it has 2 action plugins that use it under the hood, M(ansible.builtin.yum) and M(ansible.builtin.package).
support: partial
async:
support: none
bypass_host_loop:
support: none
check_mode:
support: full
diff_mode:
support: full
platform:
platforms: rhel
notes:
- When used with a C(loop:) each package will be processed individually, it is much more efficient to pass the list directly to the I(name) option.
- Group removal doesn't work if the group was installed with Ansible because
upstream dnf's API doesn't properly mark groups as installed, therefore upon
removal the module is unable to detect that the group is installed
(https://bugzilla.redhat.com/show_bug.cgi?id=1620324)
requirements:
- "python >= 2.6"
- python-dnf
- "for the autoremove option you need dnf >= 2.0.1"
author:
- Igor Gnatenko (@ignatenkobrain) <[email protected]>
- Cristian van Ee (@DJMuggs) <cristian at cvee.org>
- Berend De Schouwer (@berenddeschouwer)
- Adam Miller (@maxamillion) <[email protected]>
'''
EXAMPLES = '''
- name: Install the latest version of Apache
ansible.builtin.dnf:
name: httpd
state: latest
- name: Install Apache >= 2.4
ansible.builtin.dnf:
name: httpd >= 2.4
state: present
- name: Install the latest version of Apache and MariaDB
ansible.builtin.dnf:
name:
- httpd
- mariadb-server
state: latest
- name: Remove the Apache package
ansible.builtin.dnf:
name: httpd
state: absent
- name: Install the latest version of Apache from the testing repo
ansible.builtin.dnf:
name: httpd
enablerepo: testing
state: present
- name: Upgrade all packages
ansible.builtin.dnf:
name: "*"
state: latest
- name: Update the webserver, depending on which is installed on the system. Do not install the other one
ansible.builtin.dnf:
name:
- httpd
- nginx
state: latest
update_only: yes
- name: Install the nginx rpm from a remote repo
ansible.builtin.dnf:
name: 'http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm'
state: present
- name: Install nginx rpm from a local file
ansible.builtin.dnf:
name: /usr/local/src/nginx-release-centos-6-0.el6.ngx.noarch.rpm
state: present
- name: Install Package based upon the file it provides
ansible.builtin.dnf:
name: /usr/bin/cowsay
state: present
- name: Install the 'Development tools' package group
ansible.builtin.dnf:
name: '@Development tools'
state: present
- name: Autoremove unneeded packages installed as dependencies
ansible.builtin.dnf:
autoremove: yes
- name: Uninstall httpd but keep its dependencies
ansible.builtin.dnf:
name: httpd
state: absent
autoremove: no
- name: Install a modularity appstream with defined stream and profile
ansible.builtin.dnf:
name: '@postgresql:9.6/client'
state: present
- name: Install a modularity appstream with defined stream
ansible.builtin.dnf:
name: '@postgresql:9.6'
state: present
- name: Install a modularity appstream with defined profile
ansible.builtin.dnf:
name: '@postgresql/client'
state: present
'''
import os
import re
import sys
from ansible.module_utils.common.text.converters import to_native, to_text
from ansible.module_utils.urls import fetch_file
from ansible.module_utils.compat.version import LooseVersion
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.common.respawn import has_respawned, probe_interpreters_for_module, respawn_module
from ansible.module_utils.yumdnf import YumDnf, yumdnf_argument_spec
# NOTE dnf Python bindings import is postponed, see DnfModule._ensure_dnf(),
# because we need AnsibleModule object to use get_best_parsable_locale()
# to set proper locale before importing dnf to be able to scrape
# the output in some cases (FIXME?).
dnf = None
class DnfModule(YumDnf):
"""
DNF Ansible module back-end implementation
"""
def __init__(self, module):
# This populates instance vars for all argument spec params
super(DnfModule, self).__init__(module)
self._ensure_dnf()
self.lockfile = "/var/cache/dnf/*_lock.pid"
self.pkg_mgr_name = "dnf"
try:
self.with_modules = dnf.base.WITH_MODULES
except AttributeError:
self.with_modules = False
# DNF specific args that are not part of YumDnf
self.allowerasing = self.module.params['allowerasing']
self.nobest = self.module.params['nobest']
def is_lockfile_pid_valid(self):
# FIXME? it looks like DNF takes care of invalid lock files itself?
# https://github.com/ansible/ansible/issues/57189
return True
def _sanitize_dnf_error_msg_install(self, spec, error):
"""
For unhandled dnf.exceptions.Error scenarios, there are certain error
messages we want to filter in an install scenario. Do that here.
"""
if (
to_text("no package matched") in to_text(error) or
to_text("No match for argument:") in to_text(error)
):
return "No package {0} available.".format(spec)
return error
def _sanitize_dnf_error_msg_remove(self, spec, error):
"""
For unhandled dnf.exceptions.Error scenarios, there are certain error
messages we want to ignore in a removal scenario as known benign
failures. Do that here.
"""
if (
'no package matched' in to_native(error) or
'No match for argument:' in to_native(error)
):
return (False, "{0} is not installed".format(spec))
# Return value is tuple of:
# ("Is this actually a failure?", "Error Message")
return (True, error)
def _package_dict(self, package):
"""Return a dictionary of information for the package."""
# NOTE: This no longer contains the 'dnfstate' field because it is
# already known based on the query type.
result = {
'name': package.name,
'arch': package.arch,
'epoch': str(package.epoch),
'release': package.release,
'version': package.version,
'repo': package.repoid}
# envra format for alignment with the yum module
result['envra'] = '{epoch}:{name}-{version}-{release}.{arch}'.format(**result)
# keep nevra key for backwards compat as it was previously
# defined with a value in envra format
result['nevra'] = result['envra']
if package.installtime == 0:
result['yumstate'] = 'available'
else:
result['yumstate'] = 'installed'
return result
def _split_package_arch(self, packagename):
# This list was auto generated on a Fedora 28 system with the following one-liner
# printf '[ '; for arch in $(ls /usr/lib/rpm/platform); do printf '"%s", ' ${arch%-linux}; done; printf ']\n'
redhat_rpm_arches = [
"aarch64", "alphaev56", "alphaev5", "alphaev67", "alphaev6", "alpha",
"alphapca56", "amd64", "armv3l", "armv4b", "armv4l", "armv5tejl", "armv5tel",
"armv5tl", "armv6hl", "armv6l", "armv7hl", "armv7hnl", "armv7l", "athlon",
"geode", "i386", "i486", "i586", "i686", "ia32e", "ia64", "m68k", "mips64el",
"mips64", "mips64r6el", "mips64r6", "mipsel", "mips", "mipsr6el", "mipsr6",
"noarch", "pentium3", "pentium4", "ppc32dy4", "ppc64iseries", "ppc64le", "ppc64",
"ppc64p7", "ppc64pseries", "ppc8260", "ppc8560", "ppciseries", "ppc", "ppcpseries",
"riscv64", "s390", "s390x", "sh3", "sh4a", "sh4", "sh", "sparc64", "sparc64v",
"sparc", "sparcv8", "sparcv9", "sparcv9v", "x86_64"
]
name, delimiter, arch = packagename.rpartition('.')
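# e.g. "httpd.x86_64".rpartition('.') gives ("httpd", ".", "x86_64"); a bare
# "httpd" gives ("", "", "httpd"), so name is falsy and we fall through below.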
if name and arch and arch in redhat_rpm_arches:
return name, arch
return packagename, None
def _packagename_dict(self, packagename):
"""
Return a dictionary of information for a package name string or None
if the package name doesn't contain at least all NVR elements
"""
if packagename[-4:] == '.rpm':
packagename = packagename[:-4]
rpm_nevr_re = re.compile(r'(\S+)-(?:(\d*):)?(.*)-(~?\w+[\w.+]*)')
try:
arch = None
nevr, arch = self._split_package_arch(packagename)
if arch:
packagename = nevr
rpm_nevr_match = rpm_nevr_re.match(packagename)
if rpm_nevr_match:
name, epoch, version, release = rpm_nevr_re.match(packagename).groups()
if not version or not version.split('.')[0].isdigit():
return None
else:
return None
except AttributeError as e:
self.module.fail_json(
msg='Error attempting to parse package: %s, %s' % (packagename, to_native(e)),
rc=1,
results=[]
)
if not epoch:
epoch = "0"
if ':' in name:
epoch_name = name.split(":")
epoch = epoch_name[0]
name = ''.join(epoch_name[1:])
result = {
'name': name,
'epoch': epoch,
'release': release,
'version': version,
}
return result
# Original implementation from yum.rpmUtils.miscutils (GPLv2+)
# http://yum.baseurl.org/gitweb?p=yum.git;a=blob;f=rpmUtils/miscutils.py
def _compare_evr(self, e1, v1, r1, e2, v2, r2):
# return 1: a is newer than b
# 0: a and b are the same version
# -1: b is newer than a
if e1 is None:
e1 = '0'
else:
e1 = str(e1)
v1 = str(v1)
r1 = str(r1)
if e2 is None:
e2 = '0'
else:
e2 = str(e2)
v2 = str(v2)
r2 = str(r2)
rc = dnf.rpm.rpm.labelCompare((e1, v1, r1), (e2, v2, r2))
return rc
def _ensure_dnf(self):
locale = get_best_parsable_locale(self.module)
os.environ['LC_ALL'] = os.environ['LC_MESSAGES'] = locale
os.environ['LANGUAGE'] = os.environ['LANG'] = locale
global dnf
try:
import dnf
import dnf.cli
import dnf.const
import dnf.exceptions
import dnf.package
import dnf.subject
import dnf.util
HAS_DNF = True
except ImportError:
HAS_DNF = False
if HAS_DNF:
return
system_interpreters = ['/usr/libexec/platform-python',
'/usr/bin/python3',
'/usr/bin/python2',
'/usr/bin/python']
if not has_respawned():
# probe well-known system Python locations for accessible bindings, favoring py3
interpreter = probe_interpreters_for_module(system_interpreters, 'dnf')
if interpreter:
# respawn under the interpreter where the bindings should be found
respawn_module(interpreter)
# end of the line for this module, the process will exit here once the respawned module completes
# done all we can do, something is just broken (auto-install isn't useful anymore with respawn, so it was removed)
self.module.fail_json(
msg="Could not import the dnf python module using {0} ({1}). "
"Please install `python3-dnf` or `python2-dnf` package or ensure you have specified the "
"correct ansible_python_interpreter. (attempted {2})"
.format(sys.executable, sys.version.replace('\n', ''), system_interpreters),
results=[]
)
def _configure_base(self, base, conf_file, disable_gpg_check, installroot='/', sslverify=True):
"""Configure the dnf Base object."""
conf = base.conf
# Change the configuration file path if provided, this must be done before conf.read() is called
if conf_file:
# Fail if we can't read the configuration file.
if not os.access(conf_file, os.R_OK):
self.module.fail_json(
msg="cannot read configuration file", conf_file=conf_file,
results=[],
)
else:
conf.config_file_path = conf_file
# Read the configuration file
conf.read()
# Turn off debug messages in the output
conf.debuglevel = 0
# Set whether to check gpg signatures
conf.gpgcheck = not disable_gpg_check
conf.localpkg_gpgcheck = not disable_gpg_check
# Don't prompt for user confirmations
conf.assumeyes = True
# Set certificate validation
conf.sslverify = sslverify
# Set installroot
conf.installroot = installroot
# Load substitutions from the filesystem
conf.substitutions.update_from_etc(installroot)
# Handle different DNF versions immutable mutable datatypes and
# dnf v1/v2/v3
#
# In DNF < 3.0 are lists, and modifying them works
# In DNF >= 3.0 < 3.6 are lists, but modifying them doesn't work
# In DNF >= 3.6 have been turned into tuples, to communicate that modifying them doesn't work
#
# https://www.happyassassin.net/2018/06/27/adams-debugging-adventures-the-immutable-mutable-object/
#
# Set excludes
if self.exclude:
_excludes = list(conf.exclude)
_excludes.extend(self.exclude)
conf.exclude = _excludes
# Set disable_excludes
if self.disable_excludes:
_disable_excludes = list(conf.disable_excludes)
if self.disable_excludes not in _disable_excludes:
_disable_excludes.append(self.disable_excludes)
conf.disable_excludes = _disable_excludes
# Set releasever
if self.releasever is not None:
conf.substitutions['releasever'] = self.releasever
if conf.substitutions.get('releasever') is None:
self.module.warn(
'Unable to detect release version (use "releasever" option to specify release version)'
)
# values of conf.substitutions are expected to be strings
# setting this to an empty string instead of None appears to mimic the DNF CLI behavior
conf.substitutions['releasever'] = ''
# Set skip_broken (in dnf this is strict=0)
if self.skip_broken:
conf.strict = 0
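# For illustration (a comment-only sketch, not part of the module): this
# mirrors `dnf --skip-broken` on the CLI, e.g.
#   base = dnf.Base()
#   base.conf.strict = 0  # unresolvable specs are skipped instead of fatal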
# Set best
if self.nobest:
conf.best = 0
if self.download_only:
conf.downloadonly = True
if self.download_dir:
conf.destdir = self.download_dir
if self.cacheonly:
conf.cacheonly = True
# Default in dnf upstream is true
conf.clean_requirements_on_remove = self.autoremove
# Default in dnf (and module default) is True
conf.install_weak_deps = self.install_weak_deps
def _specify_repositories(self, base, disablerepo, enablerepo):
"""Enable and disable repositories matching the provided patterns."""
base.read_all_repos()
repos = base.repos
# Disable repositories
for repo_pattern in disablerepo:
if repo_pattern:
for repo in repos.get_matching(repo_pattern):
repo.disable()
# Enable repositories
for repo_pattern in enablerepo:
if repo_pattern:
for repo in repos.get_matching(repo_pattern):
repo.enable()
for repo in base.repos.iter_enabled():
if self.disable_gpg_check:
repo.gpgcheck = False
repo.repo_gpgcheck = False
def _base(self, conf_file, disable_gpg_check, disablerepo, enablerepo, installroot, sslverify):
"""Return a fully configured dnf Base object."""
base = dnf.Base()
self._configure_base(base, conf_file, disable_gpg_check, installroot, sslverify)
try:
# this method has been supported in dnf-4.2.17-6 or later
# https://bugzilla.redhat.com/show_bug.cgi?id=1788212
base.setup_loggers()
except AttributeError:
pass
try:
base.init_plugins(set(self.disable_plugin), set(self.enable_plugin))
base.pre_configure_plugins()
except AttributeError:
pass # older versions of dnf didn't require this and don't have these methods
self._specify_repositories(base, disablerepo, enablerepo)
try:
base.configure_plugins()
except AttributeError:
pass # older versions of dnf didn't require this and don't have these methods
try:
if self.update_cache:
try:
base.update_cache()
except dnf.exceptions.RepoError as e:
self.module.fail_json(
msg="{0}".format(to_text(e)),
results=[],
rc=1
)
base.fill_sack(load_system_repo='auto')
except dnf.exceptions.RepoError as e:
self.module.fail_json(
msg="{0}".format(to_text(e)),
results=[],
rc=1
)
add_security_filters = getattr(base, "add_security_filters", None)
if callable(add_security_filters):
filters = {}
if self.bugfix:
filters.setdefault('types', []).append('bugfix')
if self.security:
filters.setdefault('types', []).append('security')
if filters:
add_security_filters('eq', **filters)
else:
filters = []
if self.bugfix:
key = {'advisory_type__eq': 'bugfix'}
filters.append(base.sack.query().upgrades().filter(**key))
if self.security:
key = {'advisory_type__eq': 'security'}
filters.append(base.sack.query().upgrades().filter(**key))
if filters:
base._update_security_filters = filters
return base
def list_items(self, command):
"""List package info based on the command."""
# Rename updates to upgrades
if command == 'updates':
command = 'upgrades'
# Return the corresponding packages
if command in ['installed', 'upgrades', 'available']:
results = [
self._package_dict(package)
for package in getattr(self.base.sack.query(), command)()]
# Return the enabled repository ids
elif command in ['repos', 'repositories']:
results = [
{'repoid': repo.id, 'state': 'enabled'}
for repo in self.base.repos.iter_enabled()]
# Return any matching packages
else:
packages = dnf.subject.Subject(command).get_best_query(self.base.sack)
results = [self._package_dict(package) for package in packages]
self.module.exit_json(msg="", results=results)
def _is_installed(self, pkg):
installed = self.base.sack.query().installed()
package_spec = {}
name, arch = self._split_package_arch(pkg)
if arch:
package_spec['arch'] = arch
package_details = self._packagename_dict(pkg)
if package_details:
package_details['epoch'] = int(package_details['epoch'])
package_spec.update(package_details)
else:
package_spec['name'] = name
return bool(installed.filter(**package_spec))
def _is_newer_version_installed(self, pkg_name):
candidate_pkg = self._packagename_dict(pkg_name)
if not candidate_pkg:
# The user didn't provide a versioned rpm, so version checking is
# not required
return False
installed = self.base.sack.query().installed()
installed_pkg = installed.filter(name=candidate_pkg['name']).run()
if installed_pkg:
installed_pkg = installed_pkg[0]
# this looks weird but one is a dict and the other is a dnf.Package
evr_cmp = self._compare_evr(
installed_pkg.epoch, installed_pkg.version, installed_pkg.release,
candidate_pkg['epoch'], candidate_pkg['version'], candidate_pkg['release'],
)
return evr_cmp == 1
else:
return False
def _mark_package_install(self, pkg_spec, upgrade=False):
"""Mark the package for install."""
is_newer_version_installed = self._is_newer_version_installed(pkg_spec)
is_installed = self._is_installed(pkg_spec)
try:
if is_newer_version_installed:
if self.allow_downgrade:
# dnf only does allow_downgrade, we have to handle this ourselves
# because it allows a possibility for non-idempotent transactions
# on a system's package set (pending the yum repo has many old
# NVRs indexed)
if upgrade:
if is_installed: # Case 1
# TODO: Is this case reachable?
#
# _is_installed() demands a name (*not* NVR) or else is always False
# (wildcards are treated literally).
#
# Meanwhile, _is_newer_version_installed() demands something versioned
# or else is always false.
#
# I fail to see how they can both be true at the same time for any
# given pkg_spec. -re
self.base.upgrade(pkg_spec)
else: # Case 2
self.base.install(pkg_spec, strict=self.base.conf.strict)
else: # Case 3
self.base.install(pkg_spec, strict=self.base.conf.strict)
else: # Case 4, Nothing to do, report back
pass
elif is_installed: # A potentially older (or same) version is installed
if upgrade: # Case 5
self.base.upgrade(pkg_spec)
else: # Case 6, Nothing to do, report back
pass
else: # Case 7, The package is not installed, simply install it
self.base.install(pkg_spec, strict=self.base.conf.strict)
return {'failed': False, 'msg': '', 'failure': '', 'rc': 0}
except dnf.exceptions.MarkingError as e:
return {
'failed': True,
'msg': "No package {0} available.".format(pkg_spec),
'failure': " ".join((pkg_spec, to_native(e))),
'rc': 1,
"results": []
}
except dnf.exceptions.DepsolveError as e:
return {
'failed': True,
'msg': "Depsolve Error occurred for package {0}.".format(pkg_spec),
'failure': " ".join((pkg_spec, to_native(e))),
'rc': 1,
"results": []
}
except dnf.exceptions.Error as e:
if to_text("already installed") in to_text(e):
return {'failed': False, 'msg': '', 'failure': ''}
else:
return {
'failed': True,
'msg': "Unknown Error occurred for package {0}.".format(pkg_spec),
'failure': " ".join((pkg_spec, to_native(e))),
'rc': 1,
"results": []
}
def _whatprovides(self, filepath):
self.base.read_all_repos()
available = self.base.sack.query().available()
# Search in file
files_filter = available.filter(file=filepath)
# And Search in provides
pkg_spec = files_filter.union(available.filter(provides=filepath)).run()
if pkg_spec:
return pkg_spec[0].name
def _parse_spec_group_file(self):
pkg_specs, grp_specs, module_specs, filenames = [], [], [], []
already_loaded_comps = False # Only load this if necessary, it's slow
for name in self.names:
if '://' in name:
name = fetch_file(self.module, name)
filenames.append(name)
elif name.endswith(".rpm"):
filenames.append(name)
elif name.startswith('/'):
# like "dnf install /usr/bin/vi"
pkg_spec = self._whatprovides(name)
if pkg_spec:
pkg_specs.append(pkg_spec)
continue
elif name.startswith("@") or ('/' in name):
if not already_loaded_comps:
self.base.read_comps()
already_loaded_comps = True
grp_env_mdl_candidate = name[1:].strip()
if self.with_modules:
mdl = self.module_base._get_modules(grp_env_mdl_candidate)
if mdl[0]:
module_specs.append(grp_env_mdl_candidate)
else:
grp_specs.append(grp_env_mdl_candidate)
else:
grp_specs.append(grp_env_mdl_candidate)
else:
pkg_specs.append(name)
return pkg_specs, grp_specs, module_specs, filenames
def _update_only(self, pkgs):
not_installed = []
for pkg in pkgs:
if self._is_installed(
self._package_dict(pkg)["nevra"] if isinstance(pkg, dnf.package.Package) else pkg
):
try:
if isinstance(pkg, dnf.package.Package):
self.base.package_upgrade(pkg)
else:
self.base.upgrade(pkg)
except Exception as e:
self.module.fail_json(
msg="Error occurred attempting update_only operation: {0}".format(to_native(e)),
results=[],
rc=1,
)
else:
not_installed.append(pkg)
return not_installed
def _install_remote_rpms(self, filenames):
if int(dnf.__version__.split(".")[0]) >= 2:
pkgs = list(sorted(self.base.add_remote_rpms(list(filenames)), reverse=True))
else:
pkgs = []
try:
for filename in filenames:
pkgs.append(self.base.add_remote_rpm(filename))
except IOError as e:
if to_text("Can not load RPM file") in to_text(e):
self.module.fail_json(
msg="Error occurred attempting remote rpm install of package: {0}. {1}".format(filename, to_native(e)),
results=[],
rc=1,
)
if self.update_only:
self._update_only(pkgs)
else:
for pkg in pkgs:
try:
if self._is_newer_version_installed(self._package_dict(pkg)['nevra']):
if self.allow_downgrade:
self.base.package_install(pkg, strict=self.base.conf.strict)
else:
self.base.package_install(pkg, strict=self.base.conf.strict)
except Exception as e:
self.module.fail_json(
msg="Error occurred attempting remote rpm operation: {0}".format(to_native(e)),
results=[],
rc=1,
)
def _is_module_installed(self, module_spec):
if self.with_modules:
module_spec = module_spec.strip()
module_list, nsv = self.module_base._get_modules(module_spec)
enabled_streams = self.base._moduleContainer.getEnabledStream(nsv.name)
if enabled_streams:
if nsv.stream:
if nsv.stream in enabled_streams:
return True # The provided stream was found
else:
return False # The provided stream was not found
else:
return True # No stream provided, but module found
return False # seems like a sane default
def ensure(self):
response = {
'msg': "",
'changed': False,
'results': [],
'rc': 0
}
# Accumulate failures. Package management modules install what they can
# and fail with a message about what they can't.
failure_response = {
'msg': "",
'failures': [],
'results': [],
'rc': 1
}
# Autoremove is called alone
# Jump to remove path where base.autoremove() is run
if not self.names and self.autoremove:
self.names = []
self.state = 'absent'
if self.names == ['*'] and self.state == 'latest':
try:
self.base.upgrade_all()
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occurred attempting to upgrade all packages"
self.module.fail_json(**failure_response)
else:
pkg_specs, group_specs, module_specs, filenames = self._parse_spec_group_file()
pkg_specs = [p.strip() for p in pkg_specs]
filenames = [f.strip() for f in filenames]
groups = []
environments = []
for group_spec in (g.strip() for g in group_specs):
group = self.base.comps.group_by_pattern(group_spec)
if group:
groups.append(group.id)
else:
environment = self.base.comps.environment_by_pattern(group_spec)
if environment:
environments.append(environment.id)
else:
self.module.fail_json(
msg="No group {0} available.".format(group_spec),
results=[],
)
if self.state in ['installed', 'present']:
# Install files.
self._install_remote_rpms(filenames)
for filename in filenames:
response['results'].append("Installed {0}".format(filename))
# Install modules
if module_specs and self.with_modules:
for module in module_specs:
try:
if not self._is_module_installed(module):
response['results'].append("Module {0} installed.".format(module))
self.module_base.install([module])
self.module_base.enable([module])
except dnf.exceptions.MarkingErrors as e:
failure_response['failures'].append(' '.join((module, to_native(e))))
# Install groups.
for group in groups:
try:
group_pkg_count_installed = self.base.group_install(group, dnf.const.GROUP_PACKAGE_TYPES)
if group_pkg_count_installed == 0:
response['results'].append("Group {0} already installed.".format(group))
else:
response['results'].append("Group {0} installed.".format(group))
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occurred attempting to install group: {0}".format(group)
self.module.fail_json(**failure_response)
except dnf.exceptions.Error as e:
# In dnf 2.0 if all the mandatory packages in a group do
# not install, an error is raised. We want to capture
# this but still install as much as possible.
failure_response['failures'].append(" ".join((group, to_native(e))))
for environment in environments:
try:
self.base.environment_install(environment, dnf.const.GROUP_PACKAGE_TYPES)
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occurred attempting to install environment: {0}".format(environment)
self.module.fail_json(**failure_response)
except dnf.exceptions.Error as e:
failure_response['failures'].append(" ".join((environment, to_native(e))))
if module_specs and not self.with_modules:
# This means that the group or env wasn't found in comps
self.module.fail_json(
msg="No group {0} available.".format(module_specs[0]),
results=[],
)
# Install packages.
if self.update_only:
not_installed = self._update_only(pkg_specs)
for spec in not_installed:
response['results'].append("Packages providing %s not installed due to update_only specified" % spec)
else:
for pkg_spec in pkg_specs:
install_result = self._mark_package_install(pkg_spec)
if install_result['failed']:
if install_result['msg']:
failure_response['msg'] += install_result['msg']
failure_response['failures'].append(self._sanitize_dnf_error_msg_install(pkg_spec, install_result['failure']))
else:
if install_result['msg']:
response['results'].append(install_result['msg'])
elif self.state == 'latest':
# "latest" is same as "installed" for filenames.
self._install_remote_rpms(filenames)
for filename in filenames:
response['results'].append("Installed {0}".format(filename))
# Upgrade modules
if module_specs and self.with_modules:
for module in module_specs:
try:
if self._is_module_installed(module):
response['results'].append("Module {0} upgraded.".format(module))
self.module_base.upgrade([module])
except dnf.exceptions.MarkingErrors as e:
failure_response['failures'].append(' '.join((module, to_native(e))))
for group in groups:
try:
try:
self.base.group_upgrade(group)
response['results'].append("Group {0} upgraded.".format(group))
except dnf.exceptions.CompsError:
if not self.update_only:
# If not already installed, try to install.
group_pkg_count_installed = self.base.group_install(group, dnf.const.GROUP_PACKAGE_TYPES)
if group_pkg_count_installed == 0:
response['results'].append("Group {0} already installed.".format(group))
else:
response['results'].append("Group {0} installed.".format(group))
except dnf.exceptions.Error as e:
failure_response['failures'].append(" ".join((group, to_native(e))))
for environment in environments:
try:
try:
self.base.environment_upgrade(environment)
except dnf.exceptions.CompsError:
# If not already installed, try to install.
self.base.environment_install(environment, dnf.const.GROUP_PACKAGE_TYPES)
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occurred attempting to install environment: {0}".format(environment)
except dnf.exceptions.Error as e:
failure_response['failures'].append(" ".join((environment, to_native(e))))
if self.update_only:
not_installed = self._update_only(pkg_specs)
for spec in not_installed:
response['results'].append("Packages providing %s not installed due to update_only specified" % spec)
else:
for pkg_spec in pkg_specs:
# Previously we forced base.conf.best=True here.
# However in 2.11+ there is a self.nobest option, so defer to that.
# Note, however, that just because nobest isn't set, doesn't mean that
# base.conf.best is actually true. We only force it false in
# _configure_base(), we never set it to true, and it can default to false.
# Thus, we still need to explicitly set it here.
self.base.conf.best = not self.nobest
install_result = self._mark_package_install(pkg_spec, upgrade=True)
if install_result['failed']:
if install_result['msg']:
failure_response['msg'] += install_result['msg']
failure_response['failures'].append(self._sanitize_dnf_error_msg_install(pkg_spec, install_result['failure']))
else:
if install_result['msg']:
response['results'].append(install_result['msg'])
else:
# state == absent
if filenames:
self.module.fail_json(
msg="Cannot remove paths -- please specify package name.",
results=[],
)
# Remove modules
if module_specs and self.with_modules:
for module in module_specs:
try:
if self._is_module_installed(module):
response['results'].append("Module {0} removed.".format(module))
self.module_base.remove([module])
self.module_base.disable([module])
self.module_base.reset([module])
except dnf.exceptions.MarkingErrors as e:
failure_response['failures'].append(' '.join((module, to_native(e))))
for group in groups:
try:
self.base.group_remove(group)
except dnf.exceptions.CompsError:
# Group is already uninstalled.
pass
except AttributeError:
# Group either isn't installed or wasn't marked installed at install time
# because of DNF bug
#
# This is necessary until the upstream dnf API bug is fixed where installing
# a group via the dnf API doesn't actually mark the group as installed
# https://bugzilla.redhat.com/show_bug.cgi?id=1620324
pass
for environment in environments:
try:
self.base.environment_remove(environment)
except dnf.exceptions.CompsError:
# Environment is already uninstalled.
pass
installed = self.base.sack.query().installed()
for pkg_spec in pkg_specs:
# short-circuit installed check for wildcard matching
if '*' in pkg_spec:
try:
self.base.remove(pkg_spec)
except dnf.exceptions.MarkingError as e:
is_failure, handled_remove_error = self._sanitize_dnf_error_msg_remove(pkg_spec, to_native(e))
if is_failure:
failure_response['failures'].append('{0} - {1}'.format(pkg_spec, to_native(e)))
else:
response['results'].append(handled_remove_error)
continue
installed_pkg = dnf.subject.Subject(pkg_spec).get_best_query(
sack=self.base.sack).installed().run()
for pkg in installed_pkg:
self.base.remove(str(pkg))
# Like the dnf CLI we want to allow recursive removal of dependent
# packages
self.allowerasing = True
if self.autoremove:
self.base.autoremove()
try:
# NOTE for people who go down the rabbit hole of figuring out why
# resolve() throws DepsolveError here on dep conflict, but not when
# called from the CLI: It's controlled by conf.best. When best is
# set, Hawkey will fail the goal, and resolve() in dnf.base.Base
# will throw. Otherwise if it's not set, the update (install) will
# be (almost silently) removed from the goal, and Hawkey will report
# success. Note that in this case, similar to the CLI, skip_broken
# does nothing to help here, so we don't take it into account at
# all.
if not self.base.resolve(allow_erasing=self.allowerasing):
if failure_response['failures']:
failure_response['msg'] = 'Failed to install some of the specified packages'
self.module.fail_json(**failure_response)
response['msg'] = "Nothing to do"
self.module.exit_json(**response)
else:
response['changed'] = True
# If packages got installed/removed, add them to the results.
# We do this early so we can use it for both check_mode and not.
if self.download_only:
install_action = 'Downloaded'
else:
install_action = 'Installed'
for package in self.base.transaction.install_set:
response['results'].append("{0}: {1}".format(install_action, package))
for package in self.base.transaction.remove_set:
response['results'].append("Removed: {0}".format(package))
if failure_response['failures']:
failure_response['msg'] = 'Failed to install some of the specified packages'
self.module.fail_json(**failure_response)
if self.module.check_mode:
response['msg'] = "Check mode: No changes made, but would have if not in check mode"
self.module.exit_json(**response)
try:
if self.download_only and self.download_dir and self.base.conf.destdir:
dnf.util.ensure_dir(self.base.conf.destdir)
self.base.repos.all().pkgdir = self.base.conf.destdir
self.base.download_packages(self.base.transaction.install_set)
except dnf.exceptions.DownloadError as e:
self.module.fail_json(
msg="Failed to download packages: {0}".format(to_text(e)),
results=[],
)
# Validate GPG. This is NOT done in dnf.Base (it's done in the
# upstream CLI subclass of dnf.Base)
if not self.disable_gpg_check:
for package in self.base.transaction.install_set:
fail = False
gpgres, gpgerr = self.base._sig_check_pkg(package)
if gpgres == 0: # validated successfully
continue
elif gpgres == 1: # validation failed, install cert?
try:
self.base._get_key_for_package(package)
except dnf.exceptions.Error as e:
fail = True
else: # fatal error
fail = True
if fail:
msg = 'Failed to validate GPG signature for {0}: {1}'.format(package, gpgerr)
self.module.fail_json(msg)
if self.download_only:
# No further work left to do, and the results were already updated above.
# Just return them.
self.module.exit_json(**response)
else:
tid = self.base.do_transaction()
if tid is not None:
transaction = self.base.history.old([tid])[0]
if transaction.return_code:
failure_response['failures'].append(transaction.output())
if failure_response['failures']:
failure_response['msg'] = 'Failed to install some of the specified packages'
self.module.fail_json(**failure_response)
self.module.exit_json(**response)
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occurred: {0}".format(to_native(e))
self.module.fail_json(**failure_response)
except dnf.exceptions.Error as e:
if to_text("already installed") in to_text(e):
response['changed'] = False
response['results'].append("Package already installed: {0}".format(to_native(e)))
self.module.exit_json(**response)
else:
failure_response['msg'] = "Unknown Error occurred: {0}".format(to_native(e))
self.module.fail_json(**failure_response)
def run(self):
"""The main function."""
# Check if autoremove is called correctly
if self.autoremove:
if LooseVersion(dnf.__version__) < LooseVersion('2.0.1'):
self.module.fail_json(
msg="Autoremove requires dnf>=2.0.1. Current dnf version is %s" % dnf.__version__,
results=[],
)
# Check if download_dir is called correctly
if self.download_dir:
if LooseVersion(dnf.__version__) < LooseVersion('2.6.2'):
self.module.fail_json(
msg="download_dir requires dnf>=2.6.2. Current dnf version is %s" % dnf.__version__,
results=[],
)
if self.update_cache and not self.names and not self.list:
self.base = self._base(
self.conf_file, self.disable_gpg_check, self.disablerepo,
self.enablerepo, self.installroot, self.sslverify
)
self.module.exit_json(
msg="Cache updated",
changed=False,
results=[],
rc=0
)
# Set state as installed by default
# This is not set in AnsibleModule() because the following shouldn't happen
# - dnf: autoremove=yes state=installed
if self.state is None:
self.state = 'installed'
if self.list:
self.base = self._base(
self.conf_file, self.disable_gpg_check, self.disablerepo,
self.enablerepo, self.installroot, self.sslverify
)
self.list_items(self.list)
else:
# Note: base takes a long time to run so we want to check for failure
# before running it.
if not self.download_only and not dnf.util.am_i_root():
self.module.fail_json(
msg="This command has to be run under the root user.",
results=[],
)
self.base = self._base(
self.conf_file, self.disable_gpg_check, self.disablerepo,
self.enablerepo, self.installroot, self.sslverify
)
if self.with_modules:
self.module_base = dnf.module.module_base.ModuleBase(self.base)
self.ensure()
def main():
# state=installed name=pkgspec
# state=removed name=pkgspec
# state=latest name=pkgspec
#
# informational commands:
# list=installed
# list=updates
# list=available
# list=repos
# list=pkgspec
# Extend yumdnf_argument_spec with dnf-specific features that will never be
# backported to yum because yum is now in "maintenance mode" upstream
yumdnf_argument_spec['argument_spec']['allowerasing'] = dict(default=False, type='bool')
yumdnf_argument_spec['argument_spec']['nobest'] = dict(default=False, type='bool')
yumdnf_argument_spec['argument_spec']['use_backend'] = dict(default='auto', choices=['auto', 'dnf4', 'dnf5'])
module = AnsibleModule(
**yumdnf_argument_spec
)
module_implementation = DnfModule(module)
try:
module_implementation.run()
except dnf.exceptions.RepoError as de:
module.fail_json(
msg="Failed to synchronize repodata: {0}".format(to_native(de)),
rc=1,
results=[],
changed=False
)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,590 |
ansible.builtin.dnf module ignores skip_broken setting
|
### Summary
`ansible.builtin.dnf` module ignores the `skip_broken` setting
### Issue Type
Bug Report
### Component Name
ansible.builtin.dnf
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.4]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.11/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.2 (main, Feb 8 2023, 00:00:00) [GCC 13.0.1 20230208 (Red Hat 13.0.1-0)] (/usr/bin/python3)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /etc/ansible/ansible.cfg
```
### OS / Environment
Fedora release 38 (Thirty Eight)
### Steps to Reproduce
I try to install packages on a clean AlmaLinux 9.1 system with ansible.builtin.dnf:
```
- name: "Test dnf"
ansible.builtin.dnf:
name:
- epel-release
- python3-mysqlclient
state: present
skip_broken: true
```
### Expected Results
Ideally, the `epel-release` package should be installed and the `python3-mysqlclient` package skipped (it is absent in the standard repos)
### Actual Results
```console
I got:
fatal: [orion-ng]: FAILED! => {"changed": false, "failures": ["No package python3-mysqlclient available."], "msg": "Failed to install some of the specified packages", "rc": 1, "results": []}
```
An alternative method works perfectly:
```
- name: "Test dnf"
ansible.builtin.command:
cmd: "dnf install --skip-broken -y epel-release python3-mysqlclient"
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80590
|
https://github.com/ansible/ansible/pull/80795
|
916a20fccd20140befb15ec6060153bbb1bb9eed
|
753866873113ba42e4f5772da86914a895add52e
| 2023-04-21T03:19:13Z |
python
| 2023-11-07T06:56:12Z |
test/integration/targets/dnf/tasks/skip_broken_and_nobest.yml
|
# Tests for skip_broken and allowerasing
# (and a bit of nobest because the test case is too good to pass up)
#
# There are a lot of fairly complex, corner cases we test here especially towards the end.
#
# The test repo is generated from the "skip-broken" repo in this repository:
# https://github.com/relrod/ansible-ci-contrived-yum-repos
#
# It is laid out like this:
#
# There are three packages, `broken-a`, `broken-b`, `broken-c`.
#
# * broken-a has three versions: 1.2.3, 1.2.3.4, 1.2.4, 2.0.0.
# * 1.2.3 and 1.2.4 have no dependencies
# * 1.2.3.4 and 2.0.0 both depend on a non-existent package (to break depsolving)
#
# * broken-b depends on broken-a-1.2.3
# * broken-c depends on broken-a-1.2.4
# * broken-d depends on broken-a (no version constraint)
#
# This allows us to test various upgrades, downgrades, and installs with broken dependencies.
# skip_broken should usually be successful in the upgrade/downgrade case, it will just do nothing.
#
# There is a nobest testcase or two thrown in, simply because this organization provides a very
# good test case for that behavior as well. For example, just installing "broken-a" with no version
# will try to install 2.0.0 which is broken. With nobest=true, it will fall back to 1.2.4. Similar
# for upgrading.
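# For example (a sketch of the nobest behavior exercised below), with
# broken-a-1.2.4 installed:
#   dnf: name=broken-a state=latest              -> Depsolve Error (best candidate 2.0.0 is broken)
#   dnf: name=broken-a state=latest nobest=true  -> succeeds, settling on broken-a-1.2.4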
- block:
- name: Set up test yum repo
yum_repository:
name: skip-broken
description: ansible-test skip-broken test repo
baseurl: "{{ skip_broken_repo_baseurl }}"
gpgcheck: no
repo_gpgcheck: no
- name: Install two packages
dnf:
name:
- broken-a-1.2.3
- broken-b
# This will fail. We have broken-a-1.2.3, and broken-b with a strong
# dependency on it. broken-c has a strong dependency on broken-a-1.2.4.
# Since installing that would break broken-b, we get a conflict.
- name: Try installing a third package, intentionally broken
dnf:
name:
- broken-c
ignore_errors: true
register: dnf_fail
- assert:
that:
- dnf_fail is failed
- "'Depsolve Error' in dnf_fail.msg"
# skip_broken should still install nothing because the conflict is
# still an issue. But it should skip over the broken packages and not
# fail.
- name: Try installing it with skip_broken
dnf:
name:
- broken-c
skip_broken: true
register: skip_broken_res
- name: Assert that nothing got installed
assert:
that:
- skip_broken_res.msg == 'Nothing to do'
- skip_broken_res.rc == 0
- skip_broken_res.results == []
- name: Remove all test packages
dnf:
name:
- broken-*
state: absent
# broken-d depends on (unversioned) broken-a.
# broken-a-2.0.0 has a broken dependency that doesn't exist.
# skip_broken should cause us to skip our explicit broken-a-2.0.0
# and bring in broken-a-1.2.4 as a dep of broken-d.
- name: Ensure proper failure with explicit broken version
dnf:
name:
- broken-a-2.0.0
- broken-d
ignore_errors: true
register: dnf_fail
- name: Assert that nothing got installed
assert:
that:
- dnf_fail is failed
- "'Depsolve Error' in dnf_fail.msg"
- name: skip_broken with explicit version
dnf:
name:
- broken-a-2.0.0
- broken-d
skip_broken: true
register: skip_broken_res
- name: Assert that the right things got installed
assert:
that:
- skip_broken_res.rc == 0
- skip_broken_res.results|length == 2
- skip_broken_res.results|select("contains", "Installed: broken-a-1.2.4")|length > 0
- skip_broken_res.results|select("contains", "Installed: broken-d-1.2.5")|length > 0
- name: Remove all test packages
dnf:
name:
- broken-*
state: absent
# Walk the logic of _mark_package_install() here
# We need to use a full-ish NVR/wildcard. _is_newer_version_installed()
# will be false otherwise, no matter what. This might be a bug.
# Relatedly, the real "Case 1" in the code seemingly can't be reached:
# _is_newer_version_installed wants NVR, _is_installed wants name.
# Both can't be true at the same time given one pkg_spec. Thus, we start
# at "Case 2"
# prereq
- name: Install broken-a-1.2.4
dnf:
name:
- broken-a-1.2.4
state: present
# Case 2: newer version is installed, allow_downgrade is true,
# is_installed is false since we gave full NVR.
# "upgrade" to broken-a-1.2.3, allow_downgrade=true
- name: Do an "upgrade" to an older version of broken-a, allow_downgrade=true
dnf:
name:
- broken-a-1.2.3-1*
state: latest
allow_downgrade: true
check_mode: true
register: res
- assert:
that:
- res is changed
- res.results|select("contains", "Installed: broken-a-1.2.3")|length > 0
# Still case 2, but with broken package to test skip_broken
# skip_broken: false
- name: Do an "upgrade" to an older known broken version of broken-a, allow_downgrade=true, skip_broken=false
dnf:
name:
- broken-a-1.2.3.4-1*
state: latest
allow_downgrade: true
check_mode: true
ignore_errors: true
register: res
- assert:
that:
# 1.2.3.4 has non-existent dep. Fail without skip_broken.
- res is failed
- "'Depsolve Error' in res.msg"
# skip_broken: true
- name: Do an "upgrade" to an older known broken version of broken-a, allow_downgrade=true, skip_broken=true
dnf:
name:
- broken-a-1.2.3.4-1*
state: latest
allow_downgrade: true
skip_broken: true
check_mode: true
register: res
- assert:
that:
- res is not changed
- res.rc == 0
- res.msg == "Nothing to do"
# Case 3: newer version installed, allow_downgrade=true, but
# upgrade=false (i.e., state: present or installed)
- name: Install an older version of broken-a than currently installed
dnf:
name:
- broken-a-1.2.3-1*
state: present
allow_downgrade: true
check_mode: true
register: res
- assert:
that:
- res is changed
- res.results|select("contains", "Installed: broken-a-1.2.3")|length > 0
# Case 3 still, with broken package and skip_broken tests like above.
- name: Install an older known broken version of broken-a, allow_downgrade=true, skip_broken=false
dnf:
name:
- broken-a-1.2.3.4-1*
state: present
allow_downgrade: true
check_mode: true
ignore_errors: true
register: res
- assert:
that:
# 1.2.3.4 has non-existent dep. Fail without skip_broken.
- res is failed
- "'Depsolve Error' in res.msg"
# skip_broken: true
- name: Install an older known broken version of broken-a, allow_downgrade=true, skip_broken=true
dnf:
name:
- broken-a-1.2.3.4-1*
state: present
allow_downgrade: true
skip_broken: true
check_mode: true
register: res
- assert:
that:
- res is not changed
- res.rc == 0
- res.msg == "Nothing to do"
# Case 4: "upgrade" to broken-a-1.2.3, allow_downgrade=false
# is_newer_version_installed is true, allow_downgrade is false
- name: Do an "upgrade" to an older version of broken-a, allow_downgrade=false
dnf:
name:
#- broken-a-1.2.3-1*
- broken-a-1.2.3-1.el7.x86_64
state: latest
allow_downgrade: false
check_mode: true
register: res
- assert:
that:
- res is not changed
- res.rc == 0
- res.msg == "Nothing to do"
# skip_broken doesn't apply to case 5 or 6 (older version installed).
# base.upgrade doesn't allow a strict= kwarg. However, nobest works here.
# Case 5: older version of package is installed, we specify name, no version
# otherwise we'd land in an earlier case. At this point, 1.2.4 is installed.
# broken-a-2.0.0 is available as an update but has a broken dependency.
- name: Update broken-a without nobest=true
dnf:
name:
- broken-a
state: latest
ignore_errors: true
register: dnf_fail
- assert:
that:
- dnf_fail is failed
- "'Depsolve Error' in dnf_fail.msg"
# With nobest: true, we will be "successful" but not actually perform
# any upgrade. That is, we are content not having the "best"/latest
# version.
- name: Update broken-a with nobest=true
dnf:
name:
- broken-a
state: latest
nobest: true
register: nobest
- assert:
that:
- nobest.rc == 0
- nobest.results == []
# Case 6: Current or older version already installed (no version specified
# in our pkg_spec) and we're requesting present, not latest.
#
# This isn't really relevant to skip_broken or nobest, but let's test it
# anyway since we're already walking the logic of the method.
- name: Install broken-a (even though it is already installed)
dnf:
name:
- broken-a
state: present
register: res
- assert:
that:
- res is not changed
# Case 7 is already tested quite extensively above in the earlier tests.
always:
- name: Remove test yum repo
yum_repository:
name: skip-broken
state: absent
- name: Remove all test packages installed
yum:
name:
- broken-*
state: absent
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,077 |
Iptables --set-dscp unknown option for iptables version v1.8.7 (nf_tables)
|
### Summary
When I try to use set_dscp_mark in ansible.builtin.iptables, I get this error:
```
FAILED! => {"changed": false, "cmd": "/usr/sbin/iptables -t mangle -A POSTROUTING -p udp --destination-port 7777 --set-dscp 399962", "msg": "iptables v1.8.7 (nf_tables): unknown option \"--set-dscp\"\nTry `iptables -h' or 'iptables --help' for more information.", "rc": 2, "stderr": "iptables v1.8.7 (nf_tables): unknown option \"--set-dscp\"\nTry `iptables -h' or 'iptables --help' for more information.\n", "stderr_lines": ["iptables v1.8.7 (nf_tables): unknown option \"--set-dscp\"", "Try `iptables -h' or 'iptables --help' for more information."], "stdout": "", "stdout_lines": []}
```
### Issue Type
Bug Report
### Component Name
ansible.builtin.iptables
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.2]
config file = /etc/ansible/ansible.cfg
configured module search path = ['~/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.10/site-packages/ansible
ansible collection location = ~/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.10.2 (main, Jan 15 2022, 19:56:27) [GCC 11.1.0]
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
nothing
```
### OS / Environment
Host
Linux XXX 5.15.21-1-MANJARO #1 SMP PREEMPT Sun Feb 6 12:21:42 UTC 2022 x86_64 GNU/Linux
Guest
Linux debian 5.10.0-11-amd64 #1 SMP Debian 5.10.92-1 (2022-01-18) x86_64 GNU/Linux
### Steps to Reproduce
```
- name: Modify mangle table to mark packet for queue discipline on port 7777
ansible.builtin.iptables:
table: mangle
chain: POSTROUTING
protocol: udp
destination_port: 7777
set_dscp_mark: 6666:2
```
### Expected Results
I expect the iptables rule to be installed on the system.
### Actual Results
```console
fatal: [tcvm]: FAILED! => {"changed": false, "cmd": "/usr/sbin/iptables -t mangle -A POSTROUTING -p udp --destination-port 7777 --set-dscp 399962", "msg": "iptables v1.8.7 (nf_tables): unknown option \"--set-dscp\"\nTry `iptables -h' or 'iptables --help' for more information.", "rc": 2, "stderr": "iptables v1.8.7 (nf_tables): unknown option \"--set-dscp\"\nTry `iptables -h' or 'iptables --help' for more information.\n", "stderr_lines": ["iptables v1.8.7 (nf_tables): unknown option \"--set-dscp\"", "Try `iptables -h' or 'iptables --help' for more information."], "stdout": "", "stdout_lines": []}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77077
|
https://github.com/ansible/ansible/pull/82145
|
567c78f9a1c7d6a5326dcd63a2a69ba9db6c3a6d
|
40baf5eace3848cd99b43a7c6732048c6072da60
| 2022-02-20T18:08:31Z |
python
| 2023-11-07T15:19:25Z |
changelogs/fragments/77077_iptables.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,077 |
Iptables --set-dscp unknown option for iptables version v1.8.7 (nf_tables)
|
### Summary
When I try to use set_dscp_mark in ansible.builtin.iptables, I get this error:
```
FAILED! => {"changed": false, "cmd": "/usr/sbin/iptables -t mangle -A POSTROUTING -p udp --destination-port 7777 --set-dscp 399962", "msg": "iptables v1.8.7 (nf_tables): unknown option \"--set-dscp\"\nTry `iptables -h' or 'iptables --help' for more information.", "rc": 2, "stderr": "iptables v1.8.7 (nf_tables): unknown option \"--set-dscp\"\nTry `iptables -h' or 'iptables --help' for more information.\n", "stderr_lines": ["iptables v1.8.7 (nf_tables): unknown option \"--set-dscp\"", "Try `iptables -h' or 'iptables --help' for more information."], "stdout": "", "stdout_lines": []}
```
### Issue Type
Bug Report
### Component Name
ansible.builtin.iptables
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.2]
config file = /etc/ansible/ansible.cfg
configured module search path = ['~/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.10/site-packages/ansible
ansible collection location = ~/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.10.2 (main, Jan 15 2022, 19:56:27) [GCC 11.1.0]
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
nothing
```
### OS / Environment
Host
Linux XXX 5.15.21-1-MANJARO #1 SMP PREEMPT Sun Feb 6 12:21:42 UTC 2022 x86_64 GNU/Linux
Guest
Linux debian 5.10.0-11-amd64 #1 SMP Debian 5.10.92-1 (2022-01-18) x86_64 GNU/Linux
### Steps to Reproduce
```
- name: Modify mangle table to mark packet for queue discipline on port 7777
ansible.builtin.iptables:
table: mangle
chain: POSTROUTING
protocol: udp
destination_port: 7777
set_dscp_mark: 6666:2
```
### Expected Results
I expect the iptables rule to be installed on the system.
### Actual Results
```console
fatal: [tcvm]: FAILED! => {"changed": false, "cmd": "/usr/sbin/iptables -t mangle -A POSTROUTING -p udp --destination-port 7777 --set-dscp 399962", "msg": "iptables v1.8.7 (nf_tables): unknown option \"--set-dscp\"\nTry `iptables -h' or 'iptables --help' for more information.", "rc": 2, "stderr": "iptables v1.8.7 (nf_tables): unknown option \"--set-dscp\"\nTry `iptables -h' or 'iptables --help' for more information.\n", "stderr_lines": ["iptables v1.8.7 (nf_tables): unknown option \"--set-dscp\"", "Try `iptables -h' or 'iptables --help' for more information."], "stdout": "", "stdout_lines": []}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77077
|
https://github.com/ansible/ansible/pull/82145
|
567c78f9a1c7d6a5326dcd63a2a69ba9db6c3a6d
|
40baf5eace3848cd99b43a7c6732048c6072da60
| 2022-02-20T18:08:31Z |
python
| 2023-11-07T15:19:25Z |
lib/ansible/modules/iptables.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2015, Linus Unnebäck <[email protected]>
# Copyright: (c) 2017, Sébastien DA ROCHA <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import annotations
DOCUMENTATION = r'''
---
module: iptables
short_description: Modify iptables rules
version_added: "2.0"
author:
- Linus Unnebäck (@LinusU) <[email protected]>
- Sébastien DA ROCHA (@sebastiendarocha)
description:
- M(ansible.builtin.iptables) is used to set up, maintain, and inspect the tables of IP packet
filter rules in the Linux kernel.
- This module does not handle the saving and/or loading of rules, but rather
only manipulates the current rules that are present in memory. This is the
same as the behaviour of the C(iptables) and C(ip6tables) command which
this module uses internally.
extends_documentation_fragment: action_common_attributes
attributes:
check_mode:
support: full
diff_mode:
support: none
platform:
platforms: linux
notes:
- This module just deals with individual rules. If you need advanced
chaining of rules the recommended way is to template the iptables restore
file.
options:
table:
description:
- This option specifies the packet matching table which the command should operate on.
- If the kernel is configured with automatic module loading, an attempt will be made
to load the appropriate module for that table if it is not already there.
type: str
choices: [ filter, nat, mangle, raw, security ]
default: filter
state:
description:
- Whether the rule should be absent or present.
type: str
choices: [ absent, present ]
default: present
action:
description:
- Whether the rule should be appended at the bottom or inserted at the top.
- If the rule already exists the chain will not be modified.
type: str
choices: [ append, insert ]
default: append
version_added: "2.2"
rule_num:
description:
- Insert the rule as the given rule number.
- This works only with O(action=insert).
type: str
version_added: "2.5"
ip_version:
description:
- Which version of the IP protocol this rule should apply to.
type: str
choices: [ ipv4, ipv6 ]
default: ipv4
chain:
description:
- Specify the iptables chain to modify.
- This could be a user-defined chain or one of the standard iptables chains, like
V(INPUT), V(FORWARD), V(OUTPUT), V(PREROUTING), V(POSTROUTING), V(SECMARK) or V(CONNSECMARK).
type: str
protocol:
description:
- The protocol of the rule or of the packet to check.
- The specified protocol can be one of V(tcp), V(udp), V(udplite), V(icmp), V(ipv6-icmp) or V(icmpv6),
V(esp), V(ah), V(sctp) or the special keyword V(all), or it can be a numeric value,
representing one of these protocols or a different one.
- A protocol name from C(/etc/protocols) is also allowed.
- A V(!) argument before the protocol inverts the test.
- The number zero is equivalent to all.
- V(all) will match with all protocols and is taken as default when this option is omitted.
type: str
source:
description:
- Source specification.
- Address can be either a network name, a hostname, a network IP address
(with /mask), or a plain IP address.
- Hostnames will be resolved once only, before the rule is submitted to
the kernel. Please note that specifying any name to be resolved with
a remote query such as DNS is a really bad idea.
- The mask can be either a network mask or a plain number, specifying
the number of 1's at the left side of the network mask. Thus, a mask
of 24 is equivalent to 255.255.255.0. A V(!) argument before the
address specification inverts the sense of the address.
type: str
destination:
description:
- Destination specification.
- Address can be either a network name, a hostname, a network IP address
(with /mask), or a plain IP address.
- Hostnames will be resolved once only, before the rule is submitted to
the kernel. Please note that specifying any name to be resolved with
a remote query such as DNS is a really bad idea.
- The mask can be either a network mask or a plain number, specifying
the number of 1's at the left side of the network mask. Thus, a mask
of 24 is equivalent to 255.255.255.0. A V(!) argument before the
address specification inverts the sense of the address.
type: str
tcp_flags:
description:
- TCP flags specification.
- O(tcp_flags) expects a dict with the two keys C(flags) and C(flags_set).
type: dict
version_added: "2.4"
suboptions:
flags:
description:
- List of flags you want to examine.
type: list
elements: str
flags_set:
description:
- Flags to be set.
type: list
elements: str
match:
description:
- Specifies a match to use, that is, an extension module that tests for
a specific property.
- The set of matches makes up the condition under which a target is invoked.
- Matches are evaluated first to last if specified as an array and work in short-circuit
fashion, i.e. if one extension yields false, evaluation will stop.
type: list
elements: str
default: []
jump:
description:
- This specifies the target of the rule; i.e., what to do if the packet matches it.
- The target can be a user-defined chain (other than the one
this rule is in), one of the special builtin targets which decide the
fate of the packet immediately, or an extension (see EXTENSIONS
below).
- If this option is omitted in a rule (and the goto parameter
is not used), then matching the rule will have no effect on the
packet's fate, but the counters on the rule will be incremented.
type: str
gateway:
description:
- This specifies the IP address of host to send the cloned packets.
- This option is only valid when O(jump) is set to V(TEE).
type: str
version_added: "2.8"
log_prefix:
description:
- Specifies a log text for the rule. Only makes sense with a LOG jump.
type: str
version_added: "2.5"
log_level:
description:
- Logging level according to the syslogd-defined priorities.
- The value can be strings or numbers from 0-7.
- This parameter is only applicable if O(jump) is set to V(LOG).
type: str
version_added: "2.8"
choices: [ '0', '1', '2', '3', '4', '5', '6', '7', 'emerg', 'alert', 'crit', 'error', 'warning', 'notice', 'info', 'debug' ]
goto:
description:
- This specifies that the processing should continue in a user specified chain.
- Unlike the jump argument return will not continue processing in
this chain but instead in the chain that called us via jump.
type: str
in_interface:
description:
- Name of an interface via which a packet was received (only for packets
entering the V(INPUT), V(FORWARD) and V(PREROUTING) chains).
- When the V(!) argument is used before the interface name, the sense is inverted.
- If the interface name ends in a V(+), then any interface which begins with
this name will match.
- If this option is omitted, any interface name will match.
type: str
out_interface:
description:
- Name of an interface via which a packet is going to be sent (for
packets entering the V(FORWARD), V(OUTPUT) and V(POSTROUTING) chains).
- When the V(!) argument is used before the interface name, the sense is inverted.
- If the interface name ends in a V(+), then any interface which begins
with this name will match.
- If this option is omitted, any interface name will match.
type: str
fragment:
description:
- This means that the rule only refers to second and further fragments
of fragmented packets.
- Since there is no way to tell the source or destination ports of such
a packet (or ICMP type), such a packet will not match any rules which specify them.
- When the "!" argument precedes fragment argument, the rule will only match head fragments,
or unfragmented packets.
type: str
set_counters:
description:
- This enables the administrator to initialize the packet and byte
counters of a rule (during V(INSERT), V(APPEND), V(REPLACE) operations).
type: str
source_port:
description:
- Source port or port range specification.
- This can either be a service name or a port number.
- An inclusive range can also be specified, using the format C(first:last).
- If the first port is omitted, V(0) is assumed; if the last is omitted, V(65535) is assumed.
- If the first port is greater than the second one they will be swapped.
type: str
destination_port:
description:
- "Destination port or port range specification. This can either be
a service name or a port number. An inclusive range can also be
specified, using the format first:last. If the first port is omitted,
'0' is assumed; if the last is omitted, '65535' is assumed. If the
first port is greater than the second one they will be swapped.
This is only valid if the rule also specifies one of the following
protocols: tcp, udp, dccp or sctp."
type: str
destination_ports:
description:
- This specifies multiple destination port numbers or port ranges to match in the multiport module.
- It can only be used in conjunction with the protocols tcp, udp, udplite, dccp and sctp.
type: list
elements: str
default: []
version_added: "2.11"
to_ports:
description:
- This specifies a destination port or range of ports to use, without
this, the destination port is never altered.
- This is only valid if the rule also specifies one of the protocol
V(tcp), V(udp), V(dccp) or V(sctp).
type: str
to_destination:
description:
- This specifies a destination address to use with C(DNAT).
- Without this, the destination address is never altered.
type: str
version_added: "2.1"
to_source:
description:
- This specifies a source address to use with C(SNAT).
- Without this, the source address is never altered.
type: str
version_added: "2.2"
syn:
description:
- This allows matching packets that have the SYN bit set and the ACK
and RST bits unset.
- When negated, this matches all packets with the RST or the ACK bits set.
type: str
choices: [ ignore, match, negate ]
default: ignore
version_added: "2.5"
set_dscp_mark:
description:
- This allows specifying a DSCP mark to be added to packets.
It takes either an integer or hex value.
- Mutually exclusive with O(set_dscp_mark_class).
type: str
version_added: "2.1"
set_dscp_mark_class:
description:
- This allows specifying a predefined DiffServ class which will be
translated to the corresponding DSCP mark.
- Mutually exclusive with O(set_dscp_mark).
type: str
version_added: "2.1"
comment:
description:
- This specifies a comment that will be added to the rule.
type: str
ctstate:
description:
- A list of the connection states to match in the conntrack module.
- Possible values are V(INVALID), V(NEW), V(ESTABLISHED), V(RELATED), V(UNTRACKED), V(SNAT), V(DNAT).
type: list
elements: str
default: []
src_range:
description:
- Specifies the source IP range to match in the iprange module.
type: str
version_added: "2.8"
dst_range:
description:
- Specifies the destination IP range to match in the iprange module.
type: str
version_added: "2.8"
match_set:
description:
- Specifies a set name which can be defined by ipset.
- Must be used together with the match_set_flags parameter.
- When the V(!) argument is prepended then it inverts the rule.
- Uses the iptables set extension.
type: str
version_added: "2.11"
match_set_flags:
description:
- Specifies the necessary flags for the match_set parameter.
- Must be used together with the match_set parameter.
- Uses the iptables set extension.
type: str
choices: [ "src", "dst", "src,dst", "dst,src" ]
version_added: "2.11"
limit:
description:
- Specifies the maximum average number of matches to allow per second.
- The number can specify units explicitly, using C(/second), C(/minute),
C(/hour) or C(/day), or parts of them (so V(5/second) is the same as
V(5/s)).
type: str
limit_burst:
description:
- Specifies the maximum burst before the above limit kicks in.
type: str
version_added: "2.1"
uid_owner:
description:
- Specifies the UID or username to use in match by owner rule.
- Since Ansible 2.6, when the C(!) argument is prepended, it inverts
the rule to apply instead to all users except the one specified.
type: str
version_added: "2.1"
gid_owner:
description:
- Specifies the GID or group to use in match by owner rule.
type: str
version_added: "2.9"
reject_with:
description:
- 'Specifies the error packet type to return while rejecting. It implies
"jump: REJECT".'
type: str
version_added: "2.1"
icmp_type:
description:
- This allows specification of the ICMP type, which can be a numeric
ICMP type, type/code pair, or one of the ICMP type names shown by the
command 'iptables -p icmp -h'
type: str
version_added: "2.2"
flush:
description:
- Flushes the specified table and chain of all rules.
- If no chain is specified then the entire table is purged.
- Ignores all other parameters.
type: bool
default: false
version_added: "2.2"
policy:
description:
- Set the policy for the chain to the given target.
- Only built-in chains can have policies.
- This parameter requires the O(chain) parameter.
- If you specify this parameter, all other parameters will be ignored.
- This parameter is used to set default policy for the given O(chain).
Do not confuse this with O(jump) parameter.
type: str
choices: [ ACCEPT, DROP, QUEUE, RETURN ]
version_added: "2.2"
wait:
description:
- Wait N seconds for the xtables lock to prevent multiple instances of
the program from running concurrently.
type: str
version_added: "2.10"
chain_management:
description:
- If V(true) and O(state) is V(present), the chain will be created if needed.
- If V(true) and O(state) is V(absent), the chain will be deleted if the only
other parameter passed are O(chain) and optionally O(table).
type: bool
default: false
version_added: "2.13"
numeric:
description:
- This parameter controls the running of the list action of iptables, which is used internally by the module.
- Does not affect the actual functionality. Use this if iptables hangs when creating a chain or altering policy.
- If V(true), then iptables skips the DNS lookup of the IP addresses in a chain when it uses the list action.
- Listing is used internally, for example, when setting a policy or creating a chain.
type: bool
default: false
version_added: "2.15"
'''
EXAMPLES = r'''
- name: Block specific IP
ansible.builtin.iptables:
chain: INPUT
source: 8.8.8.8
jump: DROP
become: yes
- name: Forward port 80 to 8600
ansible.builtin.iptables:
table: nat
chain: PREROUTING
in_interface: eth0
protocol: tcp
match: tcp
destination_port: 80
jump: REDIRECT
to_ports: 8600
comment: Redirect web traffic to port 8600
become: yes
- name: Allow related and established connections
ansible.builtin.iptables:
chain: INPUT
ctstate: ESTABLISHED,RELATED
jump: ACCEPT
become: yes
- name: Allow new incoming SYN packets on TCP port 22 (SSH)
ansible.builtin.iptables:
chain: INPUT
protocol: tcp
destination_port: 22
ctstate: NEW
syn: match
jump: ACCEPT
comment: Accept new SSH connections.
- name: Match on IP ranges
ansible.builtin.iptables:
chain: FORWARD
src_range: 192.168.1.100-192.168.1.199
dst_range: 10.0.0.1-10.0.0.50
jump: ACCEPT
- name: Allow source IPs defined in ipset "admin_hosts" on port 22
ansible.builtin.iptables:
chain: INPUT
match_set: admin_hosts
match_set_flags: src
destination_port: 22
jump: ACCEPT
- name: Tag all outbound tcp packets with DSCP mark 8
ansible.builtin.iptables:
chain: OUTPUT
jump: DSCP
table: mangle
set_dscp_mark: 8
protocol: tcp
- name: Tag all outbound tcp packets with DSCP DiffServ class CS1
ansible.builtin.iptables:
chain: OUTPUT
jump: DSCP
table: mangle
set_dscp_mark_class: CS1
protocol: tcp
# Create the user-defined chain ALLOWLIST
- iptables:
chain: ALLOWLIST
chain_management: true
# Delete the user-defined chain ALLOWLIST
- iptables:
chain: ALLOWLIST
chain_management: true
state: absent
- name: Insert a rule on line 5
ansible.builtin.iptables:
chain: INPUT
protocol: tcp
destination_port: 8080
jump: ACCEPT
action: insert
rule_num: 5
# Think twice before running the following task, as it may lock you out of the target system
- name: Set the policy for the INPUT chain to DROP
ansible.builtin.iptables:
chain: INPUT
policy: DROP
- name: Reject tcp with tcp-reset
ansible.builtin.iptables:
chain: INPUT
protocol: tcp
reject_with: tcp-reset
ip_version: ipv4
- name: Set tcp flags
ansible.builtin.iptables:
chain: OUTPUT
jump: DROP
protocol: tcp
tcp_flags:
flags: ALL
flags_set:
- ACK
- RST
- SYN
- FIN
- name: Iptables flush filter
ansible.builtin.iptables:
chain: "{{ item }}"
flush: yes
with_items: [ 'INPUT', 'FORWARD', 'OUTPUT' ]
- name: Iptables flush nat
ansible.builtin.iptables:
table: nat
chain: '{{ item }}'
flush: yes
with_items: [ 'INPUT', 'OUTPUT', 'PREROUTING', 'POSTROUTING' ]
- name: Log packets arriving into a user-defined chain
ansible.builtin.iptables:
chain: LOGGING
action: append
state: present
limit: 2/second
limit_burst: 20
log_prefix: "IPTABLES:INFO: "
log_level: info
- name: Allow connections on multiple ports
ansible.builtin.iptables:
chain: INPUT
protocol: tcp
destination_ports:
- "80"
- "443"
- "8081:8083"
jump: ACCEPT
'''
import re
from ansible.module_utils.compat.version import LooseVersion
from ansible.module_utils.basic import AnsibleModule
IPTABLES_WAIT_SUPPORT_ADDED = '1.4.20'
IPTABLES_WAIT_WITH_SECONDS_SUPPORT_ADDED = '1.6.0'
BINS = dict(
ipv4='iptables',
ipv6='ip6tables',
)
ICMP_TYPE_OPTIONS = dict(
ipv4='--icmp-type',
ipv6='--icmpv6-type',
)
def append_param(rule, param, flag, is_list):
if is_list:
for item in param:
append_param(rule, item, flag, False)
else:
if param is not None:
if param[0] == '!':
rule.extend(['!', flag, param[1:]])
else:
rule.extend([flag, param])
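# Illustrative examples (comments only, not executed): append_param() splits a
# leading '!' into its own token so iptables receives the negation separately:
#   append_param(rule, '!udp', '-p', False)  ->  rule += ['!', '-p', 'udp']
#   append_param(rule, 'udp', '-p', False)   ->  rule += ['-p', 'udp']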
def append_tcp_flags(rule, param, flag):
if param:
if 'flags' in param and 'flags_set' in param:
rule.extend([flag, ','.join(param['flags']), ','.join(param['flags_set'])])
def append_match_flag(rule, param, flag, negatable):
if param == 'match':
rule.extend([flag])
elif negatable and param == 'negate':
rule.extend(['!', flag])
def append_csv(rule, param, flag):
if param:
rule.extend([flag, ','.join(param)])
def append_match(rule, param, match):
if param:
rule.extend(['-m', match])
def append_jump(rule, param, jump):
if param:
rule.extend(['-j', jump])
def append_wait(rule, param, flag):
if param:
rule.extend([flag, param])
def construct_rule(params):
rule = []
append_wait(rule, params['wait'], '-w')
append_param(rule, params['protocol'], '-p', False)
append_param(rule, params['source'], '-s', False)
append_param(rule, params['destination'], '-d', False)
append_param(rule, params['match'], '-m', True)
append_tcp_flags(rule, params['tcp_flags'], '--tcp-flags')
append_param(rule, params['jump'], '-j', False)
if params.get('jump') and params['jump'].lower() == 'tee':
append_param(rule, params['gateway'], '--gateway', False)
append_param(rule, params['log_prefix'], '--log-prefix', False)
append_param(rule, params['log_level'], '--log-level', False)
append_param(rule, params['to_destination'], '--to-destination', False)
append_match(rule, params['destination_ports'], 'multiport')
append_csv(rule, params['destination_ports'], '--dports')
append_param(rule, params['to_source'], '--to-source', False)
append_param(rule, params['goto'], '-g', False)
append_param(rule, params['in_interface'], '-i', False)
append_param(rule, params['out_interface'], '-o', False)
append_param(rule, params['fragment'], '-f', False)
append_param(rule, params['set_counters'], '-c', False)
append_param(rule, params['source_port'], '--source-port', False)
append_param(rule, params['destination_port'], '--destination-port', False)
append_param(rule, params['to_ports'], '--to-ports', False)
append_param(rule, params['set_dscp_mark'], '--set-dscp', False)
append_param(
rule,
params['set_dscp_mark_class'],
'--set-dscp-class',
False)
append_match_flag(rule, params['syn'], '--syn', True)
if 'conntrack' in params['match']:
append_csv(rule, params['ctstate'], '--ctstate')
elif 'state' in params['match']:
append_csv(rule, params['ctstate'], '--state')
elif params['ctstate']:
append_match(rule, params['ctstate'], 'conntrack')
append_csv(rule, params['ctstate'], '--ctstate')
if 'iprange' in params['match']:
append_param(rule, params['src_range'], '--src-range', False)
append_param(rule, params['dst_range'], '--dst-range', False)
elif params['src_range'] or params['dst_range']:
append_match(rule, params['src_range'] or params['dst_range'], 'iprange')
append_param(rule, params['src_range'], '--src-range', False)
append_param(rule, params['dst_range'], '--dst-range', False)
if 'set' in params['match']:
append_param(rule, params['match_set'], '--match-set', False)
append_match_flag(rule, 'match', params['match_set_flags'], False)
elif params['match_set']:
append_match(rule, params['match_set'], 'set')
append_param(rule, params['match_set'], '--match-set', False)
append_match_flag(rule, 'match', params['match_set_flags'], False)
append_match(rule, params['limit'] or params['limit_burst'], 'limit')
append_param(rule, params['limit'], '--limit', False)
append_param(rule, params['limit_burst'], '--limit-burst', False)
append_match(rule, params['uid_owner'], 'owner')
append_match_flag(rule, params['uid_owner'], '--uid-owner', True)
append_param(rule, params['uid_owner'], '--uid-owner', False)
append_match(rule, params['gid_owner'], 'owner')
append_match_flag(rule, params['gid_owner'], '--gid-owner', True)
append_param(rule, params['gid_owner'], '--gid-owner', False)
if params['jump'] is None:
append_jump(rule, params['reject_with'], 'REJECT')
append_param(rule, params['reject_with'], '--reject-with', False)
append_param(
rule,
params['icmp_type'],
ICMP_TYPE_OPTIONS[params['ip_version']],
False)
append_match(rule, params['comment'], 'comment')
append_param(rule, params['comment'], '--comment', False)
return rule
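# Illustrative sketch (comments only, with a hypothetical DSCP value): for the
# mangle/DSCP case from issue #77077, params such as
#   {'protocol': 'udp', 'destination_port': '7777',
#    'set_dscp_mark': '0x20', 'jump': 'DSCP', ...}
# yield roughly
#   ['-p', 'udp', '-j', 'DSCP', '--destination-port', '7777', '--set-dscp', '0x20']
# Note that construct_rule() emits '--set-dscp' even when params['jump'] is
# unset, so without an explicit 'jump: DSCP' iptables rejects the option as
# unknown -- the failure shown in the issue report above.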
def push_arguments(iptables_path, action, params, make_rule=True):
cmd = [iptables_path]
cmd.extend(['-t', params['table']])
cmd.extend([action, params['chain']])
if action == '-I' and params['rule_num']:
cmd.extend([params['rule_num']])
if make_rule:
cmd.extend(construct_rule(params))
return cmd
def check_rule_present(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-C', params)
rc, stdout, stderr = module.run_command(cmd, check_rc=False)
return (rc == 0)
def append_rule(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-A', params)
module.run_command(cmd, check_rc=True)
def insert_rule(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-I', params)
module.run_command(cmd, check_rc=True)
def remove_rule(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-D', params)
module.run_command(cmd, check_rc=True)
def flush_table(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-F', params, make_rule=False)
module.run_command(cmd, check_rc=True)
def set_chain_policy(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-P', params, make_rule=False)
cmd.append(params['policy'])
module.run_command(cmd, check_rc=True)
def get_chain_policy(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-L', params, make_rule=False)
if module.params['numeric']:
cmd.append('--numeric')
rc, out, err = module.run_command(cmd, check_rc=True)
chain_header = out.split("\n")[0]
result = re.search(r'\(policy ([A-Z]+)\)', chain_header)
if result:
return result.group(1)
return None
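# Illustrative sketch (comments only): 'iptables -L INPUT' prints a header
# like 'Chain INPUT (policy ACCEPT)', so the regex above extracts 'ACCEPT'.
# User-defined chains print '(N references)' instead, which does not match,
# and the function returns None.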
def get_iptables_version(iptables_path, module):
cmd = [iptables_path, '--version']
rc, out, err = module.run_command(cmd, check_rc=True)
return out.split('v')[1].rstrip('\n')
def create_chain(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-N', params, make_rule=False)
module.run_command(cmd, check_rc=True)
def check_chain_present(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-L', params, make_rule=False)
if module.params['numeric']:
cmd.append('--numeric')
rc, out, err = module.run_command(cmd, check_rc=False)
return (rc == 0)
def delete_chain(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-X', params, make_rule=False)
module.run_command(cmd, check_rc=True)
def main():
module = AnsibleModule(
supports_check_mode=True,
argument_spec=dict(
table=dict(type='str', default='filter', choices=['filter', 'nat', 'mangle', 'raw', 'security']),
state=dict(type='str', default='present', choices=['absent', 'present']),
action=dict(type='str', default='append', choices=['append', 'insert']),
ip_version=dict(type='str', default='ipv4', choices=['ipv4', 'ipv6']),
chain=dict(type='str'),
rule_num=dict(type='str'),
protocol=dict(type='str'),
wait=dict(type='str'),
source=dict(type='str'),
to_source=dict(type='str'),
destination=dict(type='str'),
to_destination=dict(type='str'),
match=dict(type='list', elements='str', default=[]),
tcp_flags=dict(type='dict',
options=dict(
flags=dict(type='list', elements='str'),
flags_set=dict(type='list', elements='str'))
),
jump=dict(type='str'),
gateway=dict(type='str'),
log_prefix=dict(type='str'),
log_level=dict(type='str',
choices=['0', '1', '2', '3', '4', '5', '6', '7',
'emerg', 'alert', 'crit', 'error',
'warning', 'notice', 'info', 'debug'],
default=None,
),
goto=dict(type='str'),
in_interface=dict(type='str'),
out_interface=dict(type='str'),
fragment=dict(type='str'),
set_counters=dict(type='str'),
source_port=dict(type='str'),
destination_port=dict(type='str'),
destination_ports=dict(type='list', elements='str', default=[]),
to_ports=dict(type='str'),
set_dscp_mark=dict(type='str'),
set_dscp_mark_class=dict(type='str'),
comment=dict(type='str'),
ctstate=dict(type='list', elements='str', default=[]),
src_range=dict(type='str'),
dst_range=dict(type='str'),
match_set=dict(type='str'),
match_set_flags=dict(type='str', choices=['src', 'dst', 'src,dst', 'dst,src']),
limit=dict(type='str'),
limit_burst=dict(type='str'),
uid_owner=dict(type='str'),
gid_owner=dict(type='str'),
reject_with=dict(type='str'),
icmp_type=dict(type='str'),
syn=dict(type='str', default='ignore', choices=['ignore', 'match', 'negate']),
flush=dict(type='bool', default=False),
policy=dict(type='str', choices=['ACCEPT', 'DROP', 'QUEUE', 'RETURN']),
chain_management=dict(type='bool', default=False),
numeric=dict(type='bool', default=False),
),
mutually_exclusive=(
['set_dscp_mark', 'set_dscp_mark_class'],
['flush', 'policy'],
),
required_if=[
['jump', 'TEE', ['gateway']],
['jump', 'tee', ['gateway']],
]
)
args = dict(
changed=False,
failed=False,
ip_version=module.params['ip_version'],
table=module.params['table'],
chain=module.params['chain'],
flush=module.params['flush'],
rule=' '.join(construct_rule(module.params)),
state=module.params['state'],
chain_management=module.params['chain_management'],
)
ip_version = module.params['ip_version']
iptables_path = module.get_bin_path(BINS[ip_version], True)
# Check if chain option is required
if args['flush'] is False and args['chain'] is None:
module.fail_json(msg="Either chain or flush parameter must be specified.")
if module.params.get('log_prefix', None) or module.params.get('log_level', None):
if module.params['jump'] is None:
module.params['jump'] = 'LOG'
elif module.params['jump'] != 'LOG':
module.fail_json(msg="Logging options can only be used with the LOG jump target.")
# Check if wait option is supported
iptables_version = LooseVersion(get_iptables_version(iptables_path, module))
if iptables_version >= LooseVersion(IPTABLES_WAIT_SUPPORT_ADDED):
if iptables_version < LooseVersion(IPTABLES_WAIT_WITH_SECONDS_SUPPORT_ADDED):
module.params['wait'] = ''
else:
module.params['wait'] = None
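# Summary of the version tiers above (comments only): below 1.4.20, 'wait' is
# cleared to None; from 1.4.20 up to (but excluding) 1.6.0 it is cleared to ''
# -- both values are falsy, so append_wait() emits no '-w' token in either
# tier; from 1.6.0 onward the user-supplied seconds value is passed through
# as '-w <seconds>'.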
# Flush the table
if args['flush'] is True:
args['changed'] = True
if not module.check_mode:
flush_table(iptables_path, module, module.params)
# Set the policy
elif module.params['policy']:
current_policy = get_chain_policy(iptables_path, module, module.params)
if not current_policy:
module.fail_json(msg='Can\'t detect current policy')
changed = current_policy != module.params['policy']
args['changed'] = changed
if changed and not module.check_mode:
set_chain_policy(iptables_path, module, module.params)
# Delete the chain if there is no rule in the arguments
elif (args['state'] == 'absent') and not args['rule']:
chain_is_present = check_chain_present(
iptables_path, module, module.params
)
args['changed'] = chain_is_present
if (chain_is_present and args['chain_management'] and not module.check_mode):
delete_chain(iptables_path, module, module.params)
else:
# Create the chain if there are no rule arguments
if (args['state'] == 'present') and not args['rule']:
chain_is_present = check_chain_present(
iptables_path, module, module.params
)
args['changed'] = not chain_is_present
if (not chain_is_present and args['chain_management'] and not module.check_mode):
create_chain(iptables_path, module, module.params)
else:
insert = (module.params['action'] == 'insert')
rule_is_present = check_rule_present(
iptables_path, module, module.params
)
should_be_present = (args['state'] == 'present')
# Check if target is up to date
args['changed'] = (rule_is_present != should_be_present)
if args['changed'] is False:
# Target is already up to date
module.exit_json(**args)
# Modify if not check_mode
if not module.check_mode:
if should_be_present:
if insert:
insert_rule(iptables_path, module, module.params)
else:
append_rule(iptables_path, module, module.params)
else:
remove_rule(iptables_path, module, module.params)
module.exit_json(**args)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 82,146 |
Confusing wording of default shell determination in ansible.builtin.user
|
### Summary
The documentation about how systems determine the default shell is confusing.
At the [shell parameter](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/user_module.html#parameter-shell) is written: "See notes for details on how other operating systems determine the default shell by the underlying tool."
The [Notes section](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/user_module.html#notes) does not contain "details on how other operating systems determine the default shell".
If I read this, I would expect to find something along the lines of "On FreeBSD, the default shell is set by `dscl`".
Instead, it contains notes on which underlying tools are used by the module to create, modify and remove accounts. This is only helpful if the reader is already aware of the fact that these underlying tools _also_ set the default shell.
Furthermore, since the notes at the shell parameter explicitly spell out what the default shell on macOS is, the user has a reasonable expectation to find similarly worded notes for other operating systems in the Notes.
A possible solution: replace "See notes for details on how other operating systems determine the default shell by the underlying tool." with "On other operating systems, the default shell is determined by the underlying tool invoked by this module. See Notes for a per platform list of invoked tools."
### Issue Type
Documentation Report
### Component Name
lib/ansible/modules/user.py
### Ansible Version
```console
N/a
```
### Configuration
```console
N/a
```
### OS / Environment
N/a
### Additional Information
Previous bug report and attempted (but IMO inadequate) fix here: https://github.com/ansible/ansible/issues/59796.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/82146
|
https://github.com/ansible/ansible/pull/82147
|
40baf5eace3848cd99b43a7c6732048c6072da60
|
d46b042a9475d177b2ebd69ff3d6f22f702ff323
| 2023-11-06T19:31:08Z |
python
| 2023-11-07T15:20:46Z |
lib/ansible/modules/user.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Stephen Fromm <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import annotations
DOCUMENTATION = r'''
module: user
version_added: "0.2"
short_description: Manage user accounts
description:
- Manage user accounts and user attributes.
- For Windows targets, use the M(ansible.windows.win_user) module instead.
options:
name:
description:
- Name of the user to create, remove or modify.
type: str
required: true
aliases: [ user ]
uid:
description:
- Optionally sets the I(UID) of the user.
type: int
comment:
description:
- Optionally sets the description (aka I(GECOS)) of user account.
- On macOS, this defaults to the O(name) option.
type: str
hidden:
description:
- macOS only, optionally hide the user from the login window and system preferences.
- The default will be V(true) if the O(system) option is used.
type: bool
version_added: "2.6"
non_unique:
description:
- Optionally, when used with the -u option, this option allows changing the user ID to a non-unique value.
type: bool
default: no
version_added: "1.1"
seuser:
description:
- Optionally sets the seuser type (user_u) on selinux enabled systems.
type: str
version_added: "2.1"
group:
description:
- Optionally sets the user's primary group (takes a group name).
- On macOS, this defaults to V('staff').
type: str
groups:
description:
- A list of supplementary groups which the user is also a member of.
- By default, the user is removed from all other groups. Configure O(append) to modify this.
- When set to an empty string V(''),
the user is removed from all groups except the primary group.
- Before Ansible 2.3, the only input format allowed was a comma separated string.
type: list
elements: str
append:
description:
- If V(true), add the user to the groups specified in O(groups).
- If V(false), the user will only be added to the groups specified in O(groups),
removing them from all other groups.
type: bool
default: no
shell:
description:
- Optionally set the user's shell.
- On macOS, before Ansible 2.5, the default shell for non-system users was V(/usr/bin/false).
Since Ansible 2.5, the default shell for non-system users on macOS is V(/bin/bash).
- On other operating systems, the default shell is determined by the underlying
tool invoked by this module. See Notes for a per-platform list of invoked tools.
type: str
home:
description:
- Optionally set the user's home directory.
type: path
skeleton:
description:
- Optionally set a home skeleton directory.
- Requires O(create_home) option!
type: str
version_added: "2.0"
password:
description:
- If provided, set the user's password to the provided encrypted hash (Linux) or plain text password (macOS).
- B(Linux/Unix/POSIX:) Enter the hashed password as the value.
- See L(FAQ entry,https://docs.ansible.com/ansible/latest/reference_appendices/faq.html#how-do-i-generate-encrypted-passwords-for-the-user-module)
for details on various ways to generate the hash of a password.
- To create an account with a locked/disabled password on Linux systems, set this to V('!') or V('*').
- To create an account with a locked/disabled password on OpenBSD, set this to V('*************').
- B(OS X/macOS:) Enter the cleartext password as the value. Be sure to take relevant security precautions.
- On macOS, the password specified in the C(password) option will always be set, regardless of whether the user account already exists or not.
- When the password is passed as an argument, the C(user) module will always return changed to C(true) for macOS systems,
since macOS no longer provides access to the hashed passwords directly.
type: str
state:
description:
- Whether the account should exist or not, taking action if the state is different from what is stated.
- See this L(FAQ entry,https://docs.ansible.com/ansible/latest/reference_appendices/faq.html#running-on-macos-as-a-target)
for additional requirements when removing users on macOS systems.
type: str
choices: [ absent, present ]
default: present
create_home:
description:
- Unless set to V(false), a home directory will be made for the user
when the account is created or if the home directory does not exist.
- Changed from O(createhome) to O(create_home) in Ansible 2.5.
type: bool
default: yes
aliases: [ createhome ]
move_home:
description:
- "If set to V(true) when used with O(home), attempt to move the user's old home
directory to the specified directory if it isn't there already and the old home exists."
type: bool
default: no
system:
description:
- When creating an account O(state=present), setting this to V(true) makes the user a system account.
- This setting cannot be changed on existing users.
type: bool
default: no
force:
description:
- This only affects O(state=absent), it forces removal of the user and associated directories on supported platforms.
- The behavior is the same as C(userdel --force), check the man page for C(userdel) on your system for details and support.
- When used with O(generate_ssh_key=yes) this forces an existing key to be overwritten.
type: bool
default: no
remove:
description:
- This only affects O(state=absent), it attempts to remove directories associated with the user.
- The behavior is the same as C(userdel --remove), check the man page for details and support.
type: bool
default: no
login_class:
description:
- Optionally sets the user's login class, a feature of most BSD OSs.
type: str
generate_ssh_key:
description:
- Whether to generate a SSH key for the user in question.
- This will B(not) overwrite an existing SSH key unless used with O(force=yes).
type: bool
default: no
version_added: "0.9"
ssh_key_bits:
description:
- Optionally specify number of bits in SSH key to create.
- The default value depends on ssh-keygen.
type: int
version_added: "0.9"
ssh_key_type:
description:
- Optionally specify the type of SSH key to generate.
- Available SSH key types will depend on implementation
present on target host.
type: str
default: rsa
version_added: "0.9"
ssh_key_file:
description:
- Optionally specify the SSH key filename.
- If this is a relative filename then it will be relative to the user's home directory.
- This parameter defaults to V(.ssh/id_rsa).
type: path
version_added: "0.9"
ssh_key_comment:
description:
- Optionally define the comment for the SSH key.
type: str
default: ansible-generated on $HOSTNAME
version_added: "0.9"
ssh_key_passphrase:
description:
- Set a passphrase for the SSH key.
- If no passphrase is provided, the SSH key will default to having no passphrase.
type: str
version_added: "0.9"
update_password:
description:
- V(always) will update passwords if they differ.
- V(on_create) will only set the password for newly created users.
type: str
choices: [ always, on_create ]
default: always
version_added: "1.3"
expires:
description:
- An expiry time for the user in epoch, it will be ignored on platforms that do not support this.
- Currently supported on GNU/Linux, FreeBSD, and DragonFlyBSD.
- Since Ansible 2.6 you can remove the expiry time by specifying a negative value.
Currently supported on GNU/Linux and FreeBSD.
type: float
version_added: "1.9"
password_lock:
description:
- Lock the password (C(usermod -L), C(usermod -U), C(pw lock)).
- Implementation differs by platform. This option does not always mean the user cannot login using other methods.
- This option does not disable the user, only lock the password.
- This must be set to V(False) in order to unlock a currently locked password. The absence of this parameter will not unlock a password.
- Currently supported on Linux, FreeBSD, DragonFlyBSD, NetBSD, OpenBSD.
type: bool
version_added: "2.6"
local:
description:
- Forces the use of "local" command alternatives on platforms that implement it.
- This is useful in environments that use centralized authentication when you want to manipulate the local users
(in other words, it uses C(luseradd) instead of C(useradd)).
- This will check C(/etc/passwd) for an existing account before invoking commands. If the local account database
exists somewhere other than C(/etc/passwd), this setting will not work properly.
- This requires that the above commands as well as C(/etc/passwd) must exist on the target host, otherwise it will be a fatal error.
type: bool
default: no
version_added: "2.4"
profile:
description:
- Sets the profile of the user.
- Can set multiple profiles using comma separation.
- To delete all the profiles, use O(profile='').
- Currently supported on Illumos/Solaris. Does nothing when used with other platforms.
type: str
version_added: "2.8"
authorization:
description:
- Sets the authorization of the user.
- Can set multiple authorizations using comma separation.
- To delete all authorizations, use O(authorization='').
- Currently supported on Illumos/Solaris. Does nothing when used with other platforms.
type: str
version_added: "2.8"
role:
description:
- Sets the role of the user.
- Can set multiple roles using comma separation.
- To delete all roles, use O(role='').
- Currently supported on Illumos/Solaris. Does nothing when used with other platforms.
type: str
version_added: "2.8"
password_expire_max:
description:
- Maximum number of days between password change.
- Supported on Linux only.
type: int
version_added: "2.11"
password_expire_min:
description:
- Minimum number of days between password change.
- Supported on Linux only.
type: int
version_added: "2.11"
password_expire_warn:
description:
- Number of days of warning before password expires.
- Supported on Linux only.
type: int
version_added: "2.16"
umask:
description:
- Sets the umask of the user.
- Currently supported on Linux. Does nothing when used with other platforms.
- Requires O(local) is omitted or V(False).
type: str
version_added: "2.12"
extends_documentation_fragment: action_common_attributes
attributes:
check_mode:
support: full
diff_mode:
support: none
platform:
platforms: posix
notes:
- There are specific requirements per platform on user management utilities. However
they generally come pre-installed with the system and Ansible will require they
are present at runtime. If they are not, a descriptive error message will be shown.
- On SunOS platforms, the shadow file is backed up automatically since this module edits it directly.
On other platforms, the shadow file is backed up by the underlying tools used by this module.
- On macOS, this module uses C(dscl) to create, modify, and delete accounts. C(dseditgroup) is used to
modify group membership. Accounts are hidden from the login window by modifying
C(/Library/Preferences/com.apple.loginwindow.plist).
- On FreeBSD, this module uses C(pw useradd) and C(chpass) to create, C(pw usermod) and C(chpass) to modify,
C(pw userdel) to remove, C(pw lock) to lock, and C(pw unlock) to unlock accounts.
- On all other platforms, this module uses C(useradd) to create, C(usermod) to modify, and
C(userdel) to remove accounts.
seealso:
- module: ansible.posix.authorized_key
- module: ansible.builtin.group
- module: ansible.windows.win_user
author:
- Stephen Fromm (@sfromm)
'''
EXAMPLES = r'''
- name: Add the user 'johnd' with a specific uid and a primary group of 'admin'
ansible.builtin.user:
name: johnd
comment: John Doe
uid: 1040
group: admin
- name: Create a user 'johnd' with a home directory
ansible.builtin.user:
name: johnd
create_home: yes
- name: Add the user 'james' with a bash shell, appending the group 'admins' and 'developers' to the user's groups
ansible.builtin.user:
name: james
shell: /bin/bash
groups: admins,developers
append: yes
- name: Remove the user 'johnd'
ansible.builtin.user:
name: johnd
state: absent
remove: yes
- name: Create a 2048-bit SSH key for user jsmith in ~jsmith/.ssh/id_rsa
ansible.builtin.user:
name: jsmith
generate_ssh_key: yes
ssh_key_bits: 2048
ssh_key_file: .ssh/id_rsa
- name: Added a consultant whose account you want to expire
ansible.builtin.user:
name: james18
shell: /bin/zsh
groups: developers
expires: 1422403387
- name: Starting at Ansible 2.6, modify user, remove expiry time
ansible.builtin.user:
name: james18
expires: -1
- name: Set maximum expiration date for password
ansible.builtin.user:
name: ram19
password_expire_max: 10
- name: Set minimum expiration date for password
ansible.builtin.user:
name: pushkar15
password_expire_min: 5
- name: Set number of warning days for password expiration
ansible.builtin.user:
name: jane157
password_expire_warn: 30
'''
RETURN = r'''
append:
description: Whether or not to append the user to groups.
returned: When O(state) is V(present) and the user exists
type: bool
sample: True
comment:
description: Comment section from passwd file, usually the user name.
returned: When user exists
type: str
sample: Agent Smith
create_home:
description: Whether or not to create the home directory.
returned: When user does not exist and not check mode
type: bool
sample: True
force:
description: Whether or not a user account was forcibly deleted.
returned: When O(state) is V(absent) and user exists
type: bool
sample: False
group:
description: Primary user group ID
returned: When user exists
type: int
sample: 1001
groups:
description: List of groups of which the user is a member.
returned: When O(groups) is not empty and O(state) is V(present)
type: str
sample: 'chrony,apache'
home:
description: "Path to user's home directory."
returned: When O(state) is V(present)
type: str
sample: '/home/asmith'
move_home:
description: Whether or not to move an existing home directory.
returned: When O(state) is V(present) and user exists
type: bool
sample: False
name:
description: User account name.
returned: always
type: str
sample: asmith
password:
description: Masked value of the password.
returned: When O(state) is V(present) and O(password) is not empty
type: str
sample: 'NOT_LOGGING_PASSWORD'
remove:
description: Whether or not to remove the user account.
returned: When O(state) is V(absent) and user exists
type: bool
sample: True
shell:
description: User login shell.
returned: When O(state) is V(present)
type: str
sample: '/bin/bash'
ssh_fingerprint:
description: Fingerprint of generated SSH key.
returned: When O(generate_ssh_key) is V(True)
type: str
sample: '2048 SHA256:aYNHYcyVm87Igh0IMEDMbvW0QDlRQfE0aJugp684ko8 ansible-generated on host (RSA)'
ssh_key_file:
description: Path to generated SSH private key file.
returned: When O(generate_ssh_key) is V(True)
type: str
sample: /home/asmith/.ssh/id_rsa
ssh_public_key:
description: Generated SSH public key file.
returned: When O(generate_ssh_key) is V(True)
type: str
sample: >
'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC95opt4SPEC06tOYsJQJIuN23BbLMGmYo8ysVZQc4h2DZE9ugbjWWGS1/pweUGjVstgzMkBEeBCByaEf/RJKNecKRPeGd2Bw9DCj/bn5Z6rGfNENKBmo
618mUJBvdlEgea96QGjOwSB7/gmonduC7gsWDMNcOdSE3wJMTim4lddiBx4RgC9yXsJ6Tkz9BHD73MXPpT5ETnse+A3fw3IGVSjaueVnlUyUmOBf7fzmZbhlFVXf2Zi2rFTXqvbdGHKkzpw1U8eB8xFPP7y
d5u1u0e6Acju/8aZ/l17IDFiLke5IzlqIMRTEbDwLNeO84YQKWTm9fODHzhYe0yvxqLiK07 ansible-generated on host'
stderr:
description: Standard error from running commands.
returned: When stderr is returned by a command that is run
type: str
sample: Group wheels does not exist
stdout:
description: Standard output from running commands.
returned: When standard output is returned by the command that is run
type: str
sample:
system:
description: Whether or not the account is a system account.
returned: When O(system) is passed to the module and the account does not exist
type: bool
sample: True
uid:
description: User ID of the user account.
returned: When O(uid) is passed to the module
type: int
sample: 1044
'''
import ctypes.util
import grp
import calendar
import os
import re
import pty
import pwd
import select
import shutil
import socket
import subprocess
import time
import math
from ansible.module_utils import distro
from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.common.sys_info import get_platform_subclass
import ansible.module_utils.compat.typing as t
class StructSpwdType(ctypes.Structure):
_fields_ = [
('sp_namp', ctypes.c_char_p),
('sp_pwdp', ctypes.c_char_p),
('sp_lstchg', ctypes.c_long),
('sp_min', ctypes.c_long),
('sp_max', ctypes.c_long),
('sp_warn', ctypes.c_long),
('sp_inact', ctypes.c_long),
('sp_expire', ctypes.c_long),
('sp_flag', ctypes.c_ulong),
]
try:
_LIBC = ctypes.cdll.LoadLibrary(
t.cast(
str,
ctypes.util.find_library('c')
)
)
_LIBC.getspnam.argtypes = (ctypes.c_char_p,)
_LIBC.getspnam.restype = ctypes.POINTER(StructSpwdType)
HAVE_SPWD = True
except AttributeError:
HAVE_SPWD = False
_HASH_RE = re.compile(r'[^a-zA-Z0-9./=]')
def getspnam(b_name):
return _LIBC.getspnam(b_name).contents
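# Illustrative use of the ctypes-backed shadow lookup above (requires
# permission to read the shadow database, typically root):
#
#     entry = getspnam(to_bytes('jsmith'))
#     print(entry.sp_min, entry.sp_max, entry.sp_warn)
#
# For an unknown name glibc returns a NULL pointer, so dereferencing
# .contents raises ValueError; callers below catch exactly that.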
class User(object):
"""
This is a generic User manipulation class that is subclassed
based on platform.
A subclass may wish to override the following action methods:-
- create_user()
- remove_user()
- modify_user()
- ssh_key_gen()
- ssh_key_fingerprint()
- user_exists()
All subclasses MUST define platform and distribution (which may be None).
"""
platform = 'Generic'
distribution = None # type: str | None
PASSWORDFILE = '/etc/passwd'
SHADOWFILE = '/etc/shadow' # type: str | None
SHADOWFILE_EXPIRE_INDEX = 7
LOGIN_DEFS = '/etc/login.defs'
DATE_FORMAT = '%Y-%m-%d'
def __new__(cls, *args, **kwargs):
new_cls = get_platform_subclass(User)
return super(cls, new_cls).__new__(new_cls)
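    # Illustrative: __new__ dispatches instantiation to the most specific
    # platform subclass, matched on the `platform` and `distribution` class
    # attributes, so e.g. User(module) on FreeBSD yields a FreeBsdUser.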
def __init__(self, module):
self.module = module
self.state = module.params['state']
self.name = module.params['name']
self.uid = module.params['uid']
self.hidden = module.params['hidden']
self.non_unique = module.params['non_unique']
self.seuser = module.params['seuser']
self.group = module.params['group']
self.comment = module.params['comment']
self.shell = module.params['shell']
self.password = module.params['password']
self.force = module.params['force']
self.remove = module.params['remove']
self.create_home = module.params['create_home']
self.move_home = module.params['move_home']
self.skeleton = module.params['skeleton']
self.system = module.params['system']
self.login_class = module.params['login_class']
self.append = module.params['append']
self.sshkeygen = module.params['generate_ssh_key']
self.ssh_bits = module.params['ssh_key_bits']
self.ssh_type = module.params['ssh_key_type']
self.ssh_comment = module.params['ssh_key_comment']
self.ssh_passphrase = module.params['ssh_key_passphrase']
self.update_password = module.params['update_password']
self.home = module.params['home']
self.expires = None
self.password_lock = module.params['password_lock']
self.groups = None
self.local = module.params['local']
self.profile = module.params['profile']
self.authorization = module.params['authorization']
self.role = module.params['role']
self.password_expire_max = module.params['password_expire_max']
self.password_expire_min = module.params['password_expire_min']
self.password_expire_warn = module.params['password_expire_warn']
self.umask = module.params['umask']
if self.umask is not None and self.local:
module.fail_json(msg="'umask' can not be used with 'local'")
if module.params['groups'] is not None:
self.groups = ','.join(module.params['groups'])
if module.params['expires'] is not None:
try:
self.expires = time.gmtime(module.params['expires'])
except Exception as e:
module.fail_json(msg="Invalid value for 'expires' %s: %s" % (self.expires, to_native(e)))
if module.params['ssh_key_file'] is not None:
self.ssh_file = module.params['ssh_key_file']
else:
self.ssh_file = os.path.join('.ssh', 'id_%s' % self.ssh_type)
if self.groups is None and self.append:
# Change the argument_spec in 2.14 and remove this warning
# required_by={'append': ['groups']}
module.warn("'append' is set, but no 'groups' are specified. Use 'groups' for appending new groups."
"This will change to an error in Ansible 2.14.")
def check_password_encrypted(self):
# Darwin needs cleartext password, so skip validation
if self.module.params['password'] and self.platform != 'Darwin':
maybe_invalid = False
# Allow setting certain passwords in order to disable the account
if self.module.params['password'] in set(['*', '!', '*************']):
maybe_invalid = False
else:
# : for delimiter, * for disable user, ! for lock user
# these characters are invalid in the password
if any(char in self.module.params['password'] for char in ':*!'):
maybe_invalid = True
if '$' not in self.module.params['password']:
maybe_invalid = True
else:
fields = self.module.params['password'].split("$")
if len(fields) >= 3:
# contains character outside the crypto constraint
if bool(_HASH_RE.search(fields[-1])):
maybe_invalid = True
# md5
if fields[1] == '1' and len(fields[-1]) != 22:
maybe_invalid = True
# sha256
if fields[1] == '5' and len(fields[-1]) != 43:
maybe_invalid = True
# sha512
if fields[1] == '6' and len(fields[-1]) != 86:
maybe_invalid = True
# yescrypt
if fields[1] == 'y' and len(fields[-1]) != 43:
maybe_invalid = True
else:
maybe_invalid = True
if maybe_invalid:
self.module.warn("The input password appears not to have been hashed. "
"The 'password' argument must be encrypted for this module to work properly.")
def execute_command(self, cmd, use_unsafe_shell=False, data=None, obey_checkmode=True):
if self.module.check_mode and obey_checkmode:
self.module.debug('In check mode, would have run: "%s"' % cmd)
return (0, '', '')
else:
# cast all args to strings ansible-modules-core/issues/4397
cmd = [str(x) for x in cmd]
return self.module.run_command(cmd, use_unsafe_shell=use_unsafe_shell, data=data)
def backup_shadow(self):
if not self.module.check_mode and self.SHADOWFILE:
return self.module.backup_local(self.SHADOWFILE)
def remove_user_userdel(self):
if self.local:
command_name = 'luserdel'
else:
command_name = 'userdel'
cmd = [self.module.get_bin_path(command_name, True)]
if self.force and not self.local:
cmd.append('-f')
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def create_user_useradd(self):
if self.local:
command_name = 'luseradd'
lgroupmod_cmd = self.module.get_bin_path('lgroupmod', True)
lchage_cmd = self.module.get_bin_path('lchage', True)
else:
command_name = 'useradd'
cmd = [self.module.get_bin_path(command_name, True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.seuser is not None:
cmd.append('-Z')
cmd.append(self.seuser)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
elif self.group_exists(self.name):
# use the -N option (no user group) if a group already
# exists with the same name as the user to prevent
# errors from useradd trying to create a group when
# USERGROUPS_ENAB is set in /etc/login.defs.
if self.local:
# luseradd uses -n instead of -N
cmd.append('-n')
else:
if os.path.exists('/etc/redhat-release'):
dist = distro.version()
major_release = int(dist.split('.')[0])
if major_release <= 5:
cmd.append('-n')
else:
cmd.append('-N')
elif os.path.exists('/etc/SuSE-release'):
# -N did not exist in useradd before SLE 11 and did not
# automatically create a group
dist = distro.version()
major_release = int(dist.split('.')[0])
if major_release >= 12:
cmd.append('-N')
else:
cmd.append('-N')
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
if not self.local:
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
            # If the specified path to the user's home contains parent directories
            # that do not exist and create_home is True, create the parent directory
            # first, since useradd cannot create it.
if self.create_home:
parent = os.path.dirname(self.home)
if not os.path.isdir(parent):
self.create_homedir(self.home)
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.expires is not None and not self.local:
cmd.append('-e')
if self.expires < time.gmtime(0):
cmd.append('')
else:
cmd.append(time.strftime(self.DATE_FORMAT, self.expires))
if self.password is not None:
cmd.append('-p')
if self.password_lock:
cmd.append('!%s' % self.password)
else:
cmd.append(self.password)
if self.create_home:
if not self.local:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
else:
cmd.append('-M')
if self.system:
cmd.append('-r')
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if not self.local or rc != 0:
return (rc, out, err)
if self.expires is not None:
if self.expires < time.gmtime(0):
lexpires = -1
else:
# Convert seconds since Epoch to days since Epoch
lexpires = int(math.floor(self.module.params['expires'])) // 86400
(rc, _out, _err) = self.execute_command([lchage_cmd, '-E', to_native(lexpires), self.name])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
if self.groups is None or len(self.groups) == 0:
return (rc, out, err)
for add_group in groups:
(rc, _out, _err) = self.execute_command([lgroupmod_cmd, '-M', self.name, add_group])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
return (rc, out, err)
def _check_usermod_append(self):
# check if this version of usermod can append groups
if self.local:
command_name = 'lusermod'
else:
command_name = 'usermod'
usermod_path = self.module.get_bin_path(command_name, True)
# for some reason, usermod --help cannot be used by non root
# on RH/Fedora, due to lack of execute bit for others
if not os.access(usermod_path, os.X_OK):
return False
cmd = [usermod_path, '--help']
(rc, data1, data2) = self.execute_command(cmd, obey_checkmode=False)
helpout = data1 + data2
# check if --append exists
lines = to_native(helpout).split('\n')
for line in lines:
if line.strip().startswith('-a, --append'):
return True
return False
def modify_user_usermod(self):
if self.local:
command_name = 'lusermod'
lgroupmod_cmd = self.module.get_bin_path('lgroupmod', True)
lgroupmod_add = set()
lgroupmod_del = set()
lchage_cmd = self.module.get_bin_path('lchage', True)
lexpires = None
else:
command_name = 'usermod'
cmd = [self.module.get_bin_path(command_name, True)]
info = self.user_info()
has_append = self._check_usermod_append()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(ginfo[2])
if self.groups is not None:
# get a list of all groups for the user, including the primary
current_groups = self.user_group_membership(exclude_primary=False)
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set(remove_existing=False, names_only=True)
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
if has_append:
cmd.append('-a')
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
if self.local:
if self.append:
lgroupmod_add = set(groups).difference(current_groups)
lgroupmod_del = set()
else:
lgroupmod_add = set(groups).difference(current_groups)
lgroupmod_del = set(current_groups).difference(groups)
else:
if self.append and not has_append:
cmd.append('-A')
cmd.append(','.join(group_diff))
else:
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
cmd.append('-d')
cmd.append(self.home)
if self.move_home:
cmd.append('-m')
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.expires is not None:
current_expires = self.user_password()[1] or '0'
current_expires = int(current_expires)
if self.expires < time.gmtime(0):
if current_expires >= 0:
if self.local:
lexpires = -1
else:
cmd.append('-e')
cmd.append('')
else:
# Convert days since Epoch to seconds since Epoch as struct_time
current_expire_date = time.gmtime(current_expires * 86400)
# Current expires is negative or we compare year, month, and day only
if current_expires < 0 or current_expire_date[:3] != self.expires[:3]:
if self.local:
# Convert seconds since Epoch to days since Epoch
lexpires = int(math.floor(self.module.params['expires'])) // 86400
else:
cmd.append('-e')
cmd.append(time.strftime(self.DATE_FORMAT, self.expires))
# Lock if no password or unlocked, unlock only if locked
if self.password_lock and not info[1].startswith('!'):
cmd.append('-L')
elif self.password_lock is False and info[1].startswith('!'):
# usermod will refuse to unlock a user with no password, module shows 'changed' regardless
cmd.append('-U')
if self.update_password == 'always' and self.password is not None and info[1].lstrip('!') != self.password.lstrip('!'):
# Remove options that are mutually exclusive with -p
cmd = [c for c in cmd if c not in ['-U', '-L']]
cmd.append('-p')
if self.password_lock:
# Lock the account and set the hash in a single command
cmd.append('!%s' % self.password)
else:
cmd.append(self.password)
(rc, out, err) = (None, '', '')
# skip if no usermod changes to be made
if len(cmd) > 1:
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if not self.local or not (rc is None or rc == 0):
return (rc, out, err)
if lexpires is not None:
(rc, _out, _err) = self.execute_command([lchage_cmd, '-E', to_native(lexpires), self.name])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
if len(lgroupmod_add) == 0 and len(lgroupmod_del) == 0:
return (rc, out, err)
for add_group in lgroupmod_add:
(rc, _out, _err) = self.execute_command([lgroupmod_cmd, '-M', self.name, add_group])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
for del_group in lgroupmod_del:
(rc, _out, _err) = self.execute_command([lgroupmod_cmd, '-m', self.name, del_group])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
return (rc, out, err)
def group_exists(self, group):
try:
# Try group as a gid first
grp.getgrgid(int(group))
return True
except (ValueError, KeyError):
try:
grp.getgrnam(group)
return True
except KeyError:
return False
def group_info(self, group):
if not self.group_exists(group):
return False
try:
# Try group as a gid first
return list(grp.getgrgid(int(group)))
except (ValueError, KeyError):
return list(grp.getgrnam(group))
def get_groups_set(self, remove_existing=True, names_only=False):
if self.groups is None:
return None
info = self.user_info()
groups = set(x.strip() for x in self.groups.split(',') if x)
group_names = set()
for g in groups.copy():
if not self.group_exists(g):
self.module.fail_json(msg="Group %s does not exist" % (g))
group_info = self.group_info(g)
if info and remove_existing and group_info[2] == info[3]:
groups.remove(g)
elif names_only:
group_names.add(group_info[0])
if names_only:
return group_names
return groups
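    # Illustrative: for groups='wheel, docker' on a user whose primary group
    # is wheel, get_groups_set() returns {'docker'} -- the primary group is
    # dropped unless remove_existing=False is passed.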
def user_group_membership(self, exclude_primary=True):
''' Return a list of groups the user belongs to '''
groups = []
info = self.get_pwd_info()
for group in grp.getgrall():
if self.name in group.gr_mem:
# Exclude the user's primary group by default
if not exclude_primary:
groups.append(group[0])
else:
if info[3] != group.gr_gid:
groups.append(group[0])
return groups
def user_exists(self):
# The pwd module does not distinguish between local and directory accounts.
        # Its output cannot be used to determine whether or not an account exists locally.
# It returns True if the account exists locally or in the directory, so instead
# look in the local PASSWORD file for an existing account.
if self.local:
if not os.path.exists(self.PASSWORDFILE):
self.module.fail_json(msg="'local: true' specified but unable to find local account file {0} to parse.".format(self.PASSWORDFILE))
exists = False
name_test = '{0}:'.format(self.name)
with open(self.PASSWORDFILE, 'rb') as f:
reversed_lines = f.readlines()[::-1]
for line in reversed_lines:
if line.startswith(to_bytes(name_test)):
exists = True
break
if not exists:
self.module.warn(
"'local: true' specified and user '{name}' was not found in {file}. "
"The local user account may already exist if the local account database exists "
"somewhere other than {file}.".format(file=self.PASSWORDFILE, name=self.name))
return exists
else:
try:
if pwd.getpwnam(self.name):
return True
except KeyError:
return False
def get_pwd_info(self):
if not self.user_exists():
return False
return list(pwd.getpwnam(self.name))
def user_info(self):
if not self.user_exists():
return False
info = self.get_pwd_info()
if len(info[1]) == 1 or len(info[1]) == 0:
info[1] = self.user_password()[0]
return info
def set_password_expire(self):
min_needs_change = self.password_expire_min is not None
max_needs_change = self.password_expire_max is not None
warn_needs_change = self.password_expire_warn is not None
if HAVE_SPWD:
try:
shadow_info = getspnam(to_bytes(self.name))
except ValueError:
return None, '', ''
min_needs_change &= self.password_expire_min != shadow_info.sp_min
max_needs_change &= self.password_expire_max != shadow_info.sp_max
warn_needs_change &= self.password_expire_warn != shadow_info.sp_warn
if not (min_needs_change or max_needs_change or warn_needs_change):
return (None, '', '') # target state already reached
command_name = 'chage'
cmd = [self.module.get_bin_path(command_name, True)]
if min_needs_change:
cmd.extend(["-m", self.password_expire_min])
if max_needs_change:
cmd.extend(["-M", self.password_expire_max])
if warn_needs_change:
cmd.extend(["-W", self.password_expire_warn])
cmd.append(self.name)
return self.execute_command(cmd)
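    # Illustrative: with password_expire_min=5 and password_expire_max=10 the
    # resulting call is roughly `chage -m 5 -M 10 <name>`; fields already at
    # the requested value are skipped via the shadow-entry comparison above.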
def user_password(self):
passwd = ''
expires = ''
if HAVE_SPWD:
try:
shadow_info = getspnam(to_bytes(self.name))
passwd = to_native(shadow_info.sp_pwdp)
expires = shadow_info.sp_expire
return passwd, expires
except ValueError:
return passwd, expires
if not self.user_exists():
return passwd, expires
elif self.SHADOWFILE:
passwd, expires = self.parse_shadow_file()
return passwd, expires
def parse_shadow_file(self):
passwd = ''
expires = ''
if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK):
with open(self.SHADOWFILE, 'r') as f:
for line in f:
if line.startswith('%s:' % self.name):
passwd = line.split(':')[1]
expires = line.split(':')[self.SHADOWFILE_EXPIRE_INDEX] or -1
return passwd, expires
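    # For reference, an /etc/shadow entry is colon-separated:
    #
    #     name:hash:lastchg:min:max:warn:inactive:expire:flag
    #
    # so field 1 is the password hash and field SHADOWFILE_EXPIRE_INDEX (7)
    # is the account expiration in days since the Epoch.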
def get_ssh_key_path(self):
info = self.user_info()
if os.path.isabs(self.ssh_file):
ssh_key_file = self.ssh_file
else:
if not os.path.exists(info[5]) and not self.module.check_mode:
raise Exception('User %s home directory does not exist' % self.name)
ssh_key_file = os.path.join(info[5], self.ssh_file)
return ssh_key_file
def ssh_key_gen(self):
info = self.user_info()
overwrite = None
try:
ssh_key_file = self.get_ssh_key_path()
except Exception as e:
return (1, '', to_native(e))
ssh_dir = os.path.dirname(ssh_key_file)
if not os.path.exists(ssh_dir):
if self.module.check_mode:
return (0, '', '')
try:
os.mkdir(ssh_dir, int('0700', 8))
os.chown(ssh_dir, info[2], info[3])
except OSError as e:
return (1, '', 'Failed to create %s: %s' % (ssh_dir, to_native(e)))
if os.path.exists(ssh_key_file):
if self.force:
# ssh-keygen doesn't support overwriting the key interactively, so send 'y' to confirm
overwrite = 'y'
else:
return (None, 'Key already exists, use "force: yes" to overwrite', '')
cmd = [self.module.get_bin_path('ssh-keygen', True)]
cmd.append('-t')
cmd.append(self.ssh_type)
if self.ssh_bits > 0:
cmd.append('-b')
cmd.append(self.ssh_bits)
cmd.append('-C')
cmd.append(self.ssh_comment)
cmd.append('-f')
cmd.append(ssh_key_file)
if self.ssh_passphrase is not None:
if self.module.check_mode:
self.module.debug('In check mode, would have run: "%s"' % cmd)
return (0, '', '')
master_in_fd, slave_in_fd = pty.openpty()
master_out_fd, slave_out_fd = pty.openpty()
master_err_fd, slave_err_fd = pty.openpty()
env = os.environ.copy()
env['LC_ALL'] = get_best_parsable_locale(self.module)
try:
p = subprocess.Popen([to_bytes(c) for c in cmd],
stdin=slave_in_fd,
stdout=slave_out_fd,
stderr=slave_err_fd,
preexec_fn=os.setsid,
env=env)
out_buffer = b''
err_buffer = b''
while p.poll() is None:
r_list = select.select([master_out_fd, master_err_fd], [], [], 1)[0]
first_prompt = b'Enter passphrase (empty for no passphrase):'
second_prompt = b'Enter same passphrase again'
prompt = first_prompt
for fd in r_list:
if fd == master_out_fd:
chunk = os.read(master_out_fd, 10240)
out_buffer += chunk
if prompt in out_buffer:
os.write(master_in_fd, to_bytes(self.ssh_passphrase, errors='strict') + b'\r')
prompt = second_prompt
else:
chunk = os.read(master_err_fd, 10240)
err_buffer += chunk
if prompt in err_buffer:
os.write(master_in_fd, to_bytes(self.ssh_passphrase, errors='strict') + b'\r')
prompt = second_prompt
if b'Overwrite (y/n)?' in out_buffer or b'Overwrite (y/n)?' in err_buffer:
# The key was created between us checking for existence and now
return (None, 'Key already exists', '')
rc = p.returncode
out = to_native(out_buffer)
err = to_native(err_buffer)
except OSError as e:
return (1, '', to_native(e))
else:
cmd.append('-N')
cmd.append('')
(rc, out, err) = self.execute_command(cmd, data=overwrite)
if rc == 0 and not self.module.check_mode:
# If the keys were successfully created, we should be able
# to tweak ownership.
os.chown(ssh_key_file, info[2], info[3])
os.chown('%s.pub' % ssh_key_file, info[2], info[3])
return (rc, out, err)
def ssh_key_fingerprint(self):
ssh_key_file = self.get_ssh_key_path()
if not os.path.exists(ssh_key_file):
return (1, 'SSH Key file %s does not exist' % ssh_key_file, '')
cmd = [self.module.get_bin_path('ssh-keygen', True)]
cmd.append('-l')
cmd.append('-f')
cmd.append(ssh_key_file)
return self.execute_command(cmd, obey_checkmode=False)
def get_ssh_public_key(self):
ssh_public_key_file = '%s.pub' % self.get_ssh_key_path()
try:
with open(ssh_public_key_file, 'r') as f:
ssh_public_key = f.read().strip()
except IOError:
return None
return ssh_public_key
def create_user(self):
# by default we use the create_user_useradd method
return self.create_user_useradd()
def remove_user(self):
# by default we use the remove_user_userdel method
return self.remove_user_userdel()
def modify_user(self):
# by default we use the modify_user_usermod method
return self.modify_user_usermod()
def create_homedir(self, path):
if not os.path.exists(path):
if self.skeleton is not None:
skeleton = self.skeleton
else:
skeleton = '/etc/skel'
if os.path.exists(skeleton) and skeleton != os.devnull:
try:
shutil.copytree(skeleton, path, symlinks=True)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
else:
try:
os.makedirs(path)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
# get umask from /etc/login.defs and set correct home mode
if os.path.exists(self.LOGIN_DEFS):
with open(self.LOGIN_DEFS, 'r') as f:
for line in f:
m = re.match(r'^UMASK\s+(\d+)$', line)
if m:
umask = int(m.group(1), 8)
mode = 0o777 & ~umask
try:
os.chmod(path, mode)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
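    # Illustrative: with "UMASK 022" in /etc/login.defs the loop above
    # computes mode = 0o777 & ~0o022 == 0o755 for the new home directory.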
def chown_homedir(self, uid, gid, path):
try:
os.chown(path, uid, gid)
for root, dirs, files in os.walk(path):
for d in dirs:
os.chown(os.path.join(root, d), uid, gid)
for f in files:
os.chown(os.path.join(root, f), uid, gid)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
# ===========================================
class FreeBsdUser(User):
"""
This is a FreeBSD User manipulation class - it uses the pw command
to manipulate the user database, followed by the chpass command
to change the password.
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'FreeBSD'
distribution = None
SHADOWFILE = '/etc/master.passwd'
SHADOWFILE_EXPIRE_INDEX = 6
DATE_FORMAT = '%d-%b-%Y'
def _handle_lock(self):
info = self.user_info()
if self.password_lock and not info[1].startswith('*LOCKED*'):
cmd = [
self.module.get_bin_path('pw', True),
'lock',
self.name
]
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
return self.execute_command(cmd)
elif self.password_lock is False and info[1].startswith('*LOCKED*'):
cmd = [
self.module.get_bin_path('pw', True),
'unlock',
self.name
]
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
return self.execute_command(cmd)
return (None, '', '')
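    # Background: `pw lock` prepends '*LOCKED*' to the password hash in
    # /etc/master.passwd, which is why the checks above test info[1] for
    # that prefix.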
def remove_user(self):
cmd = [
self.module.get_bin_path('pw', True),
'userdel',
'-n',
self.name
]
if self.remove:
cmd.append('-r')
return self.execute_command(cmd)
def create_user(self):
cmd = [
self.module.get_bin_path('pw', True),
'useradd',
'-n',
self.name,
]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.expires is not None:
cmd.append('-e')
if self.expires < time.gmtime(0):
cmd.append('0')
else:
cmd.append(str(calendar.timegm(self.expires)))
        # system cannot be handled currently - should we error if it's requested?
# create the user
(rc, out, err) = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# we have to set the password in a second command
if self.password is not None:
cmd = [
self.module.get_bin_path('chpass', True),
'-p',
self.password,
self.name
]
_rc, _out, _err = self.execute_command(cmd)
if rc is None:
rc = _rc
out += _out
err += _err
# we have to lock/unlock the password in a distinct command
_rc, _out, _err = self._handle_lock()
if rc is None:
rc = _rc
out += _out
err += _err
return (rc, out, err)
def modify_user(self):
cmd = [
self.module.get_bin_path('pw', True),
'usermod',
'-n',
self.name
]
cmd_len = len(cmd)
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
if (info[5] != self.home and self.move_home) or (not os.path.exists(self.home) and self.create_home):
cmd.append('-m')
if info[5] != self.home:
cmd.append('-d')
cmd.append(self.home)
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
# find current login class
user_login_class = None
if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK):
with open(self.SHADOWFILE, 'r') as f:
for line in f:
if line.startswith('%s:' % self.name):
user_login_class = line.split(':')[4]
# act only if login_class change
if self.login_class != user_login_class:
cmd.append('-L')
cmd.append(self.login_class)
if self.groups is not None:
current_groups = self.user_group_membership()
groups = self.get_groups_set(names_only=True)
group_diff = set(current_groups).symmetric_difference(groups)
groups_need_mod = False
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
new_groups = groups
if self.append:
new_groups = groups | set(current_groups)
cmd.append(','.join(new_groups))
if self.expires is not None:
current_expires = self.user_password()[1] or '0'
current_expires = int(current_expires)
# If expiration is negative or zero and the current expiration is greater than zero, disable expiration.
# In OpenBSD, setting expiration to zero disables expiration. It does not expire the account.
if self.expires <= time.gmtime(0):
if current_expires > 0:
cmd.append('-e')
cmd.append('0')
else:
                # Convert seconds since Epoch to struct_time
current_expire_date = time.gmtime(current_expires)
# Current expires is negative or we compare year, month, and day only
if current_expires <= 0 or current_expire_date[:3] != self.expires[:3]:
cmd.append('-e')
cmd.append(str(calendar.timegm(self.expires)))
(rc, out, err) = (None, '', '')
# modify the user if cmd will do anything
if cmd_len != len(cmd):
(rc, _out, _err) = self.execute_command(cmd)
out += _out
err += _err
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# we have to set the password in a second command
if self.update_password == 'always' and self.password is not None and info[1].lstrip('*LOCKED*') != self.password.lstrip('*LOCKED*'):
cmd = [
self.module.get_bin_path('chpass', True),
'-p',
self.password,
self.name
]
_rc, _out, _err = self.execute_command(cmd)
if rc is None:
rc = _rc
out += _out
err += _err
# we have to lock/unlock the password in a distinct command
_rc, _out, _err = self._handle_lock()
if rc is None:
rc = _rc
out += _out
err += _err
return (rc, out, err)
class DragonFlyBsdUser(FreeBsdUser):
"""
This is a DragonFlyBSD User manipulation class - it inherits the
FreeBsdUser class behaviors, such as using the pw command to
manipulate the user database, followed by the chpass command
to change the password.
"""
platform = 'DragonFly'
class OpenBSDUser(User):
"""
    This is an OpenBSD User manipulation class.
Main differences are that OpenBSD:-
- has no concept of "system" account.
- has no force delete user
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'OpenBSD'
distribution = None
SHADOWFILE = '/etc/master.passwd'
def create_user(self):
cmd = [self.module.get_bin_path('useradd', True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.password is not None and self.password != '*':
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
cmd.append(self.name)
return self.execute_command(cmd)
def remove_user_userdel(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def modify_user(self):
cmd = [self.module.get_bin_path('usermod', True)]
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups_option = '-S'
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set(names_only=True)
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_option = '-G'
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append(groups_option)
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
# find current login class
user_login_class = None
userinfo_cmd = [self.module.get_bin_path('userinfo', True), self.name]
(rc, out, err) = self.execute_command(userinfo_cmd, obey_checkmode=False)
for line in out.splitlines():
tokens = line.split()
if tokens[0] == 'class' and len(tokens) == 2:
user_login_class = tokens[1]
# act only if login_class change
if self.login_class != user_login_class:
cmd.append('-L')
cmd.append(self.login_class)
if self.password_lock and not info[1].startswith('*'):
cmd.append('-Z')
elif self.password_lock is False and info[1].startswith('*'):
cmd.append('-U')
if self.update_password == 'always' and self.password is not None \
and self.password != '*' and info[1] != self.password:
cmd.append('-p')
cmd.append(self.password)
# skip if no changes to be made
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
class NetBSDUser(User):
"""
This is a NetBSD User manipulation class.
Main differences are that NetBSD:-
- has no concept of "system" account.
- has no force delete user
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'NetBSD'
distribution = None
SHADOWFILE = '/etc/master.passwd'
def create_user(self):
cmd = [self.module.get_bin_path('useradd', True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
if len(groups) > 16:
self.module.fail_json(msg="Too many groups (%d) NetBSD allows for 16 max." % len(groups))
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.password is not None:
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
cmd.append(self.name)
return self.execute_command(cmd)
def remove_user_userdel(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def modify_user(self):
cmd = [self.module.get_bin_path('usermod', True)]
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set(names_only=True)
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups = set(current_groups).union(groups)
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
if len(groups) > 16:
self.module.fail_json(msg="Too many groups (%d) NetBSD allows for 16 max." % len(groups))
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd.append('-p')
cmd.append(self.password)
if self.password_lock and not info[1].startswith('*LOCKED*'):
cmd.append('-C yes')
elif self.password_lock is False and info[1].startswith('*LOCKED*'):
cmd.append('-C no')
# skip if no changes to be made
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
class SunOS(User):
"""
This is a SunOS User manipulation class - The main difference between
this class and the generic user class is that Solaris-type distros
don't support the concept of a "system" account and we need to
edit the /etc/shadow file manually to set a password. (Ugh)
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
- user_info()
"""
platform = 'SunOS'
distribution = None
SHADOWFILE = '/etc/shadow'
USER_ATTR = '/etc/user_attr'
def get_password_defaults(self):
# Read password aging defaults
try:
minweeks = ''
maxweeks = ''
warnweeks = ''
with open("/etc/default/passwd", 'r') as f:
for line in f:
line = line.strip()
if (line.startswith('#') or line == ''):
continue
m = re.match(r'^([^#]*)#(.*)$', line)
if m: # The line contains a hash / comment
line = m.group(1)
key, value = line.split('=')
if key == "MINWEEKS":
minweeks = value.rstrip('\n')
elif key == "MAXWEEKS":
maxweeks = value.rstrip('\n')
elif key == "WARNWEEKS":
warnweeks = value.rstrip('\n')
except Exception as err:
self.module.fail_json(msg="failed to read /etc/default/passwd: %s" % to_native(err))
return (minweeks, maxweeks, warnweeks)
def remove_user(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def create_user(self):
cmd = [self.module.get_bin_path('useradd', True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
if self.profile is not None:
cmd.append('-P')
cmd.append(self.profile)
if self.authorization is not None:
cmd.append('-A')
cmd.append(self.authorization)
if self.role is not None:
cmd.append('-R')
cmd.append(self.role)
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
if not self.module.check_mode:
# we have to set the password by editing the /etc/shadow file
if self.password is not None:
self.backup_shadow()
minweeks, maxweeks, warnweeks = self.get_password_defaults()
try:
lines = []
with open(self.SHADOWFILE, 'rb') as f:
for line in f:
line = to_native(line, errors='surrogate_or_strict')
fields = line.strip().split(':')
if not fields[0] == self.name:
lines.append(line)
continue
fields[1] = self.password
fields[2] = str(int(time.time() // 86400))
if minweeks:
try:
fields[3] = str(int(minweeks) * 7)
except ValueError:
# mirror solaris, which allows for any value in this field, and ignores anything that is not an int.
pass
if maxweeks:
try:
fields[4] = str(int(maxweeks) * 7)
except ValueError:
# mirror solaris, which allows for any value in this field, and ignores anything that is not an int.
pass
if warnweeks:
try:
fields[5] = str(int(warnweeks) * 7)
except ValueError:
# mirror solaris, which allows for any value in this field, and ignores anything that is not an int.
pass
line = ':'.join(fields)
lines.append('%s\n' % line)
with open(self.SHADOWFILE, 'w+') as f:
f.writelines(lines)
except Exception as err:
self.module.fail_json(msg="failed to update users password: %s" % to_native(err))
return (rc, out, err)
def modify_user_usermod(self):
cmd = [self.module.get_bin_path('usermod', True)]
cmd_len = len(cmd)
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups = self.get_groups_set(names_only=True)
group_diff = set(current_groups).symmetric_difference(groups)
groups_need_mod = False
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
new_groups = groups
if self.append:
new_groups.update(current_groups)
cmd.append(','.join(new_groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.profile is not None and info[7] != self.profile:
cmd.append('-P')
cmd.append(self.profile)
if self.authorization is not None and info[8] != self.authorization:
cmd.append('-A')
cmd.append(self.authorization)
if self.role is not None and info[9] != self.role:
cmd.append('-R')
cmd.append(self.role)
# modify the user if cmd will do anything
if cmd_len != len(cmd):
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
else:
(rc, out, err) = (None, '', '')
# we have to set the password by editing the /etc/shadow file
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
self.backup_shadow()
(rc, out, err) = (0, '', '')
if not self.module.check_mode:
minweeks, maxweeks, warnweeks = self.get_password_defaults()
try:
lines = []
with open(self.SHADOWFILE, 'rb') as f:
for line in f:
line = to_native(line, errors='surrogate_or_strict')
fields = line.strip().split(':')
if not fields[0] == self.name:
lines.append(line)
continue
fields[1] = self.password
fields[2] = str(int(time.time() // 86400))
if minweeks:
fields[3] = str(int(minweeks) * 7)
if maxweeks:
fields[4] = str(int(maxweeks) * 7)
if warnweeks:
fields[5] = str(int(warnweeks) * 7)
line = ':'.join(fields)
lines.append('%s\n' % line)
with open(self.SHADOWFILE, 'w+') as f:
f.writelines(lines)
rc = 0
except Exception as err:
self.module.fail_json(msg="failed to update users password: %s" % to_native(err))
return (rc, out, err)
def user_info(self):
info = super(SunOS, self).user_info()
if info:
info += self._user_attr_info()
return info
def _user_attr_info(self):
info = [''] * 3
with open(self.USER_ATTR, 'r') as file_handler:
for line in file_handler:
lines = line.strip().split('::::')
if lines[0] == self.name:
tmp = dict(x.split('=') for x in lines[1].split(';'))
info[0] = tmp.get('profiles', '')
info[1] = tmp.get('auths', '')
info[2] = tmp.get('roles', '')
return info
class DarwinUser(User):
"""
This is a Darwin macOS User manipulation class.
Main differences are that Darwin:-
- Handles accounts in a database managed by dscl(1)
- Has no useradd/groupadd
- Does not create home directories
- User password must be cleartext
- UID must be given
    - System users must be under 500
This overrides the following methods from the generic class:-
- user_exists()
- create_user()
- remove_user()
- modify_user()
"""
platform = 'Darwin'
distribution = None
SHADOWFILE = None
dscl_directory = '.'
fields = [
('comment', 'RealName'),
('home', 'NFSHomeDirectory'),
('shell', 'UserShell'),
('uid', 'UniqueID'),
('group', 'PrimaryGroupID'),
('hidden', 'IsHidden'),
]
def __init__(self, module):
super(DarwinUser, self).__init__(module)
        # make the user hidden if the option is set, or defer to the system option
if self.hidden is None:
if self.system:
self.hidden = 1
elif self.hidden:
self.hidden = 1
else:
self.hidden = 0
# add hidden to processing if set
if self.hidden is not None:
self.fields.append(('hidden', 'IsHidden'))
def _get_dscl(self):
return [self.module.get_bin_path('dscl', True), self.dscl_directory]
def _list_user_groups(self):
cmd = self._get_dscl()
cmd += ['-search', '/Groups', 'GroupMembership', self.name]
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
groups = []
for line in out.splitlines():
if line.startswith(' ') or line.startswith(')'):
continue
groups.append(line.split()[0])
return groups
def _get_user_property(self, property):
        '''Return user PROPERTY as given by dscl(1) read, or None if not found.'''
cmd = self._get_dscl()
cmd += ['-read', '/Users/%s' % self.name, property]
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
if rc != 0:
return None
# from dscl(1)
# if property contains embedded spaces, the list will instead be
# displayed one entry per line, starting on the line after the key.
lines = out.splitlines()
# sys.stderr.write('*** |%s| %s -> %s\n' % (property, out, lines))
if len(lines) == 1:
return lines[0].split(': ')[1]
if len(lines) > 2:
return '\n'.join([lines[1].strip()] + lines[2:])
if len(lines) == 2:
return lines[1].strip()
return None
def _get_next_uid(self, system=None):
'''
Return the next available uid. If system=True, then
        the uid should be below 500, if possible.
'''
cmd = self._get_dscl()
cmd += ['-list', '/Users', 'UniqueID']
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
if rc != 0:
self.module.fail_json(
msg="Unable to get the next available uid",
rc=rc,
out=out,
err=err
)
max_uid = 0
max_system_uid = 0
for line in out.splitlines():
current_uid = int(line.split(' ')[-1])
if max_uid < current_uid:
max_uid = current_uid
if max_system_uid < current_uid and current_uid < 500:
max_system_uid = current_uid
if system and (0 < max_system_uid < 499):
return max_system_uid + 1
return max_uid + 1
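    # Illustrative: with existing UniqueIDs {0, 201, 501, 1000} a system
    # account gets 202 (highest uid below 500, plus one) while a regular
    # account gets 1001.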
def _change_user_password(self):
        '''Change the password for SELF.NAME to SELF.PASSWORD.
Please note that password must be cleartext.
'''
# some documentation on how is stored passwords on OSX:
# http://blog.lostpassword.com/2012/07/cracking-mac-os-x-lion-accounts-passwords/
# http://null-byte.wonderhowto.com/how-to/hack-mac-os-x-lion-passwords-0130036/
# http://pastebin.com/RYqxi7Ca
# on OSX 10.8+ hash is SALTED-SHA512-PBKDF2
# https://pythonhosted.org/passlib/lib/passlib.hash.pbkdf2_digest.html
# https://gist.github.com/nueh/8252572
cmd = self._get_dscl()
if self.password:
cmd += ['-passwd', '/Users/%s' % self.name, self.password]
else:
cmd += ['-create', '/Users/%s' % self.name, 'Password', '*']
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Error when changing password', err=err, out=out, rc=rc)
return (rc, out, err)
def _make_group_numerical(self):
        '''Convert SELF.GROUP to its numerical value as a string, suitable for dscl.'''
if self.group is None:
self.group = 'nogroup'
try:
self.group = grp.getgrnam(self.group).gr_gid
except KeyError:
self.module.fail_json(msg='Group "%s" not found. Try to create it first using "group" module.' % self.group)
# We need to pass a string to dscl
self.group = str(self.group)
def __modify_group(self, group, action):
        '''Add SELF.NAME to GROUP or remove it from GROUP, depending on ACTION.
        ACTION can be 'add'; any other value means remove.'''
if action == 'add':
option = '-a'
else:
option = '-d'
cmd = ['dseditgroup', '-o', 'edit', option, self.name, '-t', 'user', group]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot %s user "%s" to group "%s".'
% (action, self.name, group), err=err, out=out, rc=rc)
return (rc, out, err)
def _modify_group(self):
        '''Synchronise SELF.NAME's group membership with SELF.GROUPS: add
        missing groups and, unless SELF.APPEND is set, remove surplus ones.
        Returns (rc, out, err, changed).'''
rc = 0
out = ''
err = ''
changed = False
current = set(self._list_user_groups())
if self.groups is not None:
target = self.get_groups_set(names_only=True)
else:
target = set([])
if self.append is False:
for remove in current - target:
(_rc, _out, _err) = self.__modify_group(remove, 'delete')
                rc += _rc
out += _out
err += _err
changed = True
for add in target - current:
(_rc, _out, _err) = self.__modify_group(add, 'add')
rc += _rc
out += _out
err += _err
changed = True
return (rc, out, err, changed)
def _update_system_user(self):
        '''Hide or show the user on the login window according to SELF.SYSTEM.
        Returns 0 if a change has been made, None otherwise.'''
plist_file = '/Library/Preferences/com.apple.loginwindow.plist'
# http://support.apple.com/kb/HT5017?viewlocale=en_US
cmd = ['defaults', 'read', plist_file, 'HiddenUsersList']
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
# returned value is
# (
# "_userA",
# "_UserB",
# userc
# )
hidden_users = []
for x in out.splitlines()[1:-1]:
try:
x = x.split('"')[1]
except IndexError:
x = x.strip()
hidden_users.append(x)
if self.system:
if self.name not in hidden_users:
cmd = ['defaults', 'write', plist_file, 'HiddenUsersList', '-array-add', self.name]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
                    self.module.fail_json(msg='Cannot add user "%s" to the hidden user list.' % self.name, err=err, out=out, rc=rc)
return 0
else:
if self.name in hidden_users:
del (hidden_users[hidden_users.index(self.name)])
cmd = ['defaults', 'write', plist_file, 'HiddenUsersList', '-array'] + hidden_users
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot remove user "%s" from hidden user list.' % self.name, err=err, out=out, rc=rc)
return 0
def user_exists(self):
        '''Check whether SELF.NAME is a known user on the system.'''
cmd = self._get_dscl()
cmd += ['-read', '/Users/%s' % self.name, 'UniqueID']
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
return rc == 0
def remove_user(self):
'''Delete SELF.NAME. If SELF.FORCE is true, remove its home directory.'''
info = self.user_info()
cmd = self._get_dscl()
cmd += ['-delete', '/Users/%s' % self.name]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot delete user "%s".' % self.name, err=err, out=out, rc=rc)
if self.force:
if os.path.exists(info[5]):
shutil.rmtree(info[5])
out += "Removed %s" % info[5]
return (rc, out, err)
def create_user(self, command_name='dscl'):
cmd = self._get_dscl()
cmd += ['-create', '/Users/%s' % self.name]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot create user "%s".' % self.name, err=err, out=out, rc=rc)
# Make the Gecos (alias display name) default to username
if self.comment is None:
self.comment = self.name
# Make user group default to 'staff'
if self.group is None:
self.group = 'staff'
self._make_group_numerical()
if self.uid is None:
self.uid = str(self._get_next_uid(self.system))
# Homedir is not created by default
if self.create_home:
if self.home is None:
self.home = '/Users/%s' % self.name
if not self.module.check_mode:
if not os.path.exists(self.home):
os.makedirs(self.home)
self.chown_homedir(int(self.uid), int(self.group), self.home)
# dscl sets shell to /usr/bin/false when UserShell is not specified
# so set the shell to /bin/bash when the user is not a system user
if not self.system and self.shell is None:
self.shell = '/bin/bash'
for field in self.fields:
if field[0] in self.__dict__ and self.__dict__[field[0]]:
cmd = self._get_dscl()
cmd += ['-create', '/Users/%s' % self.name, field[1], self.__dict__[field[0]]]
(rc, _out, _err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot add property "%s" to user "%s".' % (field[0], self.name), err=err, out=out, rc=rc)
out += _out
err += _err
if rc != 0:
return (rc, _out, _err)
(rc, _out, _err) = self._change_user_password()
out += _out
err += _err
self._update_system_user()
# here we don't care about change status since it is a creation,
# thus changed is always true.
if self.groups:
(rc, _out, _err, changed) = self._modify_group()
out += _out
err += _err
return (rc, out, err)
def modify_user(self):
changed = None
out = ''
err = ''
if self.group:
self._make_group_numerical()
for field in self.fields:
if field[0] in self.__dict__ and self.__dict__[field[0]]:
current = self._get_user_property(field[1])
if current is None or current != to_text(self.__dict__[field[0]]):
cmd = self._get_dscl()
cmd += ['-create', '/Users/%s' % self.name, field[1], self.__dict__[field[0]]]
(rc, _out, _err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(
msg='Cannot update property "%s" for user "%s".'
% (field[0], self.name), err=err, out=out, rc=rc)
changed = rc
out += _out
err += _err
if self.update_password == 'always' and self.password is not None:
(rc, _out, _err) = self._change_user_password()
out += _out
err += _err
changed = rc
if self.groups:
(rc, _out, _err, _changed) = self._modify_group()
out += _out
err += _err
if _changed is True:
changed = rc
rc = self._update_system_user()
if rc == 0:
changed = rc
return (changed, out, err)
class AIX(User):
"""
    This is an AIX User manipulation class.
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
- parse_shadow_file()
"""
platform = 'AIX'
distribution = None
SHADOWFILE = '/etc/security/passwd'
def remove_user(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def create_user_useradd(self, command_name='useradd'):
cmd = [self.module.get_bin_path(command_name, True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
# set password with chpasswd
if self.password is not None:
cmd = []
cmd.append(self.module.get_bin_path('chpasswd', True))
cmd.append('-e')
cmd.append('-c')
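        # chpasswd -e expects a pre-encrypted password; -c clears password flags so the
        # user is not forced to change the password at next login (AIX-specific behavior)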
self.execute_command(cmd, data="%s:%s" % (self.name, self.password))
return (rc, out, err)

    def modify_user_usermod(self):
cmd = [self.module.get_bin_path('usermod', True)]
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set(names_only=True)
group_diff = set(current_groups).symmetric_difference(groups)
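                # the symmetric difference holds groups to add plus groups to drop;
                # when appending, only missing requested groups force a modification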
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
# skip if no changes to be made
if len(cmd) == 1:
(rc, out, err) = (None, '', '')
else:
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
# set password with chpasswd
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd = []
cmd.append(self.module.get_bin_path('chpasswd', True))
cmd.append('-e')
cmd.append('-c')
(rc2, out2, err2) = self.execute_command(cmd, data="%s:%s" % (self.name, self.password))
else:
(rc2, out2, err2) = (None, '', '')
if rc is not None:
return (rc, out + out2, err + err2)
else:
return (rc2, out + out2, err + err2)

    def parse_shadow_file(self):
        """Example AIX shadowfile data:
        nobody:
                password = *
        operator1:
                password = {ssha512}06$xxxxxxxxxxxx....
                lastupdate = 1549558094
        test1:
                password = *
                lastupdate = 1553695126
        """
b_name = to_bytes(self.name)
b_passwd = b''
b_expires = b''
if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK):
with open(self.SHADOWFILE, 'rb') as bf:
b_lines = bf.readlines()
b_passwd_line = b''
b_expires_line = b''
try:
for index, b_line in enumerate(b_lines):
# Get password and lastupdate lines which come after the username
if b_line.startswith(b'%s:' % b_name):
b_passwd_line = b_lines[index + 1]
b_expires_line = b_lines[index + 2]
break
# Sanity check the lines because sometimes both are not present
if b' = ' in b_passwd_line:
b_passwd = b_passwd_line.split(b' = ', 1)[-1].strip()
if b' = ' in b_expires_line:
b_expires = b_expires_line.split(b' = ', 1)[-1].strip()
except IndexError:
self.module.fail_json(msg='Failed to parse shadow file %s' % self.SHADOWFILE)
passwd = to_native(b_passwd)
expires = to_native(b_expires) or -1
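        # a missing lastupdate entry leaves b_expires empty, so fall back to the -1 sentinel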
return passwd, expires


class HPUX(User):
    """
    This is an HP-UX User manipulation class.

    This overrides the following methods from the generic class:
      - create_user()
      - remove_user()
      - modify_user()
    """
platform = 'HP-UX'
distribution = None
SHADOWFILE = '/etc/shadow'

    def create_user(self):
cmd = ['/usr/sam/lbin/useradd.sam']
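        # the HP-UX SAM wrapper is invoked directly rather than resolving useradd from PATH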
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.password is not None:
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
cmd.append('-m')
else:
cmd.append('-M')
if self.system:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)

    def remove_user(self):
cmd = ['/usr/sam/lbin/userdel.sam']
if self.force:
cmd.append('-F')
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)

    def modify_user(self):
cmd = ['/usr/sam/lbin/usermod.sam']
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set(remove_existing=False, names_only=True)
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
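                # when appending, pass the union of current and requested groups, since
                # usermod -G replaces the whole supplementary group list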
new_groups = groups
if self.append:
new_groups = groups | set(current_groups)
cmd.append(','.join(new_groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
cmd.append('-d')
cmd.append(self.home)
if self.move_home:
cmd.append('-m')
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd.append('-F')
cmd.append('-p')
cmd.append(self.password)
# skip if no changes to be made
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)


class BusyBox(User):
"""
This is the BusyBox class for use on systems that have adduser, deluser,
and delgroup commands. It overrides the following methods:
- create_user()
- remove_user()
- modify_user()
"""

    def create_user(self):
cmd = [self.module.get_bin_path('adduser', True)]
cmd.append('-D')
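        # -D creates the account with a disabled password; one is set via chpasswd below if provided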
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg='Group {0} does not exist'.format(self.group))
cmd.append('-G')
cmd.append(self.group)
if self.comment is not None:
cmd.append('-g')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-h')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if not self.create_home:
cmd.append('-H')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
if self.system:
cmd.append('-S')
cmd.append(self.name)
rc, out, err = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
if self.password is not None:
cmd = [self.module.get_bin_path('chpasswd', True)]
cmd.append('--encrypted')
data = '{name}:{password}'.format(name=self.name, password=self.password)
rc, out, err = self.execute_command(cmd, data=data)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# Add to additional groups
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
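            # BusyBox lacks usermod; 'adduser <user> <group>' grants supplementary membership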
add_cmd_bin = self.module.get_bin_path('adduser', True)
for group in groups:
cmd = [add_cmd_bin, self.name, group]
rc, out, err = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
return rc, out, err

    def remove_user(self):
cmd = [
self.module.get_bin_path('deluser', True),
self.name
]
if self.remove:
cmd.append('--remove-home')
return self.execute_command(cmd)

    def modify_user(self):
current_groups = self.user_group_membership()
groups = []
rc = None
out = ''
err = ''
info = self.user_info()
add_cmd_bin = self.module.get_bin_path('adduser', True)
remove_cmd_bin = self.module.get_bin_path('delgroup', True)
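        # group membership is reconciled with BusyBox 'adduser <user> <group>' and
        # 'delgroup <user> <group>' calls, since there is no usermod applet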
# Manage group membership
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
for g in groups:
if g in group_diff:
add_cmd = [add_cmd_bin, self.name, g]
rc, out, err = self.execute_command(add_cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
for g in group_diff:
if g not in groups and not self.append:
remove_cmd = [remove_cmd_bin, self.name, g]
rc, out, err = self.execute_command(remove_cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# Manage password
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd = [self.module.get_bin_path('chpasswd', True)]
cmd.append('--encrypted')
data = '{name}:{password}'.format(name=self.name, password=self.password)
rc, out, err = self.execute_command(cmd, data=data)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
return rc, out, err


class Alpine(BusyBox):
"""
This is the Alpine User manipulation class. It inherits the BusyBox class
behaviors such as using adduser and deluser commands.
"""
platform = 'Linux'
distribution = 'Alpine'
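

# Illustrative playbook usage of this module (hypothetical values):
#
#   - name: Ensure alice exists with an extra supplementary group
#     ansible.builtin.user:
#       name: alice
#       groups: wheel
#       append: true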

def main():
ssh_defaults = dict(
bits=0,
type='rsa',
passphrase=None,
comment='ansible-generated on %s' % socket.gethostname()
)
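    # a bits value of 0 lets ssh-keygen choose its default key size for the selected type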
module = AnsibleModule(
argument_spec=dict(
state=dict(type='str', default='present', choices=['absent', 'present']),
name=dict(type='str', required=True, aliases=['user']),
uid=dict(type='int'),
non_unique=dict(type='bool', default=False),
group=dict(type='str'),
groups=dict(type='list', elements='str'),
comment=dict(type='str'),
home=dict(type='path'),
shell=dict(type='str'),
password=dict(type='str', no_log=True),
login_class=dict(type='str'),
password_expire_max=dict(type='int', no_log=False),
password_expire_min=dict(type='int', no_log=False),
password_expire_warn=dict(type='int', no_log=False),
# following options are specific to macOS
hidden=dict(type='bool'),
# following options are specific to selinux
seuser=dict(type='str'),
# following options are specific to userdel
force=dict(type='bool', default=False),
remove=dict(type='bool', default=False),
# following options are specific to useradd
create_home=dict(type='bool', default=True, aliases=['createhome']),
skeleton=dict(type='str'),
system=dict(type='bool', default=False),
# following options are specific to usermod
move_home=dict(type='bool', default=False),
append=dict(type='bool', default=False),
# following are specific to ssh key generation
generate_ssh_key=dict(type='bool'),
ssh_key_bits=dict(type='int', default=ssh_defaults['bits']),
ssh_key_type=dict(type='str', default=ssh_defaults['type']),
ssh_key_file=dict(type='path'),
ssh_key_comment=dict(type='str', default=ssh_defaults['comment']),
ssh_key_passphrase=dict(type='str', no_log=True),
update_password=dict(type='str', default='always', choices=['always', 'on_create'], no_log=False),
expires=dict(type='float'),
password_lock=dict(type='bool', no_log=False),
local=dict(type='bool'),
profile=dict(type='str'),
authorization=dict(type='str'),
role=dict(type='str'),
umask=dict(type='str'),
),
supports_check_mode=True,
)
user = User(module)
user.check_password_encrypted()
module.debug('User instantiated - platform %s' % user.platform)
if user.distribution:
module.debug('User instantiated - distribution %s' % user.distribution)
rc = None
out = ''
err = ''
result = {}
result['name'] = user.name
result['state'] = user.state
if user.state == 'absent':
if user.user_exists():
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = user.remove_user()
if rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
result['force'] = user.force
result['remove'] = user.remove
elif user.state == 'present':
if not user.user_exists():
if module.check_mode:
module.exit_json(changed=True)
# Check to see if the provided home path contains parent directories
# that do not exist.
path_needs_parents = False
if user.home and user.create_home:
parent = os.path.dirname(user.home)
if not os.path.isdir(parent):
path_needs_parents = True
(rc, out, err) = user.create_user()
# If the home path had parent directories that needed to be created,
# make sure file permissions are correct in the created home directory.
if path_needs_parents:
info = user.user_info()
if info is not False:
user.chown_homedir(info[2], info[3], user.home)
if module.check_mode:
result['system'] = user.name
else:
result['system'] = user.system
result['create_home'] = user.create_home
else:
# modify user (note: this function is check mode aware)
(rc, out, err) = user.modify_user()
result['append'] = user.append
result['move_home'] = user.move_home
if rc is not None and rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
if user.password is not None:
result['password'] = 'NOT_LOGGING_PASSWORD'
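    # rc is still None when no command was executed, i.e. nothing needed changing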
if rc is None:
result['changed'] = False
else:
result['changed'] = True
if out:
result['stdout'] = out
if err:
result['stderr'] = err
if user.user_exists() and user.state == 'present':
info = user.user_info()
        if info is False:
            module.fail_json(name=user.name, msg='failed to look up user name: %s' % user.name)
        result['uid'] = info[2]
        result['group'] = info[3]
        result['comment'] = info[4]
        result['home'] = info[5]
        result['shell'] = info[6]
        if user.groups is not None:
            result['groups'] = user.groups
# handle missing homedirs
info = user.user_info()
if user.home is None:
user.home = info[5]
if not os.path.exists(user.home) and user.create_home:
if not module.check_mode:
user.create_homedir(user.home)
user.chown_homedir(info[2], info[3], user.home)
result['changed'] = True
# deal with ssh key
if user.sshkeygen:
# generate ssh key (note: this function is check mode aware)
(rc, out, err) = user.ssh_key_gen()
if rc is not None and rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
if rc == 0:
result['changed'] = True
(rc, out, err) = user.ssh_key_fingerprint()
if rc == 0:
result['ssh_fingerprint'] = out.strip()
else:
result['ssh_fingerprint'] = err.strip()
result['ssh_key_file'] = user.get_ssh_key_path()
result['ssh_public_key'] = user.get_ssh_public_key()
(rc, out, err) = user.set_password_expire()
if rc is None:
pass # target state reached, nothing to do
else:
if rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
else:
result['changed'] = True
module.exit_json(**result)


if __name__ == '__main__':
    main()