status (stringclasses, 1 value) | repo_name (stringclasses, 31 values) | repo_url (stringclasses, 31 values) | issue_id (int64, 1–104k) | title (stringlengths, 4–233) | body (stringlengths, 0–186k, nullable ⌀) | issue_url (stringlengths, 38–56) | pull_url (stringlengths, 37–54) | before_fix_sha (stringlengths, 40–40) | after_fix_sha (stringlengths, 40–40) | report_datetime (timestamp[us, tz=UTC]) | language (stringclasses, 5 values) | commit_datetime (timestamp[us, tz=UTC]) | updated_file (stringlengths, 7–188) | chunk_content (stringlengths, 1–1.03M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
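For readability, one record of the dataset described by the header above can be summarized as a plain Python structure. This is an editorial sketch only; the field names and types are taken directly from the column list, not from an official schema file.

```python
from typing import TypedDict, Optional
from datetime import datetime

class BugFixChunkRow(TypedDict):
    """One row of the dataset (editorial sketch based on the header above)."""
    status: str                # e.g. "closed"
    repo_name: str             # e.g. "ansible/ansible"
    repo_url: str
    issue_id: int
    title: str
    body: Optional[str]        # issue body; may be empty or null ("⌀")
    issue_url: str
    pull_url: str
    before_fix_sha: str        # 40-character commit SHA before the fix
    after_fix_sha: str         # 40-character commit SHA after the fix
    report_datetime: datetime  # UTC timestamp of the issue report
    language: str              # e.g. "python"
    commit_datetime: datetime  # UTC timestamp of the fixing commit
    updated_file: str          # path of the file changed by the fix
    chunk_content: str         # code chunk taken from that file
```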
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,267 |
`ansible.builtin.user`: Removing an already absent local user fails or produces a huge warning
|
### Summary
When I try to ensure that users are removed from a system, the task succeeds the first time; on the next run it fails because it cannot remove the non-existent user from the `/etc/passwd` file.
This could potentially be solved by adding the argument `local: true` to the task, but that produces a huge warning message for every user that should be removed and no longer exists: https://github.com/ansible/ansible/blob/ad9867ca5eb8ba27f827d5d5a7999cfb96ae0986/lib/ansible/modules/user.py#L1055-L1059
So either this behaviour is buggy when using `local: false`, or the warning from `local: true` should be removed (or only printed when debug is enabled).
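A minimal sketch of the idempotent behaviour the report asks for: skip the `userdel` call when the account is already absent from the local `/etc/passwd`. This is an illustration only, not the actual change from the linked pull request, and the helper names are hypothetical.

```python
import subprocess

def local_user_exists(name, passwd_file='/etc/passwd'):
    """Check the local passwd file directly (illustrative helper, not part of the module)."""
    with open(passwd_file) as f:
        return any(line.split(':', 1)[0] == name for line in f)

def remove_local_user(name, remove_home=False):
    """Return (rc, out, err); succeed without change if the user is already gone."""
    if not local_user_exists(name):
        return 0, '', ''  # nothing to do: no failure, no warning
    cmd = ['userdel']
    if remove_home:
        cmd.append('-r')
    cmd.append(name)
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return proc.returncode, proc.stdout, proc.stderr
```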
### Issue Type
Bug Report
### Component Name
ansible.builtin.user
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.3]
config file = /home/skraetzig/Git/infrastructure/ansible.cfg
configured module search path = ['/home/skraetzig/Git/infrastructure/ansible/library']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
ansible collection location = /usr/share/ansible/third-party/collections
executable location = /usr/local/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_FORCE_COLOR(env: ANSIBLE_FORCE_COLOR) = True
ANSIBLE_NOCOWS(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
ANSIBLE_PIPELINING(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
ANY_ERRORS_FATAL(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
COLLECTIONS_PATHS(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/usr/share/ansible/third-party/collections']
CONFIG_FILE() = /home/skraetzig/Git/infrastructure/ansible.cfg
DEFAULT_FILTER_PLUGIN_PATH(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/home/skraetzig/Git/infrastructure/ansible/filter_plugins']
DEFAULT_FORKS(/home/skraetzig/Git/infrastructure/ansible.cfg) = 50
DEFAULT_LOCAL_TMP(env: ANSIBLE_LOCAL_TEMP) = /tmp/ansible-local-35zs1vlt9t
DEFAULT_MODULE_PATH(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/home/skraetzig/Git/infrastructure/ansible/library']
DEFAULT_REMOTE_USER(/home/skraetzig/Git/infrastructure/ansible.cfg) = deploy
DEFAULT_ROLES_PATH(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/home/skraetzig/Git/infrastructure/ansible/roles', '/home/skraetzig/Git/infrastructure/ansible/actions', '/hom>
DIFF_ALWAYS(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
DISPLAY_SKIPPED_HOSTS(env: ANSIBLE_DISPLAY_SKIPPED_HOSTS) = True
INTERPRETER_PYTHON(/home/skraetzig/Git/infrastructure/ansible.cfg) = /usr/bin/python3
MAX_FILE_SIZE_FOR_DIFF(/home/skraetzig/Git/infrastructure/ansible.cfg) = 1044480
RETRY_FILES_ENABLED(/home/skraetzig/Git/infrastructure/ansible.cfg) = False
CALLBACK:
========
default:
_______
display_ok_hosts(env: ANSIBLE_DISPLAY_OK_HOSTS) = True
display_skipped_hosts(env: ANSIBLE_DISPLAY_SKIPPED_HOSTS) = True
CONNECTION:
==========
local:
_____
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
paramiko_ssh:
____________
remote_user(/home/skraetzig/Git/infrastructure/ansible.cfg) = deploy
ssh_args(env: ANSIBLE_SSH_ARGS) = -C -o ControlMaster=auto -o ControlPersist=60s
psrp:
____
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
ssh:
___
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
remote_user(/home/skraetzig/Git/infrastructure/ansible.cfg) = deploy
ssh_args(env: ANSIBLE_SSH_ARGS) = -C -o ControlMaster=auto -o ControlPersist=60s
winrm:
_____
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
```
### OS / Environment
Debian 10 (Buster) and 11 (Bullseye)
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: remove users
ansible.builtin.user:
name: "{{ item }}"
state: absent
remove: true
with_items:
- user1
- user2
- user3
```
### Expected Results
The listed users `user1`, `user2`, and `user3` are successfully removed from the system if they exist; if they do not, the task should succeed without any warning.
### Actual Results
The first rollout works as expected: the users are removed successfully.
All subsequent rollouts then fail:
```console
TASK [users : remove users] ****************************************************
failed: [debian] (item=user1) => {"ansible_loop_var": "item", "changed": false, "item": "user1", "msg": "userdel: cannot remove entry 'user1' from /etc/passwd\n", "name": "user1", "rc": 1}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80267
|
https://github.com/ansible/ansible/pull/80291
|
d664f13b4a117b324f107b603e9b8e2bb9af50c5
|
e0bf76e3db3e007d039a0086276d35c28b90ff04
| 2023-03-21T20:34:12Z |
python
| 2023-11-23T14:25:35Z |
lib/ansible/modules/user.py
|
cmd.append(self.shell)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if self.password is not None:
cmd = []
cmd.append(self.module.get_bin_path('chpasswd', True))
cmd.append('-e')
cmd.append('-c')
self.execute_command(cmd, data="%s:%s" % (self.name, self.password))
return (rc, out, err)
def modify_user_usermod(self):
cmd = [self.module.get_bin_path('usermod', True)]
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,267 |
`ansible.builtin.user`: Removing an already absent local user fails or produces a huge warning
|
### Summary
When I try to ensure that users are removed from a system, the task succeeds the first time; on the next run it fails because it cannot remove the non-existent user from the `/etc/passwd` file.
This could potentially be solved by adding the argument `local: true` to the task, but that produces a huge warning message for every user that should be removed and no longer exists: https://github.com/ansible/ansible/blob/ad9867ca5eb8ba27f827d5d5a7999cfb96ae0986/lib/ansible/modules/user.py#L1055-L1059
So either this behaviour is buggy when using `local: false`, or the warning from `local: true` should be removed (or only printed when debug is enabled).
### Issue Type
Bug Report
### Component Name
ansible.builtin.user
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.3]
config file = /home/skraetzig/Git/infrastructure/ansible.cfg
configured module search path = ['/home/skraetzig/Git/infrastructure/ansible/library']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
ansible collection location = /usr/share/ansible/third-party/collections
executable location = /usr/local/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_FORCE_COLOR(env: ANSIBLE_FORCE_COLOR) = True
ANSIBLE_NOCOWS(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
ANSIBLE_PIPELINING(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
ANY_ERRORS_FATAL(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
COLLECTIONS_PATHS(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/usr/share/ansible/third-party/collections']
CONFIG_FILE() = /home/skraetzig/Git/infrastructure/ansible.cfg
DEFAULT_FILTER_PLUGIN_PATH(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/home/skraetzig/Git/infrastructure/ansible/filter_plugins']
DEFAULT_FORKS(/home/skraetzig/Git/infrastructure/ansible.cfg) = 50
DEFAULT_LOCAL_TMP(env: ANSIBLE_LOCAL_TEMP) = /tmp/ansible-local-35zs1vlt9t
DEFAULT_MODULE_PATH(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/home/skraetzig/Git/infrastructure/ansible/library']
DEFAULT_REMOTE_USER(/home/skraetzig/Git/infrastructure/ansible.cfg) = deploy
DEFAULT_ROLES_PATH(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/home/skraetzig/Git/infrastructure/ansible/roles', '/home/skraetzig/Git/infrastructure/ansible/actions', '/hom>
DIFF_ALWAYS(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
DISPLAY_SKIPPED_HOSTS(env: ANSIBLE_DISPLAY_SKIPPED_HOSTS) = True
INTERPRETER_PYTHON(/home/skraetzig/Git/infrastructure/ansible.cfg) = /usr/bin/python3
MAX_FILE_SIZE_FOR_DIFF(/home/skraetzig/Git/infrastructure/ansible.cfg) = 1044480
RETRY_FILES_ENABLED(/home/skraetzig/Git/infrastructure/ansible.cfg) = False
CALLBACK:
========
default:
_______
display_ok_hosts(env: ANSIBLE_DISPLAY_OK_HOSTS) = True
display_skipped_hosts(env: ANSIBLE_DISPLAY_SKIPPED_HOSTS) = True
CONNECTION:
==========
local:
_____
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
paramiko_ssh:
____________
remote_user(/home/skraetzig/Git/infrastructure/ansible.cfg) = deploy
ssh_args(env: ANSIBLE_SSH_ARGS) = -C -o ControlMaster=auto -o ControlPersist=60s
psrp:
____
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
ssh:
___
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
remote_user(/home/skraetzig/Git/infrastructure/ansible.cfg) = deploy
ssh_args(env: ANSIBLE_SSH_ARGS) = -C -o ControlMaster=auto -o ControlPersist=60s
winrm:
_____
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
```
### OS / Environment
Debian 10 (Buster) and 11 (Bullseye)
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: remove users
ansible.builtin.user:
name: "{{ item }}"
state: absent
remove: true
with_items:
- user1
- user2
- user3
```
### Expected Results
The listed users `user1`, `user2`, and `user3` are successfully removed from the system if they exist; if they do not, the task should succeed without any warning.
### Actual Results
The first rollout works as expected: the users are removed successfully.
All subsequent rollouts then fail:
```console
TASK [users : remove users] ****************************************************
failed: [debian] (item=user1) => {"ansible_loop_var": "item", "changed": false, "item": "user1", "msg": "userdel: cannot remove entry 'user1' from /etc/passwd\n", "name": "user1", "rc": 1}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80267
|
https://github.com/ansible/ansible/pull/80291
|
d664f13b4a117b324f107b603e9b8e2bb9af50c5
|
e0bf76e3db3e007d039a0086276d35c28b90ff04
| 2023-03-21T20:34:12Z |
python
| 2023-11-23T14:25:35Z |
lib/ansible/modules/user.py
|
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set(names_only=True)
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
|
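The chunk above shows how the usermod-based modify path decides whether the `-G` option is needed at all: it compares current and desired groups with a symmetric difference, and with `append` it only reacts to desired groups that are missing. A standalone sketch of that check, extracted here for illustration and not part of the module:

```python
def groups_need_update(current_groups, desired_groups, append=False):
    """Mirror of the decision logic in the chunk above: return True when
    `usermod -G` would actually change the user's group membership."""
    group_diff = set(current_groups).symmetric_difference(set(desired_groups))
    if not group_diff:
        return False
    if append:
        # With append, only missing desired groups force a modification;
        # extra current groups are left alone.
        return any(g in group_diff for g in desired_groups)
    return True

# Example: user is in ('adm', 'docker'), we want ('adm', 'sudo') appended -> True
assert groups_need_update(('adm', 'docker'), ('adm', 'sudo'), append=True)
```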
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,267 |
`ansible.builtin.user`: Removing an already absent local user fails or produces a huge warning
|
### Summary
When I try to ensure that users are removed from a system, the task succeeds the first time; on the next run it fails because it cannot remove the non-existent user from the `/etc/passwd` file.
This could potentially be solved by adding the argument `local: true` to the task, but that produces a huge warning message for every user that should be removed and no longer exists: https://github.com/ansible/ansible/blob/ad9867ca5eb8ba27f827d5d5a7999cfb96ae0986/lib/ansible/modules/user.py#L1055-L1059
So either this behaviour is buggy when using `local: false`, or the warning from `local: true` should be removed (or only printed when debug is enabled).
### Issue Type
Bug Report
### Component Name
ansible.builtin.user
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.3]
config file = /home/skraetzig/Git/infrastructure/ansible.cfg
configured module search path = ['/home/skraetzig/Git/infrastructure/ansible/library']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
ansible collection location = /usr/share/ansible/third-party/collections
executable location = /usr/local/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_FORCE_COLOR(env: ANSIBLE_FORCE_COLOR) = True
ANSIBLE_NOCOWS(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
ANSIBLE_PIPELINING(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
ANY_ERRORS_FATAL(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
COLLECTIONS_PATHS(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/usr/share/ansible/third-party/collections']
CONFIG_FILE() = /home/skraetzig/Git/infrastructure/ansible.cfg
DEFAULT_FILTER_PLUGIN_PATH(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/home/skraetzig/Git/infrastructure/ansible/filter_plugins']
DEFAULT_FORKS(/home/skraetzig/Git/infrastructure/ansible.cfg) = 50
DEFAULT_LOCAL_TMP(env: ANSIBLE_LOCAL_TEMP) = /tmp/ansible-local-35zs1vlt9t
DEFAULT_MODULE_PATH(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/home/skraetzig/Git/infrastructure/ansible/library']
DEFAULT_REMOTE_USER(/home/skraetzig/Git/infrastructure/ansible.cfg) = deploy
DEFAULT_ROLES_PATH(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/home/skraetzig/Git/infrastructure/ansible/roles', '/home/skraetzig/Git/infrastructure/ansible/actions', '/hom>
DIFF_ALWAYS(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
DISPLAY_SKIPPED_HOSTS(env: ANSIBLE_DISPLAY_SKIPPED_HOSTS) = True
INTERPRETER_PYTHON(/home/skraetzig/Git/infrastructure/ansible.cfg) = /usr/bin/python3
MAX_FILE_SIZE_FOR_DIFF(/home/skraetzig/Git/infrastructure/ansible.cfg) = 1044480
RETRY_FILES_ENABLED(/home/skraetzig/Git/infrastructure/ansible.cfg) = False
CALLBACK:
========
default:
_______
display_ok_hosts(env: ANSIBLE_DISPLAY_OK_HOSTS) = True
display_skipped_hosts(env: ANSIBLE_DISPLAY_SKIPPED_HOSTS) = True
CONNECTION:
==========
local:
_____
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
paramiko_ssh:
____________
remote_user(/home/skraetzig/Git/infrastructure/ansible.cfg) = deploy
ssh_args(env: ANSIBLE_SSH_ARGS) = -C -o ControlMaster=auto -o ControlPersist=60s
psrp:
____
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
ssh:
___
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
remote_user(/home/skraetzig/Git/infrastructure/ansible.cfg) = deploy
ssh_args(env: ANSIBLE_SSH_ARGS) = -C -o ControlMaster=auto -o ControlPersist=60s
winrm:
_____
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
```
### OS / Environment
Debian 10 (Buster) and 11 (Bullseye)
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: remove users
ansible.builtin.user:
name: "{{ item }}"
state: absent
remove: true
with_items:
- user1
- user2
- user3
```
### Expected Results
The listed users `user1`, `user2`, and `user3` are successfully removed from the system if they exist; if they do not, the task should succeed without any warning.
### Actual Results
The first rollout works as expected: the users are removed successfully.
All subsequent rollouts then fail:
```console
TASK [users : remove users] ****************************************************
failed: [debian] (item=user1) => {"ansible_loop_var": "item", "changed": false, "item": "user1", "msg": "userdel: cannot remove entry 'user1' from /etc/passwd\n", "name": "user1", "rc": 1}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80267
|
https://github.com/ansible/ansible/pull/80291
|
d664f13b4a117b324f107b603e9b8e2bb9af50c5
|
e0bf76e3db3e007d039a0086276d35c28b90ff04
| 2023-03-21T20:34:12Z |
python
| 2023-11-23T14:25:35Z |
lib/ansible/modules/user.py
|
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if len(cmd) == 1:
(rc, out, err) = (None, '', '')
else:
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd = []
cmd.append(self.module.get_bin_path('chpasswd', True))
cmd.append('-e')
cmd.append('-c')
(rc2, out2, err2) = self.execute_command(cmd, data="%s:%s" % (self.name, self.password))
else:
(rc2, out2, err2) = (None, '', '')
if rc is not None:
return (rc, out + out2, err + err2)
else:
return (rc2, out + out2, err + err2)
def parse_shadow_file(self):
"""Example AIX shadowfile data:
nobody:
password = *
operator1:
password = {ssha512}06$xxxxxxxxxxxx....
lastupdate = 1549558094
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,267 |
`ansible.builtin.user`: Removing an already absent local user fails or produces a huge warning
|
### Summary
When I try to ensure that users are removed from a system, the task succeeds the first time; on the next run it fails because it cannot remove the non-existent user from the `/etc/passwd` file.
This could potentially be solved by adding the argument `local: true` to the task, but that produces a huge warning message for every user that should be removed and no longer exists: https://github.com/ansible/ansible/blob/ad9867ca5eb8ba27f827d5d5a7999cfb96ae0986/lib/ansible/modules/user.py#L1055-L1059
So either this behaviour is buggy when using `local: false`, or the warning from `local: true` should be removed (or only printed when debug is enabled).
### Issue Type
Bug Report
### Component Name
ansible.builtin.user
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.3]
config file = /home/skraetzig/Git/infrastructure/ansible.cfg
configured module search path = ['/home/skraetzig/Git/infrastructure/ansible/library']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
ansible collection location = /usr/share/ansible/third-party/collections
executable location = /usr/local/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_FORCE_COLOR(env: ANSIBLE_FORCE_COLOR) = True
ANSIBLE_NOCOWS(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
ANSIBLE_PIPELINING(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
ANY_ERRORS_FATAL(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
COLLECTIONS_PATHS(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/usr/share/ansible/third-party/collections']
CONFIG_FILE() = /home/skraetzig/Git/infrastructure/ansible.cfg
DEFAULT_FILTER_PLUGIN_PATH(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/home/skraetzig/Git/infrastructure/ansible/filter_plugins']
DEFAULT_FORKS(/home/skraetzig/Git/infrastructure/ansible.cfg) = 50
DEFAULT_LOCAL_TMP(env: ANSIBLE_LOCAL_TEMP) = /tmp/ansible-local-35zs1vlt9t
DEFAULT_MODULE_PATH(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/home/skraetzig/Git/infrastructure/ansible/library']
DEFAULT_REMOTE_USER(/home/skraetzig/Git/infrastructure/ansible.cfg) = deploy
DEFAULT_ROLES_PATH(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/home/skraetzig/Git/infrastructure/ansible/roles', '/home/skraetzig/Git/infrastructure/ansible/actions', '/hom>
DIFF_ALWAYS(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
DISPLAY_SKIPPED_HOSTS(env: ANSIBLE_DISPLAY_SKIPPED_HOSTS) = True
INTERPRETER_PYTHON(/home/skraetzig/Git/infrastructure/ansible.cfg) = /usr/bin/python3
MAX_FILE_SIZE_FOR_DIFF(/home/skraetzig/Git/infrastructure/ansible.cfg) = 1044480
RETRY_FILES_ENABLED(/home/skraetzig/Git/infrastructure/ansible.cfg) = False
CALLBACK:
========
default:
_______
display_ok_hosts(env: ANSIBLE_DISPLAY_OK_HOSTS) = True
display_skipped_hosts(env: ANSIBLE_DISPLAY_SKIPPED_HOSTS) = True
CONNECTION:
==========
local:
_____
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
paramiko_ssh:
____________
remote_user(/home/skraetzig/Git/infrastructure/ansible.cfg) = deploy
ssh_args(env: ANSIBLE_SSH_ARGS) = -C -o ControlMaster=auto -o ControlPersist=60s
psrp:
____
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
ssh:
___
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
remote_user(/home/skraetzig/Git/infrastructure/ansible.cfg) = deploy
ssh_args(env: ANSIBLE_SSH_ARGS) = -C -o ControlMaster=auto -o ControlPersist=60s
winrm:
_____
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
```
### OS / Environment
Debian 10 (Buster) and 11 (Bullseye)
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: remove users
ansible.builtin.user:
name: "{{ item }}"
state: absent
remove: true
with_items:
- user1
- user2
- user3
```
### Expected Results
The listed users `user1`, `user2`, and `user3` are successfully removed from the system if they exist; if they do not, the task should succeed without any warning.
### Actual Results
The first rollout works as expected: the users are removed successfully.
All subsequent rollouts then fail:
```console
TASK [users : remove users] ****************************************************
failed: [debian] (item=user1) => {"ansible_loop_var": "item", "changed": false, "item": "user1", "msg": "userdel: cannot remove entry 'user1' from /etc/passwd\n", "name": "user1", "rc": 1}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80267
|
https://github.com/ansible/ansible/pull/80291
|
d664f13b4a117b324f107b603e9b8e2bb9af50c5
|
e0bf76e3db3e007d039a0086276d35c28b90ff04
| 2023-03-21T20:34:12Z |
python
| 2023-11-23T14:25:35Z |
lib/ansible/modules/user.py
|
test1:
password = *
lastupdate = 1553695126
"""
b_name = to_bytes(self.name)
b_passwd = b''
b_expires = b''
if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK):
with open(self.SHADOWFILE, 'rb') as bf:
b_lines = bf.readlines()
b_passwd_line = b''
b_expires_line = b''
try:
for index, b_line in enumerate(b_lines):
if b_line.startswith(b'%s:' % b_name):
b_passwd_line = b_lines[index + 1]
b_expires_line = b_lines[index + 2]
break
if b' = ' in b_passwd_line:
b_passwd = b_passwd_line.split(b' = ', 1)[-1].strip()
if b' = ' in b_expires_line:
b_expires = b_expires_line.split(b' = ', 1)[-1].strip()
except IndexError:
self.module.fail_json(msg='Failed to parse shadow file %s' % self.SHADOWFILE)
passwd = to_native(b_passwd)
expires = to_native(b_expires) or -1
return passwd, expires
class HPUX(User):
|
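The parsing in the chunk above walks the AIX-style stanza format shown in its docstring: find the line `<name>:`, then read the two indented `key = value` lines that follow it. A self-contained sketch of that lookup, for illustration only:

```python
def parse_aix_shadow_stanza(lines, name):
    """Extract (password, lastupdate) for one user from AIX-style shadow stanzas,
    using the same index-based lookup as the chunk above."""
    passwd, expires = '', ''
    for index, line in enumerate(lines):
        if line.startswith('%s:' % name):
            passwd_line = lines[index + 1]
            expires_line = lines[index + 2]
            if ' = ' in passwd_line:
                passwd = passwd_line.split(' = ', 1)[-1].strip()
            if ' = ' in expires_line:
                expires = expires_line.split(' = ', 1)[-1].strip()
            break
    return passwd, expires or -1

lines = ['test1:', '        password = *', '        lastupdate = 1553695126']
print(parse_aix_shadow_stanza(lines, 'test1'))  # ('*', '1553695126')
```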
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,267 |
`ansible.builtin.user`: Removing an already absent local user fails or produces a huge warning
|
### Summary
When I try to ensure that users are removed from a system, the task succeeds the first time; on the next run it fails because it cannot remove the non-existent user from the `/etc/passwd` file.
This could potentially be solved by adding the argument `local: true` to the task, but that produces a huge warning message for every user that should be removed and no longer exists: https://github.com/ansible/ansible/blob/ad9867ca5eb8ba27f827d5d5a7999cfb96ae0986/lib/ansible/modules/user.py#L1055-L1059
So either this behaviour is buggy when using `local: false`, or the warning from `local: true` should be removed (or only printed when debug is enabled).
### Issue Type
Bug Report
### Component Name
ansible.builtin.user
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.3]
config file = /home/skraetzig/Git/infrastructure/ansible.cfg
configured module search path = ['/home/skraetzig/Git/infrastructure/ansible/library']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
ansible collection location = /usr/share/ansible/third-party/collections
executable location = /usr/local/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_FORCE_COLOR(env: ANSIBLE_FORCE_COLOR) = True
ANSIBLE_NOCOWS(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
ANSIBLE_PIPELINING(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
ANY_ERRORS_FATAL(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
COLLECTIONS_PATHS(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/usr/share/ansible/third-party/collections']
CONFIG_FILE() = /home/skraetzig/Git/infrastructure/ansible.cfg
DEFAULT_FILTER_PLUGIN_PATH(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/home/skraetzig/Git/infrastructure/ansible/filter_plugins']
DEFAULT_FORKS(/home/skraetzig/Git/infrastructure/ansible.cfg) = 50
DEFAULT_LOCAL_TMP(env: ANSIBLE_LOCAL_TEMP) = /tmp/ansible-local-35zs1vlt9t
DEFAULT_MODULE_PATH(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/home/skraetzig/Git/infrastructure/ansible/library']
DEFAULT_REMOTE_USER(/home/skraetzig/Git/infrastructure/ansible.cfg) = deploy
DEFAULT_ROLES_PATH(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/home/skraetzig/Git/infrastructure/ansible/roles', '/home/skraetzig/Git/infrastructure/ansible/actions', '/hom>
DIFF_ALWAYS(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
DISPLAY_SKIPPED_HOSTS(env: ANSIBLE_DISPLAY_SKIPPED_HOSTS) = True
INTERPRETER_PYTHON(/home/skraetzig/Git/infrastructure/ansible.cfg) = /usr/bin/python3
MAX_FILE_SIZE_FOR_DIFF(/home/skraetzig/Git/infrastructure/ansible.cfg) = 1044480
RETRY_FILES_ENABLED(/home/skraetzig/Git/infrastructure/ansible.cfg) = False
CALLBACK:
========
default:
_______
display_ok_hosts(env: ANSIBLE_DISPLAY_OK_HOSTS) = True
display_skipped_hosts(env: ANSIBLE_DISPLAY_SKIPPED_HOSTS) = True
CONNECTION:
==========
local:
_____
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
paramiko_ssh:
____________
remote_user(/home/skraetzig/Git/infrastructure/ansible.cfg) = deploy
ssh_args(env: ANSIBLE_SSH_ARGS) = -C -o ControlMaster=auto -o ControlPersist=60s
psrp:
____
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
ssh:
___
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
remote_user(/home/skraetzig/Git/infrastructure/ansible.cfg) = deploy
ssh_args(env: ANSIBLE_SSH_ARGS) = -C -o ControlMaster=auto -o ControlPersist=60s
winrm:
_____
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
```
### OS / Environment
Debian 10 (Buster) and 11 (Bullseye)
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: remove users
ansible.builtin.user:
name: "{{ item }}"
state: absent
remove: true
with_items:
- user1
- user2
- user3
```
### Expected Results
The listed users `user1`, `user2`, and `user3` are successfully removed from the system if they exist; if they do not, the task should succeed without any warning.
### Actual Results
The first rollout works as expected: the users are removed successfully.
All subsequent rollouts then fail:
```console
TASK [users : remove users] ****************************************************
failed: [debian] (item=user1) => {"ansible_loop_var": "item", "changed": false, "item": "user1", "msg": "userdel: cannot remove entry 'user1' from /etc/passwd\n", "name": "user1", "rc": 1}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80267
|
https://github.com/ansible/ansible/pull/80291
|
d664f13b4a117b324f107b603e9b8e2bb9af50c5
|
e0bf76e3db3e007d039a0086276d35c28b90ff04
| 2023-03-21T20:34:12Z |
python
| 2023-11-23T14:25:35Z |
lib/ansible/modules/user.py
|
"""
This is a HP-UX User manipulation class.
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'HP-UX'
distribution = None
SHADOWFILE = '/etc/shadow'
def create_user(self):
cmd = ['/usr/sam/lbin/useradd.sam']
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
cmd.append('-G')
|
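Platform-specific classes such as `HPUX` in the chunk above declare `platform` and `distribution` attributes. A common pattern for this kind of layering, and roughly what such modules rely on although the exact dispatch helper is not shown in these chunks, is to pick the subclass whose attributes match the running system. An illustrative sketch with hypothetical names:

```python
import platform

def pick_user_class(base_cls):
    """Hypothetical dispatch: return the subclass whose `platform` attribute
    matches the current system, falling back to the generic base class."""
    system = platform.system()  # e.g. 'HP-UX', 'Linux'
    for subclass in base_cls.__subclasses__():
        if getattr(subclass, 'platform', None) == system:
            return subclass
    return base_cls
```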
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,267 |
`ansible.builtin.user`: Removing an already absent local user fails or produces a huge warning
|
### Summary
When I try to ensure that users are removed from a system, the task succeeds the first time; on the next run it fails because it cannot remove the non-existent user from the `/etc/passwd` file.
This could potentially be solved by adding the argument `local: true` to the task, but that produces a huge warning message for every user that should be removed and no longer exists: https://github.com/ansible/ansible/blob/ad9867ca5eb8ba27f827d5d5a7999cfb96ae0986/lib/ansible/modules/user.py#L1055-L1059
So either this behaviour is buggy when using `local: false`, or the warning from `local: true` should be removed (or only printed when debug is enabled).
### Issue Type
Bug Report
### Component Name
ansible.builtin.user
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.3]
config file = /home/skraetzig/Git/infrastructure/ansible.cfg
configured module search path = ['/home/skraetzig/Git/infrastructure/ansible/library']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
ansible collection location = /usr/share/ansible/third-party/collections
executable location = /usr/local/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_FORCE_COLOR(env: ANSIBLE_FORCE_COLOR) = True
ANSIBLE_NOCOWS(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
ANSIBLE_PIPELINING(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
ANY_ERRORS_FATAL(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
COLLECTIONS_PATHS(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/usr/share/ansible/third-party/collections']
CONFIG_FILE() = /home/skraetzig/Git/infrastructure/ansible.cfg
DEFAULT_FILTER_PLUGIN_PATH(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/home/skraetzig/Git/infrastructure/ansible/filter_plugins']
DEFAULT_FORKS(/home/skraetzig/Git/infrastructure/ansible.cfg) = 50
DEFAULT_LOCAL_TMP(env: ANSIBLE_LOCAL_TEMP) = /tmp/ansible-local-35zs1vlt9t
DEFAULT_MODULE_PATH(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/home/skraetzig/Git/infrastructure/ansible/library']
DEFAULT_REMOTE_USER(/home/skraetzig/Git/infrastructure/ansible.cfg) = deploy
DEFAULT_ROLES_PATH(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/home/skraetzig/Git/infrastructure/ansible/roles', '/home/skraetzig/Git/infrastructure/ansible/actions', '/hom>
DIFF_ALWAYS(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
DISPLAY_SKIPPED_HOSTS(env: ANSIBLE_DISPLAY_SKIPPED_HOSTS) = True
INTERPRETER_PYTHON(/home/skraetzig/Git/infrastructure/ansible.cfg) = /usr/bin/python3
MAX_FILE_SIZE_FOR_DIFF(/home/skraetzig/Git/infrastructure/ansible.cfg) = 1044480
RETRY_FILES_ENABLED(/home/skraetzig/Git/infrastructure/ansible.cfg) = False
CALLBACK:
========
default:
_______
display_ok_hosts(env: ANSIBLE_DISPLAY_OK_HOSTS) = True
display_skipped_hosts(env: ANSIBLE_DISPLAY_SKIPPED_HOSTS) = True
CONNECTION:
==========
local:
_____
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
paramiko_ssh:
____________
remote_user(/home/skraetzig/Git/infrastructure/ansible.cfg) = deploy
ssh_args(env: ANSIBLE_SSH_ARGS) = -C -o ControlMaster=auto -o ControlPersist=60s
psrp:
____
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
ssh:
___
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
remote_user(/home/skraetzig/Git/infrastructure/ansible.cfg) = deploy
ssh_args(env: ANSIBLE_SSH_ARGS) = -C -o ControlMaster=auto -o ControlPersist=60s
winrm:
_____
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
```
### OS / Environment
Debian 10 (Buster) and 11 (Bullseye)
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: remove users
ansible.builtin.user:
name: "{{ item }}"
state: absent
remove: true
with_items:
- user1
- user2
- user3
```
### Expected Results
The listed users `user1`, `user2`, and `user3` are successfully removed from the system if they exist; if they do not, the task should succeed without any warning.
### Actual Results
The first rollout works as expected: the users are removed successfully.
All subsequent rollouts then fail:
```console
TASK [users : remove users] ****************************************************
failed: [debian] (item=user1) => {"ansible_loop_var": "item", "changed": false, "item": "user1", "msg": "userdel: cannot remove entry 'user1' from /etc/passwd\n", "name": "user1", "rc": 1}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80267
|
https://github.com/ansible/ansible/pull/80291
|
d664f13b4a117b324f107b603e9b8e2bb9af50c5
|
e0bf76e3db3e007d039a0086276d35c28b90ff04
| 2023-03-21T20:34:12Z |
python
| 2023-11-23T14:25:35Z |
lib/ansible/modules/user.py
|
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.password is not None:
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
cmd.append('-m')
else:
cmd.append('-M')
if self.system:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def remove_user(self):
cmd = ['/usr/sam/lbin/userdel.sam']
if self.force:
cmd.append('-F')
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def modify_user(self):
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,267 |
`ansible.builtin.user`: Removing an already absent local user fails or produces a huge warning
|
### Summary
When I try to ensure that users are removed from a system, the task succeeds the first time; on the next run it fails because it cannot remove the non-existent user from the `/etc/passwd` file.
This could potentially be solved by adding the argument `local: true` to the task, but that produces a huge warning message for every user that should be removed and no longer exists: https://github.com/ansible/ansible/blob/ad9867ca5eb8ba27f827d5d5a7999cfb96ae0986/lib/ansible/modules/user.py#L1055-L1059
So either this behaviour is buggy when using `local: false`, or the warning from `local: true` should be removed (or only printed when debug is enabled).
### Issue Type
Bug Report
### Component Name
ansible.builtin.user
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.3]
config file = /home/skraetzig/Git/infrastructure/ansible.cfg
configured module search path = ['/home/skraetzig/Git/infrastructure/ansible/library']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
ansible collection location = /usr/share/ansible/third-party/collections
executable location = /usr/local/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_FORCE_COLOR(env: ANSIBLE_FORCE_COLOR) = True
ANSIBLE_NOCOWS(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
ANSIBLE_PIPELINING(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
ANY_ERRORS_FATAL(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
COLLECTIONS_PATHS(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/usr/share/ansible/third-party/collections']
CONFIG_FILE() = /home/skraetzig/Git/infrastructure/ansible.cfg
DEFAULT_FILTER_PLUGIN_PATH(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/home/skraetzig/Git/infrastructure/ansible/filter_plugins']
DEFAULT_FORKS(/home/skraetzig/Git/infrastructure/ansible.cfg) = 50
DEFAULT_LOCAL_TMP(env: ANSIBLE_LOCAL_TEMP) = /tmp/ansible-local-35zs1vlt9t
DEFAULT_MODULE_PATH(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/home/skraetzig/Git/infrastructure/ansible/library']
DEFAULT_REMOTE_USER(/home/skraetzig/Git/infrastructure/ansible.cfg) = deploy
DEFAULT_ROLES_PATH(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/home/skraetzig/Git/infrastructure/ansible/roles', '/home/skraetzig/Git/infrastructure/ansible/actions', '/hom>
DIFF_ALWAYS(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
DISPLAY_SKIPPED_HOSTS(env: ANSIBLE_DISPLAY_SKIPPED_HOSTS) = True
INTERPRETER_PYTHON(/home/skraetzig/Git/infrastructure/ansible.cfg) = /usr/bin/python3
MAX_FILE_SIZE_FOR_DIFF(/home/skraetzig/Git/infrastructure/ansible.cfg) = 1044480
RETRY_FILES_ENABLED(/home/skraetzig/Git/infrastructure/ansible.cfg) = False
CALLBACK:
========
default:
_______
display_ok_hosts(env: ANSIBLE_DISPLAY_OK_HOSTS) = True
display_skipped_hosts(env: ANSIBLE_DISPLAY_SKIPPED_HOSTS) = True
CONNECTION:
==========
local:
_____
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
paramiko_ssh:
____________
remote_user(/home/skraetzig/Git/infrastructure/ansible.cfg) = deploy
ssh_args(env: ANSIBLE_SSH_ARGS) = -C -o ControlMaster=auto -o ControlPersist=60s
psrp:
____
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
ssh:
___
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
remote_user(/home/skraetzig/Git/infrastructure/ansible.cfg) = deploy
ssh_args(env: ANSIBLE_SSH_ARGS) = -C -o ControlMaster=auto -o ControlPersist=60s
winrm:
_____
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
```
### OS / Environment
Debian 10 (Buster) and 11 (Bullseye)
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: remove users
ansible.builtin.user:
name: "{{ item }}"
state: absent
remove: true
with_items:
- user1
- user2
- user3
```
### Expected Results
The listed users `user1`, `user2`, and `user3` are successfully removed from the system if they exist; if they do not, the task should succeed without any warning.
### Actual Results
The first rollout works as expected: the users are removed successfully.
All subsequent rollouts then fail:
```console
TASK [users : remove users] ****************************************************
failed: [debian] (item=user1) => {"ansible_loop_var": "item", "changed": false, "item": "user1", "msg": "userdel: cannot remove entry 'user1' from /etc/passwd\n", "name": "user1", "rc": 1}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80267
|
https://github.com/ansible/ansible/pull/80291
|
d664f13b4a117b324f107b603e9b8e2bb9af50c5
|
e0bf76e3db3e007d039a0086276d35c28b90ff04
| 2023-03-21T20:34:12Z |
python
| 2023-11-23T14:25:35Z |
lib/ansible/modules/user.py
|
cmd = ['/usr/sam/lbin/usermod.sam']
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set(remove_existing=False, names_only=True)
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,267 |
`ansible.builtin.user`: Removing an already absent local user fails or produces a huge warning
|
### Summary
When I try to ensure that users are removed from a system, the task succeeds the first time; on the next run it fails because it cannot remove the non-existent user from the `/etc/passwd` file.
This could potentially be solved by adding the argument `local: true` to the task, but that produces a huge warning message for every user that should be removed and no longer exists: https://github.com/ansible/ansible/blob/ad9867ca5eb8ba27f827d5d5a7999cfb96ae0986/lib/ansible/modules/user.py#L1055-L1059
So either this behaviour is buggy when using `local: false`, or the warning from `local: true` should be removed (or only printed when debug is enabled).
### Issue Type
Bug Report
### Component Name
ansible.builtin.user
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.3]
config file = /home/skraetzig/Git/infrastructure/ansible.cfg
configured module search path = ['/home/skraetzig/Git/infrastructure/ansible/library']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
ansible collection location = /usr/share/ansible/third-party/collections
executable location = /usr/local/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_FORCE_COLOR(env: ANSIBLE_FORCE_COLOR) = True
ANSIBLE_NOCOWS(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
ANSIBLE_PIPELINING(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
ANY_ERRORS_FATAL(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
COLLECTIONS_PATHS(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/usr/share/ansible/third-party/collections']
CONFIG_FILE() = /home/skraetzig/Git/infrastructure/ansible.cfg
DEFAULT_FILTER_PLUGIN_PATH(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/home/skraetzig/Git/infrastructure/ansible/filter_plugins']
DEFAULT_FORKS(/home/skraetzig/Git/infrastructure/ansible.cfg) = 50
DEFAULT_LOCAL_TMP(env: ANSIBLE_LOCAL_TEMP) = /tmp/ansible-local-35zs1vlt9t
DEFAULT_MODULE_PATH(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/home/skraetzig/Git/infrastructure/ansible/library']
DEFAULT_REMOTE_USER(/home/skraetzig/Git/infrastructure/ansible.cfg) = deploy
DEFAULT_ROLES_PATH(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/home/skraetzig/Git/infrastructure/ansible/roles', '/home/skraetzig/Git/infrastructure/ansible/actions', '/hom>
DIFF_ALWAYS(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
DISPLAY_SKIPPED_HOSTS(env: ANSIBLE_DISPLAY_SKIPPED_HOSTS) = True
INTERPRETER_PYTHON(/home/skraetzig/Git/infrastructure/ansible.cfg) = /usr/bin/python3
MAX_FILE_SIZE_FOR_DIFF(/home/skraetzig/Git/infrastructure/ansible.cfg) = 1044480
RETRY_FILES_ENABLED(/home/skraetzig/Git/infrastructure/ansible.cfg) = False
CALLBACK:
========
default:
_______
display_ok_hosts(env: ANSIBLE_DISPLAY_OK_HOSTS) = True
display_skipped_hosts(env: ANSIBLE_DISPLAY_SKIPPED_HOSTS) = True
CONNECTION:
==========
local:
_____
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
paramiko_ssh:
____________
remote_user(/home/skraetzig/Git/infrastructure/ansible.cfg) = deploy
ssh_args(env: ANSIBLE_SSH_ARGS) = -C -o ControlMaster=auto -o ControlPersist=60s
psrp:
____
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
ssh:
___
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
remote_user(/home/skraetzig/Git/infrastructure/ansible.cfg) = deploy
ssh_args(env: ANSIBLE_SSH_ARGS) = -C -o ControlMaster=auto -o ControlPersist=60s
winrm:
_____
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
```
### OS / Environment
Debian 10 (Buster) and 11 (Bullseye)
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: remove users
ansible.builtin.user:
name: "{{ item }}"
state: absent
remove: true
with_items:
- user1
- user2
- user3
```
### Expected Results
The listed users `user1`, `user2`, and `user3` are successfully removed from the system if they exist; if they do not, the task should succeed without any warning.
### Actual Results
The first rollout works as expected: the users are removed successfully.
All subsequent rollouts then fail:
```console
TASK [users : remove users] ****************************************************
failed: [debian] (item=user1) => {"ansible_loop_var": "item", "changed": false, "item": "user1", "msg": "userdel: cannot remove entry 'user1' from /etc/passwd\n", "name": "user1", "rc": 1}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80267
|
https://github.com/ansible/ansible/pull/80291
|
d664f13b4a117b324f107b603e9b8e2bb9af50c5
|
e0bf76e3db3e007d039a0086276d35c28b90ff04
| 2023-03-21T20:34:12Z |
python
| 2023-11-23T14:25:35Z |
lib/ansible/modules/user.py
|
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
new_groups = groups
if self.append:
new_groups = groups | set(current_groups)
cmd.append(','.join(new_groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
cmd.append('-d')
cmd.append(self.home)
if self.move_home:
cmd.append('-m')
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd.append('-F')
cmd.append('-p')
cmd.append(self.password)
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
class BusyBox(User):
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,267 |
`ansible.builtin.user`: Removing an already absent local user fails or produces a huge warning
|
### Summary
When I try to ensure that users are removed from a system, the task succeeds the first time; on the next run it fails because it cannot remove the non-existent user from the `/etc/passwd` file.
This could potentially be solved by adding the argument `local: true` to the task, but that produces a huge warning message for every user that should be removed and no longer exists: https://github.com/ansible/ansible/blob/ad9867ca5eb8ba27f827d5d5a7999cfb96ae0986/lib/ansible/modules/user.py#L1055-L1059
So either this behaviour is buggy when using `local: false`, or the warning from `local: true` should be removed (or only printed when debug is enabled).
### Issue Type
Bug Report
### Component Name
ansible.builtin.user
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.3]
config file = /home/skraetzig/Git/infrastructure/ansible.cfg
configured module search path = ['/home/skraetzig/Git/infrastructure/ansible/library']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
ansible collection location = /usr/share/ansible/third-party/collections
executable location = /usr/local/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_FORCE_COLOR(env: ANSIBLE_FORCE_COLOR) = True
ANSIBLE_NOCOWS(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
ANSIBLE_PIPELINING(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
ANY_ERRORS_FATAL(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
COLLECTIONS_PATHS(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/usr/share/ansible/third-party/collections']
CONFIG_FILE() = /home/skraetzig/Git/infrastructure/ansible.cfg
DEFAULT_FILTER_PLUGIN_PATH(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/home/skraetzig/Git/infrastructure/ansible/filter_plugins']
DEFAULT_FORKS(/home/skraetzig/Git/infrastructure/ansible.cfg) = 50
DEFAULT_LOCAL_TMP(env: ANSIBLE_LOCAL_TEMP) = /tmp/ansible-local-35zs1vlt9t
DEFAULT_MODULE_PATH(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/home/skraetzig/Git/infrastructure/ansible/library']
DEFAULT_REMOTE_USER(/home/skraetzig/Git/infrastructure/ansible.cfg) = deploy
DEFAULT_ROLES_PATH(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/home/skraetzig/Git/infrastructure/ansible/roles', '/home/skraetzig/Git/infrastructure/ansible/actions', '/hom>
DIFF_ALWAYS(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
DISPLAY_SKIPPED_HOSTS(env: ANSIBLE_DISPLAY_SKIPPED_HOSTS) = True
INTERPRETER_PYTHON(/home/skraetzig/Git/infrastructure/ansible.cfg) = /usr/bin/python3
MAX_FILE_SIZE_FOR_DIFF(/home/skraetzig/Git/infrastructure/ansible.cfg) = 1044480
RETRY_FILES_ENABLED(/home/skraetzig/Git/infrastructure/ansible.cfg) = False
CALLBACK:
========
default:
_______
display_ok_hosts(env: ANSIBLE_DISPLAY_OK_HOSTS) = True
display_skipped_hosts(env: ANSIBLE_DISPLAY_SKIPPED_HOSTS) = True
CONNECTION:
==========
local:
_____
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
paramiko_ssh:
____________
remote_user(/home/skraetzig/Git/infrastructure/ansible.cfg) = deploy
ssh_args(env: ANSIBLE_SSH_ARGS) = -C -o ControlMaster=auto -o ControlPersist=60s
psrp:
____
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
ssh:
___
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
remote_user(/home/skraetzig/Git/infrastructure/ansible.cfg) = deploy
ssh_args(env: ANSIBLE_SSH_ARGS) = -C -o ControlMaster=auto -o ControlPersist=60s
winrm:
_____
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
```
### OS / Environment
Debian 10 (Buster) and 11 (Bullseye)
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: remove users
ansible.builtin.user:
name: "{{ item }}"
state: absent
remove: true
with_items:
- user1
- user2
- user3
```
### Expected Results
The listed users `user1`, `user2`, and `user3` are successfully removed from the system if they exist; if they do not, the task should succeed without any warning.
### Actual Results
The first rollout works as expected: the users are removed successfully.
All subsequent rollouts then fail:
```console
TASK [users : remove users] ****************************************************
failed: [debian] (item=user1) => {"ansible_loop_var": "item", "changed": false, "item": "user1", "msg": "userdel: cannot remove entry 'user1' from /etc/passwd\n", "name": "user1", "rc": 1}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80267
|
https://github.com/ansible/ansible/pull/80291
|
d664f13b4a117b324f107b603e9b8e2bb9af50c5
|
e0bf76e3db3e007d039a0086276d35c28b90ff04
| 2023-03-21T20:34:12Z |
python
| 2023-11-23T14:25:35Z |
lib/ansible/modules/user.py
|
"""
This is the BusyBox class for use on systems that have adduser, deluser,
and delgroup commands. It overrides the following methods:
- create_user()
- remove_user()
- modify_user()
"""
def create_user(self):
cmd = [self.module.get_bin_path('adduser', True)]
cmd.append('-D')
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg='Group {0} does not exist'.format(self.group))
cmd.append('-G')
cmd.append(self.group)
if self.comment is not None:
cmd.append('-g')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-h')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if not self.create_home:
cmd.append('-H')
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,267 |
`ansible.builtin.user`: Removing an already absent local user fails or produces a huge warning
|
### Summary
When I try to ensure that users are removed from a system, the task succeeds the first time; on the next run it fails because it cannot remove the non-existent user from the `/etc/passwd` file.
This could potentially be solved by adding the argument `local: true` to the task, but that produces a huge warning message for every user that should be removed and no longer exists: https://github.com/ansible/ansible/blob/ad9867ca5eb8ba27f827d5d5a7999cfb96ae0986/lib/ansible/modules/user.py#L1055-L1059
So either this behaviour is buggy when using `local: false`, or the warning from `local: true` should be removed (or only printed when debug is enabled).
### Issue Type
Bug Report
### Component Name
ansible.builtin.user
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.3]
config file = /home/skraetzig/Git/infrastructure/ansible.cfg
configured module search path = ['/home/skraetzig/Git/infrastructure/ansible/library']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
ansible collection location = /usr/share/ansible/third-party/collections
executable location = /usr/local/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_FORCE_COLOR(env: ANSIBLE_FORCE_COLOR) = True
ANSIBLE_NOCOWS(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
ANSIBLE_PIPELINING(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
ANY_ERRORS_FATAL(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
COLLECTIONS_PATHS(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/usr/share/ansible/third-party/collections']
CONFIG_FILE() = /home/skraetzig/Git/infrastructure/ansible.cfg
DEFAULT_FILTER_PLUGIN_PATH(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/home/skraetzig/Git/infrastructure/ansible/filter_plugins']
DEFAULT_FORKS(/home/skraetzig/Git/infrastructure/ansible.cfg) = 50
DEFAULT_LOCAL_TMP(env: ANSIBLE_LOCAL_TEMP) = /tmp/ansible-local-35zs1vlt9t
DEFAULT_MODULE_PATH(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/home/skraetzig/Git/infrastructure/ansible/library']
DEFAULT_REMOTE_USER(/home/skraetzig/Git/infrastructure/ansible.cfg) = deploy
DEFAULT_ROLES_PATH(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/home/skraetzig/Git/infrastructure/ansible/roles', '/home/skraetzig/Git/infrastructure/ansible/actions', '/hom>
DIFF_ALWAYS(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
DISPLAY_SKIPPED_HOSTS(env: ANSIBLE_DISPLAY_SKIPPED_HOSTS) = True
INTERPRETER_PYTHON(/home/skraetzig/Git/infrastructure/ansible.cfg) = /usr/bin/python3
MAX_FILE_SIZE_FOR_DIFF(/home/skraetzig/Git/infrastructure/ansible.cfg) = 1044480
RETRY_FILES_ENABLED(/home/skraetzig/Git/infrastructure/ansible.cfg) = False
CALLBACK:
========
default:
_______
display_ok_hosts(env: ANSIBLE_DISPLAY_OK_HOSTS) = True
display_skipped_hosts(env: ANSIBLE_DISPLAY_SKIPPED_HOSTS) = True
CONNECTION:
==========
local:
_____
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
paramiko_ssh:
____________
remote_user(/home/skraetzig/Git/infrastructure/ansible.cfg) = deploy
ssh_args(env: ANSIBLE_SSH_ARGS) = -C -o ControlMaster=auto -o ControlPersist=60s
psrp:
____
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
ssh:
___
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
remote_user(/home/skraetzig/Git/infrastructure/ansible.cfg) = deploy
ssh_args(env: ANSIBLE_SSH_ARGS) = -C -o ControlMaster=auto -o ControlPersist=60s
winrm:
_____
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
```
### OS / Environment
Debian 10 (Buster) and 11 (Bullseye)
### Steps to Reproduce
```yaml
- name: remove users
ansible.builtin.user:
name: "{{ item }}"
state: absent
remove: true
with_items:
- user1
- user2
- user3
```
### Expected Results
The listed users `user1`, `user2`, `user3` are removed from the system if they exist; if they do not exist, the task should succeed without any warning.
### Actual Results
The first rollout works as expected and the users are removed successfully.
All subsequent rollouts then fail:
```console
TASK [users : remove users] ****************************************************
failed: [debian] (item=user1) => {"ansible_loop_var": "item", "changed": false, "item": "user1", "msg": "userdel: cannot remove entry 'user1' from /etc/passwd\n", "name": "user1", "rc": 1}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80267
|
https://github.com/ansible/ansible/pull/80291
|
d664f13b4a117b324f107b603e9b8e2bb9af50c5
|
e0bf76e3db3e007d039a0086276d35c28b90ff04
| 2023-03-21T20:34:12Z |
python
| 2023-11-23T14:25:35Z |
lib/ansible/modules/user.py
|
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
if self.system:
cmd.append('-S')
cmd.append(self.name)
rc, out, err = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
if self.password is not None:
cmd = [self.module.get_bin_path('chpasswd', True)]
cmd.append('--encrypted')
data = '{name}:{password}'.format(name=self.name, password=self.password)
rc, out, err = self.execute_command(cmd, data=data)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
add_cmd_bin = self.module.get_bin_path('adduser', True)
for group in groups:
cmd = [add_cmd_bin, self.name, group]
rc, out, err = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
return rc, out, err
def remove_user(self):
|
cmd = [
self.module.get_bin_path('deluser', True),
self.name
]
if self.remove:
cmd.append('--remove-home')
return self.execute_command(cmd)
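# Note added for readability (not in the original source): 'deluser' typically
# exits non-zero when the account is already gone, so this call is only reached
# after user_exists() has reported the account as present (see main() below).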
def modify_user(self):
current_groups = self.user_group_membership()
groups = []
rc = None
out = ''
err = ''
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,267 |
`ansible.builtin.user`: Removing an already absent local user fails or produces a huge warning
|
### Summary
When I try to ensure, that users are removed from a system, the task succeeds the first time and the next time, it fails as it can not remove the non-existing user from the `/etc/passwd` file.
This issue could be potentially solved by adding the argument `local: true` at the task, but this results in a huge warning message for every user, which should get removed and does already not exist anymore: https://github.com/ansible/ansible/blob/ad9867ca5eb8ba27f827d5d5a7999cfb96ae0986/lib/ansible/modules/user.py#L1055-L1059
So either this behaviour is buggy when using `local: false` or the warning from `local: true` should get removed (or only printed when debug is enabled).
### Issue Type
Bug Report
### Component Name
ansible.builtin.user
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.3]
config file = /home/skraetzig/Git/infrastructure/ansible.cfg
configured module search path = ['/home/skraetzig/Git/infrastructure/ansible/library']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
ansible collection location = /usr/share/ansible/third-party/collections
executable location = /usr/local/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_FORCE_COLOR(env: ANSIBLE_FORCE_COLOR) = True
ANSIBLE_NOCOWS(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
ANSIBLE_PIPELINING(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
ANY_ERRORS_FATAL(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
COLLECTIONS_PATHS(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/usr/share/ansible/third-party/collections']
CONFIG_FILE() = /home/skraetzig/Git/infrastructure/ansible.cfg
DEFAULT_FILTER_PLUGIN_PATH(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/home/skraetzig/Git/infrastructure/ansible/filter_plugins']
DEFAULT_FORKS(/home/skraetzig/Git/infrastructure/ansible.cfg) = 50
DEFAULT_LOCAL_TMP(env: ANSIBLE_LOCAL_TEMP) = /tmp/ansible-local-35zs1vlt9t
DEFAULT_MODULE_PATH(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/home/skraetzig/Git/infrastructure/ansible/library']
DEFAULT_REMOTE_USER(/home/skraetzig/Git/infrastructure/ansible.cfg) = deploy
DEFAULT_ROLES_PATH(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/home/skraetzig/Git/infrastructure/ansible/roles', '/home/skraetzig/Git/infrastructure/ansible/actions', '/hom>
DIFF_ALWAYS(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
DISPLAY_SKIPPED_HOSTS(env: ANSIBLE_DISPLAY_SKIPPED_HOSTS) = True
INTERPRETER_PYTHON(/home/skraetzig/Git/infrastructure/ansible.cfg) = /usr/bin/python3
MAX_FILE_SIZE_FOR_DIFF(/home/skraetzig/Git/infrastructure/ansible.cfg) = 1044480
RETRY_FILES_ENABLED(/home/skraetzig/Git/infrastructure/ansible.cfg) = False
CALLBACK:
========
default:
_______
display_ok_hosts(env: ANSIBLE_DISPLAY_OK_HOSTS) = True
display_skipped_hosts(env: ANSIBLE_DISPLAY_SKIPPED_HOSTS) = True
CONNECTION:
==========
local:
_____
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
paramiko_ssh:
____________
remote_user(/home/skraetzig/Git/infrastructure/ansible.cfg) = deploy
ssh_args(env: ANSIBLE_SSH_ARGS) = -C -o ControlMaster=auto -o ControlPersist=60s
psrp:
____
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
ssh:
___
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
remote_user(/home/skraetzig/Git/infrastructure/ansible.cfg) = deploy
ssh_args(env: ANSIBLE_SSH_ARGS) = -C -o ControlMaster=auto -o ControlPersist=60s
winrm:
_____
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
```
### OS / Environment
Debian 10 (Buster) and 11 (Bullseye)
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- name: remove users
ansible.builtin.user:
name: "{{ item }}"
state: absent
remove: true
with_items:
- user1
- user2
- user3
```
### Expected Results
The listed users `user1`, `user2`, `user3` get successfully removed from the system, if they exist and if not, the task should be successfull without any warning.
### Actual Results
The first rollout works as expected. The users get successfully removed.
All other rollouts afterwards are then failing:
```console
TASK [users : remove users] ****************************************************
failed: [debian] (item=user1) => {"ansible_loop_var": "item", "changed": false, "item": "user1", "msg": "userdel: cannot remove entry 'user1' from /etc/passwd\n", "name": "user1", "rc": 1}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80267
|
https://github.com/ansible/ansible/pull/80291
|
d664f13b4a117b324f107b603e9b8e2bb9af50c5
|
e0bf76e3db3e007d039a0086276d35c28b90ff04
| 2023-03-21T20:34:12Z |
python
| 2023-11-23T14:25:35Z |
lib/ansible/modules/user.py
|
info = self.user_info()
add_cmd_bin = self.module.get_bin_path('adduser', True)
remove_cmd_bin = self.module.get_bin_path('delgroup', True)
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
for g in groups:
if g in group_diff:
add_cmd = [add_cmd_bin, self.name, g]
rc, out, err = self.execute_command(add_cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
for g in group_diff:
if g not in groups and not self.append:
remove_cmd = [remove_cmd_bin, self.name, g]
rc, out, err = self.execute_command(remove_cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd = [self.module.get_bin_path('chpasswd', True)]
cmd.append('--encrypted')
data = '{name}:{password}'.format(name=self.name, password=self.password)
rc, out, err = self.execute_command(cmd, data=data)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
return rc, out, err
class Alpine(BusyBox):
|
"""
This is the Alpine User manipulation class. It inherits the BusyBox class
behaviors such as using adduser and deluser commands.
"""
platform = 'Linux'
distribution = 'Alpine'
def main():
ssh_defaults = dict(
bits=0,
type='rsa',
passphrase=None,
comment='ansible-generated on %s' % socket.gethostname()
)
module = AnsibleModule(
argument_spec=dict(
state=dict(type='str', default='present', choices=['absent', 'present']),
name=dict(type='str', required=True, aliases=['user']),
uid=dict(type='int'),
non_unique=dict(type='bool', default=False),
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,267 |
`ansible.builtin.user`: Removing an already absent local user fails or produces a huge warning
|
### Summary
When I try to ensure, that users are removed from a system, the task succeeds the first time and the next time, it fails as it can not remove the non-existing user from the `/etc/passwd` file.
This issue could be potentially solved by adding the argument `local: true` at the task, but this results in a huge warning message for every user, which should get removed and does already not exist anymore: https://github.com/ansible/ansible/blob/ad9867ca5eb8ba27f827d5d5a7999cfb96ae0986/lib/ansible/modules/user.py#L1055-L1059
So either this behaviour is buggy when using `local: false` or the warning from `local: true` should get removed (or only printed when debug is enabled).
### Issue Type
Bug Report
### Component Name
ansible.builtin.user
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.3]
config file = /home/skraetzig/Git/infrastructure/ansible.cfg
configured module search path = ['/home/skraetzig/Git/infrastructure/ansible/library']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
ansible collection location = /usr/share/ansible/third-party/collections
executable location = /usr/local/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_FORCE_COLOR(env: ANSIBLE_FORCE_COLOR) = True
ANSIBLE_NOCOWS(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
ANSIBLE_PIPELINING(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
ANY_ERRORS_FATAL(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
COLLECTIONS_PATHS(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/usr/share/ansible/third-party/collections']
CONFIG_FILE() = /home/skraetzig/Git/infrastructure/ansible.cfg
DEFAULT_FILTER_PLUGIN_PATH(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/home/skraetzig/Git/infrastructure/ansible/filter_plugins']
DEFAULT_FORKS(/home/skraetzig/Git/infrastructure/ansible.cfg) = 50
DEFAULT_LOCAL_TMP(env: ANSIBLE_LOCAL_TEMP) = /tmp/ansible-local-35zs1vlt9t
DEFAULT_MODULE_PATH(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/home/skraetzig/Git/infrastructure/ansible/library']
DEFAULT_REMOTE_USER(/home/skraetzig/Git/infrastructure/ansible.cfg) = deploy
DEFAULT_ROLES_PATH(/home/skraetzig/Git/infrastructure/ansible.cfg) = ['/home/skraetzig/Git/infrastructure/ansible/roles', '/home/skraetzig/Git/infrastructure/ansible/actions', '/hom>
DIFF_ALWAYS(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
DISPLAY_SKIPPED_HOSTS(env: ANSIBLE_DISPLAY_SKIPPED_HOSTS) = True
INTERPRETER_PYTHON(/home/skraetzig/Git/infrastructure/ansible.cfg) = /usr/bin/python3
MAX_FILE_SIZE_FOR_DIFF(/home/skraetzig/Git/infrastructure/ansible.cfg) = 1044480
RETRY_FILES_ENABLED(/home/skraetzig/Git/infrastructure/ansible.cfg) = False
CALLBACK:
========
default:
_______
display_ok_hosts(env: ANSIBLE_DISPLAY_OK_HOSTS) = True
display_skipped_hosts(env: ANSIBLE_DISPLAY_SKIPPED_HOSTS) = True
CONNECTION:
==========
local:
_____
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
paramiko_ssh:
____________
remote_user(/home/skraetzig/Git/infrastructure/ansible.cfg) = deploy
ssh_args(env: ANSIBLE_SSH_ARGS) = -C -o ControlMaster=auto -o ControlPersist=60s
psrp:
____
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
ssh:
___
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
remote_user(/home/skraetzig/Git/infrastructure/ansible.cfg) = deploy
ssh_args(env: ANSIBLE_SSH_ARGS) = -C -o ControlMaster=auto -o ControlPersist=60s
winrm:
_____
pipelining(/home/skraetzig/Git/infrastructure/ansible.cfg) = True
```
### OS / Environment
Debian 10 (Buster) and 11 (Bullseye)
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- name: remove users
ansible.builtin.user:
name: "{{ item }}"
state: absent
remove: true
with_items:
- user1
- user2
- user3
```
### Expected Results
The listed users `user1`, `user2`, `user3` get successfully removed from the system, if they exist and if not, the task should be successfull without any warning.
### Actual Results
The first rollout works as expected. The users get successfully removed.
All other rollouts afterwards are then failing:
```console
TASK [users : remove users] ****************************************************
failed: [debian] (item=user1) => {"ansible_loop_var": "item", "changed": false, "item": "user1", "msg": "userdel: cannot remove entry 'user1' from /etc/passwd\n", "name": "user1", "rc": 1}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80267
|
https://github.com/ansible/ansible/pull/80291
|
d664f13b4a117b324f107b603e9b8e2bb9af50c5
|
e0bf76e3db3e007d039a0086276d35c28b90ff04
| 2023-03-21T20:34:12Z |
python
| 2023-11-23T14:25:35Z |
lib/ansible/modules/user.py
|
group=dict(type='str'),
groups=dict(type='list', elements='str'),
comment=dict(type='str'),
home=dict(type='path'),
shell=dict(type='str'),
password=dict(type='str', no_log=True),
login_class=dict(type='str'),
password_expire_max=dict(type='int', no_log=False),
password_expire_min=dict(type='int', no_log=False),
password_expire_warn=dict(type='int', no_log=False),
hidden=dict(type='bool'),
seuser=dict(type='str'),
force=dict(type='bool', default=False),
remove=dict(type='bool', default=False),
create_home=dict(type='bool', default=True, aliases=['createhome']),
skeleton=dict(type='str'),
system=dict(type='bool', default=False),
move_home=dict(type='bool', default=False),
append=dict(type='bool', default=False),
generate_ssh_key=dict(type='bool'),
ssh_key_bits=dict(type='int', default=ssh_defaults['bits']),
ssh_key_type=dict(type='str', default=ssh_defaults['type']),
ssh_key_file=dict(type='path'),
ssh_key_comment=dict(type='str', default=ssh_defaults['comment']),
|
ssh_key_passphrase=dict(type='str', no_log=True),
update_password=dict(type='str', default='always', choices=['always', 'on_create'], no_log=False),
expires=dict(type='float'),
password_lock=dict(type='bool', no_log=False),
local=dict(type='bool'),
profile=dict(type='str'),
authorization=dict(type='str'),
role=dict(type='str'),
umask=dict(type='str'),
),
supports_check_mode=True,
)
user = User(module)
user.check_password_encrypted()
module.debug('User instantiated - platform %s' % user.platform)
if user.distribution:
module.debug('User instantiated - distribution %s' % user.distribution)
rc = None
out = ''
err = ''
result = {}
result['name'] = user.name
result['state'] = user.state
if user.state == 'absent':
if user.user_exists():
if module.check_mode:
module.exit_json(changed=True)
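# Note added for readability (not in the original source): remove_user() shells
# out to userdel (or deluser on BusyBox/Alpine); a non-zero exit status is
# surfaced via fail_json below, which is where the "cannot remove entry ...
# from /etc/passwd" error quoted in this issue is reported.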
(rc, out, err) = user.remove_user()
if rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
|
result['force'] = user.force
result['remove'] = user.remove
elif user.state == 'present':
if not user.user_exists():
if module.check_mode:
module.exit_json(changed=True)
path_needs_parents = False
if user.home and user.create_home:
parent = os.path.dirname(user.home)
if not os.path.isdir(parent):
path_needs_parents = True
(rc, out, err) = user.create_user()
if path_needs_parents:
info = user.user_info()
if info is not False:
user.chown_homedir(info[2], info[3], user.home)
if module.check_mode:
result['system'] = user.name
else:
result['system'] = user.system
result['create_home'] = user.create_home
else:
(rc, out, err) = user.modify_user()
result['append'] = user.append
result['move_home'] = user.move_home
|
if rc is not None and rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
if user.password is not None:
result['password'] = 'NOT_LOGGING_PASSWORD'
if rc is None:
result['changed'] = False
else:
result['changed'] = True
if out:
result['stdout'] = out
if err:
result['stderr'] = err
if user.user_exists() and user.state == 'present':
info = user.user_info()
if info is False:
result['msg'] = "failed to look up user name: %s" % user.name
result['failed'] = True
result['uid'] = info[2]
result['group'] = info[3]
result['comment'] = info[4]
result['home'] = info[5]
result['shell'] = info[6]
if user.groups is not None:
result['groups'] = user.groups
info = user.user_info()
if user.home is None:
user.home = info[5]
if not os.path.exists(user.home) and user.create_home:
if not module.check_mode:
|
user.create_homedir(user.home)
user.chown_homedir(info[2], info[3], user.home)
result['changed'] = True
if user.sshkeygen:
(rc, out, err) = user.ssh_key_gen()
if rc is not None and rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
if rc == 0:
result['changed'] = True
(rc, out, err) = user.ssh_key_fingerprint()
if rc == 0:
result['ssh_fingerprint'] = out.strip()
else:
result['ssh_fingerprint'] = err.strip()
result['ssh_key_file'] = user.get_ssh_key_path()
result['ssh_public_key'] = user.get_ssh_public_key()
(rc, out, err) = user.set_password_expire()
if rc is None:
pass
else:
if rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
else:
result['changed'] = True
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 82,179 |
validate-modules does not catch all argument vs docs mismatches, specifically the choices field
|
### Summary
During the sanity check, if a parameter's choices list has the same length as the documented choices but contains a repeated value, `ansible-test sanity --test validate-modules ****.py` does not catch the mismatch and the check passes.
Sample:
```text
document defines:
  caching:
    description:
      - Type of ***** caching.
    type: str
    choices:
      - ReadOnly
      - ReadWrite

argument spec defines:
  caching=dict(type='str', choices=['ReadOnly', 'ReadOnly'])
```
### Issue Type
Bug Report
### Component Name
ansible-test
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.2]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/fred/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible
ansible collection location = /home/fred/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
null
```
### OS / Environment
Ubuntu
### Steps to Reproduce
```console
ansible-test sanity --test validate-modules ***.py
```
### Expected Results
The check should fail, since the argument spec choices do not match the documented choices.
### Actual Results
```console
Check pass!
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/82179
|
https://github.com/ansible/ansible/pull/82266
|
0806da55b13cbec202a6e8581340ce96f8c93ea5
|
e6e19e37f729e89060fdf313c24b91f2f1426bd3
| 2023-11-09T10:13:39Z |
python
| 2023-11-28T15:09:29Z |
test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/utils.py
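The following is a hedged sketch (not the actual validate-modules change from the linked pull request) of a duplicate-aware comparison that would catch the mismatch described above, where a length-based or set-based comparison could treat a repeated choice as a match.

```python
from collections import Counter

documented_choices = ['ReadOnly', 'ReadWrite']
argument_spec_choices = ['ReadOnly', 'ReadOnly']

# Counter preserves multiplicity, so a repeated value no longer looks
# equivalent to the documented list.
if Counter(documented_choices) != Counter(argument_spec_choices):
    print('choices in the argument spec do not match the documentation: '
          '%s != %s' % (argument_spec_choices, documented_choices))
```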
|
#
#
#
#
from __future__ import annotations
import ast
import datetime
import os
import re
import sys
from io import BytesIO, TextIOWrapper
import yaml
import yaml.reader
from ansible.module_utils.common.text.converters import to_text
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.yaml import SafeLoader
from ansible.module_utils.six import string_types
from ansible.parsing.yaml.loader import AnsibleLoader
class AnsibleTextIOWrapper(TextIOWrapper):
|
def write(self, s):
super(AnsibleTextIOWrapper, self).write(to_text(s, self.encoding, errors='replace'))
def find_executable(executable, cwd=None, path=None):
"""Finds the full path to the executable specified"""
match = None
real_cwd = os.getcwd()
if not cwd:
cwd = real_cwd
if os.path.dirname(executable):
target = os.path.join(cwd, executable)
if os.path.exists(target) and os.access(target, os.F_OK | os.X_OK):
match = executable
else:
path = os.environ.get('PATH', os.path.defpath)
path_dirs = path.split(os.path.pathsep)
seen_dirs = set()
for path_dir in path_dirs:
if path_dir in seen_dirs:
continue
seen_dirs.add(path_dir)
if os.path.abspath(path_dir) == real_cwd:
path_dir = cwd
candidate = os.path.join(path_dir, executable)
if os.path.exists(candidate) and os.access(candidate, os.F_OK | os.X_OK):
match = candidate
break
return match
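# Illustrative usage (these calls are not part of the original file):
#     find_executable('python3')             # searched on each PATH entry
#     find_executable('bin/tool', cwd=tmp)   # a name containing a directory
#                                            # part is resolved against cwd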
def find_globals(g, tree):
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 82,179 |
validate-modules does not catch all argument vs docs mismatches, specifically the choices field
|
### Summary
During the sanity check, if a parameter's choices list in the argument spec has the same number of entries as the choices in the documentation but contains a repeated value, `ansible-test sanity --test validate-modules ****.py` does not detect the mismatch and the check passes.
Sample:
```e.g.
document defines:
caching:
  description:
    - Type of ***** caching.
  type: str
  choices:
    - ReadOnly
    - ReadWrite
argument spec defines: caching=dict(type='str', choices=['ReadOnly', 'ReadOnly'])
```
### Issue Type
Bug Report
### Component Name
ansible-test
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.2]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/fred/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible
ansible collection location = /home/fred/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
null
```
### OS / Environment
Ubuntu
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
ansible-test sanity --test validate-modules ***.py
```
### Expected Results
The check should not pass!
### Actual Results
```console
The check passes!
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/82179
|
https://github.com/ansible/ansible/pull/82266
|
0806da55b13cbec202a6e8581340ce96f8c93ea5
|
e6e19e37f729e89060fdf313c24b91f2f1426bd3
| 2023-11-09T10:13:39Z |
python
| 2023-11-28T15:09:29Z |
test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/utils.py
|
"""Uses AST to find globals in an ast tree"""
for child in tree:
if hasattr(child, 'body') and isinstance(child.body, list):
find_globals(g, child.body)
elif isinstance(child, (ast.FunctionDef, ast.ClassDef)):
g.add(child.name)
continue
elif isinstance(child, ast.Assign):
try:
g.add(child.targets[0].id)
except (IndexError, AttributeError):
pass
elif isinstance(child, ast.Import):
g.add(child.names[0].name)
elif isinstance(child, ast.ImportFrom):
for name in child.names:
g_name = name.asname or name.name
if g_name == '*':
continue
g.add(g_name)
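# Illustrative usage sketch, not part of the original file: collect module-level names
# (imports and assignment targets) from a parsed source snippet.
def _example_find_globals():  # hypothetical helper, for illustration only
    names = set()
    find_globals(names, ast.parse("import os\nFOO = 1\n").body)
    return names  # {'os', 'FOO'}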
class CaptureStd():
"""Context manager to handle capturing stderr and stdout"""
def __enter__(self):
self.sys_stdout = sys.stdout
self.sys_stderr = sys.stderr
sys.stdout = self.stdout = AnsibleTextIOWrapper(BytesIO(), encoding=self.sys_stdout.encoding)
sys.stderr = self.stderr = AnsibleTextIOWrapper(BytesIO(), encoding=self.sys_stderr.encoding)
return self
def __exit__(self, exc_type, exc_value, traceback):
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 82,179 |
validate-modules does not catch all argument vs docs mismatches, specifically the choices field
|
### Summary
During the sanity check, if a parameter's choices list in the argument spec has the same number of entries as the choices in the documentation but contains a repeated value, `ansible-test sanity --test validate-modules ****.py` does not detect the mismatch and the check passes.
Sample:
```e.g.
document defines:
caching:
  description:
    - Type of ***** caching.
  type: str
  choices:
    - ReadOnly
    - ReadWrite
argument spec defines: caching=dict(type='str', choices=['ReadOnly', 'ReadOnly'])
```
### Issue Type
Bug Report
### Component Name
ansible-test
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.2]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/fred/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible
ansible collection location = /home/fred/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
null
```
### OS / Environment
Ubuntu
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
ansible-test sanity --test validate-modules ***.py
```
### Expected Results
The check should not pass!
### Actual Results
```console
The check passes!
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/82179
|
https://github.com/ansible/ansible/pull/82266
|
0806da55b13cbec202a6e8581340ce96f8c93ea5
|
e6e19e37f729e89060fdf313c24b91f2f1426bd3
| 2023-11-09T10:13:39Z |
python
| 2023-11-28T15:09:29Z |
test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/utils.py
|
sys.stdout = self.sys_stdout
sys.stderr = self.sys_stderr
def get(self):
"""Return ``(stdout, stderr)``"""
return self.stdout.buffer.getvalue(), self.stderr.buffer.getvalue()
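# Illustrative usage sketch, not part of the original file: capture anything printed while
# a module is imported or executed under the validator.
def _example_capture_std():  # hypothetical helper, for illustration only
    with CaptureStd() as capture:
        print('unexpected output')
        capture.stdout.flush()  # make sure buffered text reaches the underlying BytesIO
    stdout, stderr = capture.get()
    return stdout  # b'unexpected output\n' (both returned values are bytes)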
def get_module_name_from_filename(filename, collection):
if collection:
path = os.path.join(collection, filename)
else:
path = os.path.relpath(filename, 'lib')
name = os.path.splitext(path)[0].replace(os.path.sep, '.')
return name
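# Worked example (illustrative): with collection=None, 'lib/ansible/modules/ping.py' is made
# relative to 'lib' and becomes 'ansible.modules.ping'; with a collection prefix the joined
# path is dotted the same way, e.g. 'plugins/modules/foo.py' under 'ansible_collections/ns/col'
# becomes 'ansible_collections.ns.col.plugins.modules.foo' (the prefix format is assumed here).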
def parse_yaml(value, lineno, module, name, load_all=False, ansible_loader=False):
traces = []
errors = []
data = None
if load_all:
yaml_load = yaml.load_all
else:
yaml_load = yaml.load
if ansible_loader:
loader = AnsibleLoader
else:
loader = SafeLoader
try:
data = yaml_load(value, Loader=loader)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 82,179 |
validate-modules does not catch all argument vs docs mismatches, specifically the choices field
|
### Summary
During the sanity check, if a parameter's choices list in the argument spec has the same number of entries as the choices in the documentation but contains a repeated value, `ansible-test sanity --test validate-modules ****.py` does not detect the mismatch and the check passes.
Sample:
```e.g.
document defines:
caching:
  description:
    - Type of ***** caching.
  type: str
  choices:
    - ReadOnly
    - ReadWrite
argument spec defines: caching=dict(type='str', choices=['ReadOnly', 'ReadOnly'])
```
### Issue Type
Bug Report
### Component Name
ansible-test
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.2]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/fred/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible
ansible collection location = /home/fred/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
null
```
### OS / Environment
Ubuntu
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
ansible-test sanity --test validate-modules ***.py
```
### Expected Results
The check should not pass!
### Actual Results
```console
The check passes!
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/82179
|
https://github.com/ansible/ansible/pull/82266
|
0806da55b13cbec202a6e8581340ce96f8c93ea5
|
e6e19e37f729e89060fdf313c24b91f2f1426bd3
| 2023-11-09T10:13:39Z |
python
| 2023-11-28T15:09:29Z |
test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/utils.py
|
if load_all:
data = list(data)
except yaml.MarkedYAMLError as e:
errors.append({
'msg': '%s is not valid YAML' % name,
'line': e.problem_mark.line + lineno,
'column': e.problem_mark.column + 1
})
traces.append(e)
except yaml.reader.ReaderError as e:
traces.append(e)
errors.append({
'msg': ('%s is not valid YAML. Character '
'0x%x at position %d.' % (name, e.character, e.position)),
'line': lineno
})
except yaml.YAMLError as e:
traces.append(e)
errors.append({
'msg': '%s is not valid YAML: %s: %s' % (name, type(e), e),
'line': lineno
})
return data, errors, traces
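# Illustrative usage sketch, not part of the original file: malformed YAML does not raise,
# it is reported through the returned error list, with line numbers offset by `lineno`.
def _example_parse_yaml():  # hypothetical helper, for illustration only
    data, errors, traces = parse_yaml('key: [unclosed', 10, 'mymodule', 'EXAMPLES')
    return data, errors  # data is None, errors[0]['msg'] == 'EXAMPLES is not valid YAML'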
def is_empty(value):
"""Evaluate null like values excluding False"""
if value is False:
return False
return not bool(value)
def compare_unordered_lists(a, b):
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 82,179 |
validate-modules does not catch all argument vs docs mismatches, specifically the choices field
|
### Summary
During the sanity check, if a parameter's choices list in the argument spec has the same number of entries as the choices in the documentation but contains a repeated value, `ansible-test sanity --test validate-modules ****.py` does not detect the mismatch and the check passes.
Sample:
```e.g.
document defines:
caching:
  description:
    - Type of ***** caching.
  type: str
  choices:
    - ReadOnly
    - ReadWrite
argument spec defines: caching=dict(type='str', choices=['ReadOnly', 'ReadOnly'])
```
### Issue Type
Bug Report
### Component Name
ansible-test
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.2]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/fred/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible
ansible collection location = /home/fred/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
null
```
### OS / Environment
Ubuntu
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
ansible-test sanity --test validate-modules ***.py
```
### Expected Results
The check should not pass!
### Actual Results
```console
The check passes!
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/82179
|
https://github.com/ansible/ansible/pull/82266
|
0806da55b13cbec202a6e8581340ce96f8c93ea5
|
e6e19e37f729e89060fdf313c24b91f2f1426bd3
| 2023-11-09T10:13:39Z |
python
| 2023-11-28T15:09:29Z |
test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/utils.py
|
"""Safe list comparisons
Supports:
- unordered lists
- unhashable elements
"""
return len(a) == len(b) and all(x in b for x in a)
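# Illustrative note for issue #82179, not part of the original file: length plus membership
# lets duplicated values slip through in one direction, e.g.
#     compare_unordered_lists(['ReadOnly', 'ReadOnly'], ['ReadOnly', 'ReadWrite'])  # True
#     compare_unordered_lists(['ReadOnly', 'ReadWrite'], ['ReadOnly', 'ReadOnly'])  # False
# One possible stricter variant also counts occurrences; this sketch is an assumption, not
# the change that was actually merged.
def _compare_unordered_lists_strict(a, b):  # hypothetical variant, for illustration only
    return len(a) == len(b) and all(a.count(x) == b.count(x) for x in a)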
class NoArgsAnsibleModule(AnsibleModule):
"""AnsibleModule that does not actually load params. This is used to get access to the
methods within AnsibleModule without having to fake a bunch of data
"""
def _load_params(self):
self.params = {'_ansible_selinux_special_fs': [], '_ansible_remote_tmp': '/tmp', '_ansible_keep_remote_files': False, '_ansible_check_mode': False}
def parse_isodate(v, allow_date):
if allow_date:
if isinstance(v, datetime.date):
return v
msg = 'Expected ISO 8601 date string (YYYY-MM-DD) or YAML date'
else:
msg = 'Expected ISO 8601 date string (YYYY-MM-DD)'
if not isinstance(v, string_types):
raise ValueError(msg)
if not re.match('^[0-9]{4}-[0-9]{2}-[0-9]{2}$', v):
raise ValueError(msg)
try:
return datetime.datetime.strptime(v, '%Y-%m-%d').date()
except ValueError:
raise ValueError(msg)
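# Illustrative usage sketch, not part of the original file:
#     parse_isodate('2023-11-28', False)                -> datetime.date(2023, 11, 28)
#     parse_isodate(datetime.date(2023, 11, 28), True)  -> returned unchanged
#     parse_isodate('28-11-2023', False)                -> raises ValueError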
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 82,244 |
ansible_processor_threads_per_core Ansible Facts incorrect against AMD Genoa based systems
|
### Summary
ansible_processor_threads_per_core shows incorrect information against AMD Genoa (AMD EPYC 9654P 96-Core Processor) based hosts.
Issue Description :
ansible_processor_threads_per_core returns 1 instead of 2, on a host where HT is enabled. command output of lscpu shows the right information.
Setup module output:
```
"ansible_processor_threads_per_core": 1,
```
lscpu output:
```
Thread(s) per core: 2
```
Ansible Versions:
```
ansible [core 2.11.6]
python version = 3.6.8
jinja version = 2.11.3
```
### Issue Type
Bug Report
### Component Name
yum
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.6]
python version = 3.6.8
jinja version = 2.11.3
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
Nothing returned
```
### OS / Environment
CentOS (CentOS Linux release 7.9.2009 (Core))
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
ansible -i /tmp/inv all -m setup -a 'gather_subset=!all,!any,virtual,network,hardware'
### Expected Results
Expected : "ansible_processor_threads_per_core": 2,
Getting: ""ansible_processor_threads_per_core": 1,"
### Actual Results
```console
Details given as above
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/82244
|
https://github.com/ansible/ansible/pull/82261
|
fd2d0ecfb7d2fbadcfd41690aeb56067c8a04f82
|
e80507af32fad1ccaa62f8e6630f9095fe253004
| 2023-11-20T06:30:45Z |
python
| 2023-11-28T15:49:52Z |
lib/ansible/module_utils/facts/hardware/linux.py
|
#
#
#
from __future__ import annotations
import collections
import errno
import glob
import json
import os
import re
import sys
import time
from multiprocessing import cpu_count
from multiprocessing.pool import ThreadPool
from ansible.module_utils.common.text.converters import to_text
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.text.formatters import bytes_to_human
from ansible.module_utils.facts.hardware.base import Hardware, HardwareCollector
from ansible.module_utils.facts.utils import get_file_content, get_file_lines, get_mount_size
from ansible.module_utils.six import iteritems
from ansible.module_utils.facts import timeout
def get_partition_uuid(partname):
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 82,244 |
ansible_processor_threads_per_core Ansible Facts incorrect against AMD Genoa based systems
|
### Summary
ansible_processor_threads_per_core shows incorrect information against AMD Genoa (AMD EPYC 9654P 96-Core Processor) based hosts.
Issue Description :
ansible_processor_threads_per_core returns 1 instead of 2, on a host where HT is enabled. command output of lscpu shows the right information.
Setup module output:
```
"ansible_processor_threads_per_core": 1,
```
lscpu output:
```
Thread(s) per core: 2
```
Ansible Versions:
```
ansible [core 2.11.6]
python version = 3.6.8
jinja version = 2.11.3
```
### Issue Type
Bug Report
### Component Name
yum
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.6]
python version = 3.6.8
jinja version = 2.11.3
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
Nothing returned
```
### OS / Environment
CentOS (CentOS Linux release 7.9.2009 (Core))
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
ansible -i /tmp/inv all -m setup -a 'gather_subset=!all,!any,virtual,network,hardware'
### Expected Results
Expected : "ansible_processor_threads_per_core": 2,
Getting: ""ansible_processor_threads_per_core": 1,"
### Actual Results
```console
Details given as above
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/82244
|
https://github.com/ansible/ansible/pull/82261
|
fd2d0ecfb7d2fbadcfd41690aeb56067c8a04f82
|
e80507af32fad1ccaa62f8e6630f9095fe253004
| 2023-11-20T06:30:45Z |
python
| 2023-11-28T15:49:52Z |
lib/ansible/module_utils/facts/hardware/linux.py
|
try:
uuids = os.listdir("/dev/disk/by-uuid")
except OSError:
return
for uuid in uuids:
dev = os.path.realpath("/dev/disk/by-uuid/" + uuid)
if dev == ("/dev/" + partname):
return uuid
return None
class LinuxHardware(Hardware):
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 82,244 |
ansible_processor_threads_per_core Ansible Facts incorrect against AMD Genoa based systems
|
### Summary
ansible_processor_threads_per_core shows incorrect information against AMD Genoa (AMD EPYC 9654P 96-Core Processor) based hosts.
Issue Description :
ansible_processor_threads_per_core returns 1 instead of 2, on a host where HT is enabled. command output of lscpu shows the right information.
Setup module output:
```
"ansible_processor_threads_per_core": 1,
```
lscpu output:
```
Thread(s) per core: 2
```
Ansible Versions:
```
ansible [core 2.11.6]
python version = 3.6.8
jinja version = 2.11.3
```
### Issue Type
Bug Report
### Component Name
yum
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.6]
python version = 3.6.8
jinja version = 2.11.3
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
Nothing returned
```
### OS / Environment
CentOS (CentOS Linux release 7.9.2009 (Core))
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
ansible -i /tmp/inv all -m setup -a 'gather_subset=!all,!any,virtual,network,hardware'
### Expected Results
Expected : "ansible_processor_threads_per_core": 2,
Getting: ""ansible_processor_threads_per_core": 1,"
### Actual Results
```console
Details given as above
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/82244
|
https://github.com/ansible/ansible/pull/82261
|
fd2d0ecfb7d2fbadcfd41690aeb56067c8a04f82
|
e80507af32fad1ccaa62f8e6630f9095fe253004
| 2023-11-20T06:30:45Z |
python
| 2023-11-28T15:49:52Z |
lib/ansible/module_utils/facts/hardware/linux.py
|
"""
Linux-specific subclass of Hardware. Defines memory and CPU facts:
- memfree_mb
- memtotal_mb
- swapfree_mb
- swaptotal_mb
- processor (a list)
- processor_cores
- processor_count
In addition, it also defines number of DMI facts and device facts.
"""
platform = 'Linux'
ORIGINAL_MEMORY_FACTS = frozenset(('MemTotal', 'SwapTotal', 'MemFree', 'SwapFree'))
MEMORY_FACTS = ORIGINAL_MEMORY_FACTS.union(('Buffers', 'Cached', 'SwapCached'))
BIND_MOUNT_RE = re.compile(r'.*\]')
MTAB_BIND_MOUNT_RE = re.compile(r'.*bind.*"')
OCTAL_ESCAPE_RE = re.compile(r'\\[0-9]{3}')
def populate(self, collected_facts=None):
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 82,244 |
ansible_processor_threads_per_core Ansible Facts incorrect against AMD Genoa based systems
|
### Summary
ansible_processor_threads_per_core shows incorrect information against AMD Genoa (AMD EPYC 9654P 96-Core Processor) based hosts.
Issue Description :
ansible_processor_threads_per_core returns 1 instead of 2, on a host where HT is enabled. command output of lscpu shows the right information.
Setup module output:
```
"ansible_processor_threads_per_core": 1,
```
lscpu output:
```
Thread(s) per core: 2
```
Ansible Versions:
```
ansible [core 2.11.6]
python version = 3.6.8
jinja version = 2.11.3
```
### Issue Type
Bug Report
### Component Name
yum
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.6]
python version = 3.6.8
jinja version = 2.11.3
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
Nothing returned
```
### OS / Environment
CentOS (CentOS Linux release 7.9.2009 (Core))
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
ansible -i /tmp/inv all -m setup -a 'gather_subset=!all,!any,virtual,network,hardware'
### Expected Results
Expected : "ansible_processor_threads_per_core": 2,
Getting: ""ansible_processor_threads_per_core": 1,"
### Actual Results
```console
Details given as above
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/82244
|
https://github.com/ansible/ansible/pull/82261
|
fd2d0ecfb7d2fbadcfd41690aeb56067c8a04f82
|
e80507af32fad1ccaa62f8e6630f9095fe253004
| 2023-11-20T06:30:45Z |
python
| 2023-11-28T15:49:52Z |
lib/ansible/module_utils/facts/hardware/linux.py
|
hardware_facts = {}
locale = get_best_parsable_locale(self.module)
self.module.run_command_environ_update = {'LANG': locale, 'LC_ALL': locale, 'LC_NUMERIC': locale}
cpu_facts = self.get_cpu_facts(collected_facts=collected_facts)
memory_facts = self.get_memory_facts()
dmi_facts = self.get_dmi_facts()
device_facts = self.get_device_facts()
uptime_facts = self.get_uptime_facts()
lvm_facts = self.get_lvm_facts()
mount_facts = {}
try:
mount_facts = self.get_mount_facts()
except timeout.TimeoutError:
self.module.warn("No mount facts were gathered due to timeout.")
hardware_facts.update(cpu_facts)
hardware_facts.update(memory_facts)
hardware_facts.update(dmi_facts)
hardware_facts.update(device_facts)
hardware_facts.update(uptime_facts)
hardware_facts.update(lvm_facts)
hardware_facts.update(mount_facts)
return hardware_facts
def get_memory_facts(self):
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 82,244 |
ansible_processor_threads_per_core Ansible Facts incorrect against AMD Genoa based systems
|
### Summary
ansible_processor_threads_per_core shows incorrect information against AMD Genoa (AMD EPYC 9654P 96-Core Processor) based hosts.
Issue Description :
ansible_processor_threads_per_core returns 1 instead of 2, on a host where HT is enabled. command output of lscpu shows the right information.
Setup module output:
```
"ansible_processor_threads_per_core": 1,
```
lscpu output:
```
Thread(s) per core: 2
```
Ansible Versions:
```
ansible [core 2.11.6]
python version = 3.6.8
jinja version = 2.11.3
```
### Issue Type
Bug Report
### Component Name
yum
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.6]
python version = 3.6.8
jinja version = 2.11.3
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
Nothing returned
```
### OS / Environment
CentOS (CentOS Linux release 7.9.2009 (Core))
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
ansible -i /tmp/inv all -m setup -a 'gather_subset=!all,!any,virtual,network,hardware'
### Expected Results
Expected : "ansible_processor_threads_per_core": 2,
Getting: ""ansible_processor_threads_per_core": 1,"
### Actual Results
```console
Details given as above
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/82244
|
https://github.com/ansible/ansible/pull/82261
|
fd2d0ecfb7d2fbadcfd41690aeb56067c8a04f82
|
e80507af32fad1ccaa62f8e6630f9095fe253004
| 2023-11-20T06:30:45Z |
python
| 2023-11-28T15:49:52Z |
lib/ansible/module_utils/facts/hardware/linux.py
|
memory_facts = {}
if not os.access("/proc/meminfo", os.R_OK):
return memory_facts
memstats = {}
for line in get_file_lines("/proc/meminfo"):
data = line.split(":", 1)
key = data[0]
if key in self.ORIGINAL_MEMORY_FACTS:
val = data[1].strip().split(' ')[0]
memory_facts["%s_mb" % key.lower()] = int(val) // 1024
if key in self.MEMORY_FACTS:
val = data[1].strip().split(' ')[0]
memstats[key.lower()] = int(val) // 1024
if None not in (memstats.get('memtotal'), memstats.get('memfree')):
memstats['real:used'] = memstats['memtotal'] - memstats['memfree']
if None not in (memstats.get('cached'), memstats.get('memfree'), memstats.get('buffers')):
memstats['nocache:free'] = memstats['cached'] + memstats['memfree'] + memstats['buffers']
if None not in (memstats.get('memtotal'), memstats.get('nocache:free')):
memstats['nocache:used'] = memstats['memtotal'] - memstats['nocache:free']
if None not in (memstats.get('swaptotal'), memstats.get('swapfree')):
memstats['swap:used'] = memstats['swaptotal'] - memstats['swapfree']
memory_facts['memory_mb'] = {
'real': {
'total': memstats.get('memtotal'),
'used': memstats.get('real:used'),
'free': memstats.get('memfree'),
},
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 82,244 |
ansible_processor_threads_per_core Ansible Facts incorrect against AMD Genoa based systems
|
### Summary
ansible_processor_threads_per_core shows incorrect information against AMD Genoa (AMD EPYC 9654P 96-Core Processor) based hosts.
Issue Description :
ansible_processor_threads_per_core returns 1 instead of 2, on a host where HT is enabled. command output of lscpu shows the right information.
Setup module output:
```
"ansible_processor_threads_per_core": 1,
```
lscpu output:
```
Thread(s) per core: 2
```
Ansible Versions:
```
ansible [core 2.11.6]
python version = 3.6.8
jinja version = 2.11.3
```
### Issue Type
Bug Report
### Component Name
yum
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.6]
python version = 3.6.8
jinja version = 2.11.3
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
Nothing returned
```
### OS / Environment
CentOS (CentOS Linux release 7.9.2009 (Core))
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
ansible -i /tmp/inv all -m setup -a 'gather_subset=!all,!any,virtual,network,hardware'
### Expected Results
Expected : "ansible_processor_threads_per_core": 2,
Getting: ""ansible_processor_threads_per_core": 1,"
### Actual Results
```console
Details given as above
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/82244
|
https://github.com/ansible/ansible/pull/82261
|
fd2d0ecfb7d2fbadcfd41690aeb56067c8a04f82
|
e80507af32fad1ccaa62f8e6630f9095fe253004
| 2023-11-20T06:30:45Z |
python
| 2023-11-28T15:49:52Z |
lib/ansible/module_utils/facts/hardware/linux.py
|
'nocache': {
'free': memstats.get('nocache:free'),
'used': memstats.get('nocache:used'),
},
'swap': {
'total': memstats.get('swaptotal'),
'free': memstats.get('swapfree'),
'used': memstats.get('swap:used'),
'cached': memstats.get('swapcached'),
},
}
return memory_facts
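# Worked example with illustrative values: after the kB figures from /proc/meminfo are
# converted to MiB above, suppose memtotal=16384, memfree=2048, buffers=512 and cached=4096.
# Then real:used = 16384 - 2048 = 14336, nocache:free = 4096 + 2048 + 512 = 6656 and
# nocache:used = 16384 - 6656 = 9728, so the 'nocache' pair is what is usually read as the
# memory actually available to applications.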
def get_cpu_facts(self, collected_facts=None):
cpu_facts = {}
collected_facts = collected_facts or {}
i = 0
vendor_id_occurrence = 0
model_name_occurrence = 0
processor_occurrence = 0
physid = 0
coreid = 0
sockets = {}
cores = {}
zp = 0
zmt = 0
xen = False
xen_paravirt = False
try:
if os.path.exists('/proc/xen'):
xen = True
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 82,244 |
ansible_processor_threads_per_core Ansible Facts incorrect against AMD Genoa based systems
|
### Summary
ansible_processor_threads_per_core shows incorrect information against AMD Genoa (AMD EPYC 9654P 96-Core Processor) based hosts.
Issue Description :
ansible_processor_threads_per_core returns 1 instead of 2, on a host where HT is enabled. command output of lscpu shows the right information.
Setup module output:
```
"ansible_processor_threads_per_core": 1,
```
lscpu output:
```
Thread(s) per core: 2
```
Ansible Versions:
```
ansible [core 2.11.6]
python version = 3.6.8
jinja version = 2.11.3
```
### Issue Type
Bug Report
### Component Name
yum
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.6]
python version = 3.6.8
jinja version = 2.11.3
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
Nothing returned
```
### OS / Environment
CentOS (CentOS Linux release 7.9.2009 (Core))
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
ansible -i /tmp/inv all -m setup -a 'gather_subset=!all,!any,virtual,network,hardware'
### Expected Results
Expected : "ansible_processor_threads_per_core": 2,
Getting: ""ansible_processor_threads_per_core": 1,"
### Actual Results
```console
Details given as above
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/82244
|
https://github.com/ansible/ansible/pull/82261
|
fd2d0ecfb7d2fbadcfd41690aeb56067c8a04f82
|
e80507af32fad1ccaa62f8e6630f9095fe253004
| 2023-11-20T06:30:45Z |
python
| 2023-11-28T15:49:52Z |
lib/ansible/module_utils/facts/hardware/linux.py
|
else:
for line in get_file_lines('/sys/hypervisor/type'):
if line.strip() == 'xen':
xen = True
break
except IOError:
pass
if not os.access("/proc/cpuinfo", os.R_OK):
return cpu_facts
cpu_facts['processor'] = []
for line in get_file_lines('/proc/cpuinfo'):
data = line.split(":", 1)
key = data[0].strip()
try:
val = data[1].strip()
except IndexError:
val = ""
if xen:
if key == 'flags':
if 'vme' not in val:
xen_paravirt = True
if key in ['model name', 'Processor', 'vendor_id', 'cpu', 'Vendor', 'processor']:
if 'processor' not in cpu_facts:
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 82,244 |
ansible_processor_threads_per_core Ansible Facts incorrect against AMD Genoa based systems
|
### Summary
ansible_processor_threads_per_core shows incorrect information against AMD Genoa (AMD EPYC 9654P 96-Core Processor) based hosts.
Issue Description :
ansible_processor_threads_per_core returns 1 instead of 2, on a host where HT is enabled. command output of lscpu shows the right information.
Setup module output:
```
"ansible_processor_threads_per_core": 1,
```
lscpu output:
```
Thread(s) per core: 2
```
Ansible Versions:
```
ansible [core 2.11.6]
python version = 3.6.8
jinja version = 2.11.3
```
### Issue Type
Bug Report
### Component Name
yum
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.6]
python version = 3.6.8
jinja version = 2.11.3
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
Nothing returned
```
### OS / Environment
CentOS (CentOS Linux release 7.9.2009 (Core))
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
ansible -i /tmp/inv all -m setup -a 'gather_subset=!all,!any,virtual,network,hardware'
### Expected Results
Expected : "ansible_processor_threads_per_core": 2,
Getting: ""ansible_processor_threads_per_core": 1,"
### Actual Results
```console
Details given as above
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/82244
|
https://github.com/ansible/ansible/pull/82261
|
fd2d0ecfb7d2fbadcfd41690aeb56067c8a04f82
|
e80507af32fad1ccaa62f8e6630f9095fe253004
| 2023-11-20T06:30:45Z |
python
| 2023-11-28T15:49:52Z |
lib/ansible/module_utils/facts/hardware/linux.py
|
cpu_facts['processor'] = []
cpu_facts['processor'].append(val)
if key == 'vendor_id':
vendor_id_occurrence += 1
if key == 'model name':
model_name_occurrence += 1
if key == 'processor':
processor_occurrence += 1
i += 1
elif key == 'physical id':
physid = val
if physid not in sockets:
sockets[physid] = 1
elif key == 'core id':
coreid = val
if coreid not in sockets:
cores[coreid] = 1
elif key == 'cpu cores':
sockets[physid] = int(val)
elif key == 'siblings':
cores[coreid] = int(val)
elif key == '# processors':
zp = int(val)
elif key == 'max thread id':
zmt = int(val) + 1
elif key == 'ncpus active':
i = int(val)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 82,244 |
ansible_processor_threads_per_core Ansible Facts incorrect against AMD Genoa based systems
|
### Summary
ansible_processor_threads_per_core shows incorrect information against AMD Genoa (AMD EPYC 9654P 96-Core Processor) based hosts.
Issue Description :
ansible_processor_threads_per_core returns 1 instead of 2, on a host where HT is enabled. command output of lscpu shows the right information.
Setup module output:
```
"ansible_processor_threads_per_core": 1,
```
lscpu output:
```
Thread(s) per core: 2
```
Ansible Versions:
```
ansible [core 2.11.6]
python version = 3.6.8
jinja version = 2.11.3
```
### Issue Type
Bug Report
### Component Name
yum
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.6]
python version = 3.6.8
jinja version = 2.11.3
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
Nothing returned
```
### OS / Environment
CentOS (CentOS Linux release 7.9.2009 (Core))
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
ansible -i /tmp/inv all -m setup -a 'gather_subset=!all,!any,virtual,network,hardware'
### Expected Results
Expected : "ansible_processor_threads_per_core": 2,
Getting: ""ansible_processor_threads_per_core": 1,"
### Actual Results
```console
Details given as above
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/82244
|
https://github.com/ansible/ansible/pull/82261
|
fd2d0ecfb7d2fbadcfd41690aeb56067c8a04f82
|
e80507af32fad1ccaa62f8e6630f9095fe253004
| 2023-11-20T06:30:45Z |
python
| 2023-11-28T15:49:52Z |
lib/ansible/module_utils/facts/hardware/linux.py
|
if vendor_id_occurrence > 0:
if vendor_id_occurrence == model_name_occurrence:
i = vendor_id_occurrence
if collected_facts.get('ansible_architecture', '').startswith(('armv', 'aarch', 'ppc')):
i = processor_occurrence
if collected_facts.get('ansible_architecture') == 's390x':
cpu_facts['processor_count'] = 1
cpu_facts['processor_cores'] = zp // zmt
cpu_facts['processor_threads_per_core'] = zmt
cpu_facts['processor_vcpus'] = zp
cpu_facts['processor_nproc'] = zp
else:
if xen_paravirt:
cpu_facts['processor_count'] = i
cpu_facts['processor_cores'] = i
cpu_facts['processor_threads_per_core'] = 1
cpu_facts['processor_vcpus'] = i
cpu_facts['processor_nproc'] = i
else:
if sockets:
cpu_facts['processor_count'] = len(sockets)
else:
cpu_facts['processor_count'] = i
socket_values = list(sockets.values())
if socket_values and socket_values[0]:
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 82,244 |
ansible_processor_threads_per_core Ansible Facts incorrect against AMD Genoa based systems
|
### Summary
ansible_processor_threads_per_core shows incorrect information against AMD Genoa (AMD EPYC 9654P 96-Core Processor) based hosts.
Issue Description :
ansible_processor_threads_per_core returns 1 instead of 2, on a host where HT is enabled. command output of lscpu shows the right information.
Setup module output:
```
"ansible_processor_threads_per_core": 1,
```
lscpu output:
```
Thread(s) per core: 2
```
Ansible Versions:
```
ansible [core 2.11.6]
python version = 3.6.8
jinja version = 2.11.3
```
### Issue Type
Bug Report
### Component Name
yum
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.6]
python version = 3.6.8
jinja version = 2.11.3
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
Nothing returned
```
### OS / Environment
CentOS (CentOS Linux release 7.9.2009 (Core))
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
ansible -i /tmp/inv all -m setup -a 'gather_subset=!all,!any,virtual,network,hardware'
### Expected Results
Expected : "ansible_processor_threads_per_core": 2,
Getting: ""ansible_processor_threads_per_core": 1,"
### Actual Results
```console
Details given as above
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/82244
|
https://github.com/ansible/ansible/pull/82261
|
fd2d0ecfb7d2fbadcfd41690aeb56067c8a04f82
|
e80507af32fad1ccaa62f8e6630f9095fe253004
| 2023-11-20T06:30:45Z |
python
| 2023-11-28T15:49:52Z |
lib/ansible/module_utils/facts/hardware/linux.py
|
cpu_facts['processor_cores'] = socket_values[0]
else:
cpu_facts['processor_cores'] = 1
core_values = list(cores.values())
if core_values:
cpu_facts['processor_threads_per_core'] = core_values[0] // cpu_facts['processor_cores']
else:
cpu_facts['processor_threads_per_core'] = 1 // cpu_facts['processor_cores']
cpu_facts['processor_vcpus'] = (cpu_facts['processor_threads_per_core'] *
cpu_facts['processor_count'] * cpu_facts['processor_cores'])
cpu_facts['processor_nproc'] = processor_occurrence
try:
cpu_facts['processor_nproc'] = len(
os.sched_getaffinity(0)
)
except AttributeError:
try:
cmd = get_bin_path('nproc')
except ValueError:
pass
else:
rc, out, _err = self.module.run_command(cmd)
if rc == 0:
cpu_facts['processor_nproc'] = int(out)
return cpu_facts
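# Worked example for issue #82244, with illustrative values: an AMD EPYC 9654P with SMT
# enabled reports 'cpu cores : 96' and 'siblings : 192' in /proc/cpuinfo, so the intended
# result of the division above is processor_threads_per_core = 192 // 96 = 2. The report
# shows the fact coming out as 1 on this hardware, i.e. the sockets/cores bookkeeping above
# picks up a value that breaks the division; PR #82261 adjusts that bookkeeping so the fact
# is reported as 2 again (the exact change is in the linked PR, not reproduced here).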
def get_dmi_facts(self):
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 82,244 |
ansible_processor_threads_per_core Ansible Facts incorrect against AMD Genoa based systems
|
### Summary
ansible_processor_threads_per_core shows incorrect information against AMD Genoa (AMD EPYC 9654P 96-Core Processor) based hosts.
Issue Description :
ansible_processor_threads_per_core returns 1 instead of 2, on a host where HT is enabled. command output of lscpu shows the right information.
Setup module output:
```
"ansible_processor_threads_per_core": 1,
```
lscpu output:
```
Thread(s) per core: 2
```
Ansible Versions:
```
ansible [core 2.11.6]
python version = 3.6.8
jinja version = 2.11.3
```
### Issue Type
Bug Report
### Component Name
yum
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.6]
python version = 3.6.8
jinja version = 2.11.3
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
Nothing returned
```
### OS / Environment
CentOS (CentOS Linux release 7.9.2009 (Core))
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
ansible -i /tmp/inv all -m setup -a 'gather_subset=!all,!any,virtual,network,hardware'
### Expected Results
Expected : "ansible_processor_threads_per_core": 2,
Getting: ""ansible_processor_threads_per_core": 1,"
### Actual Results
```console
Details given as above
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/82244
|
https://github.com/ansible/ansible/pull/82261
|
fd2d0ecfb7d2fbadcfd41690aeb56067c8a04f82
|
e80507af32fad1ccaa62f8e6630f9095fe253004
| 2023-11-20T06:30:45Z |
python
| 2023-11-28T15:49:52Z |
lib/ansible/module_utils/facts/hardware/linux.py
|
''' learn dmi facts from system
Try /sys first for dmi related facts.
If that is not available, fall back to dmidecode executable '''
dmi_facts = {}
if os.path.exists('/sys/devices/virtual/dmi/id/product_name'):
FORM_FACTOR = ["Unknown", "Other", "Unknown", "Desktop",
"Low Profile Desktop", "Pizza Box", "Mini Tower", "Tower",
"Portable", "Laptop", "Notebook", "Hand Held", "Docking Station",
"All In One", "Sub Notebook", "Space-saving", "Lunch Box",
"Main Server Chassis", "Expansion Chassis", "Sub Chassis",
"Bus Expansion Chassis", "Peripheral Chassis", "RAID Chassis",
"Rack Mount Chassis", "Sealed-case PC", "Multi-system",
"CompactPCI", "AdvancedTCA", "Blade", "Blade Enclosure",
"Tablet", "Convertible", "Detachable", "IoT Gateway",
"Embedded PC", "Mini PC", "Stick PC"]
DMI_DICT = {
'bios_date': '/sys/devices/virtual/dmi/id/bios_date',
'bios_vendor': '/sys/devices/virtual/dmi/id/bios_vendor',
'bios_version': '/sys/devices/virtual/dmi/id/bios_version',
'board_asset_tag': '/sys/devices/virtual/dmi/id/board_asset_tag',
'board_name': '/sys/devices/virtual/dmi/id/board_name',
'board_serial': '/sys/devices/virtual/dmi/id/board_serial',
'board_vendor': '/sys/devices/virtual/dmi/id/board_vendor',
'board_version': '/sys/devices/virtual/dmi/id/board_version',
'chassis_asset_tag': '/sys/devices/virtual/dmi/id/chassis_asset_tag',
'chassis_serial': '/sys/devices/virtual/dmi/id/chassis_serial',
'chassis_vendor': '/sys/devices/virtual/dmi/id/chassis_vendor',
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 82,244 |
ansible_processor_threads_per_core Ansible Facts incorrect against AMD Genoa based systems
|
### Summary
ansible_processor_threads_per_core shows incorrect information against AMD Genoa (AMD EPYC 9654P 96-Core Processor) based hosts.
Issue Description :
ansible_processor_threads_per_core returns 1 instead of 2, on a host where HT is enabled. command output of lscpu shows the right information.
Setup module output:
```
"ansible_processor_threads_per_core": 1,
```
lscpu output:
```
Thread(s) per core: 2
```
Ansible Versions:
```
ansible [core 2.11.6]
python version = 3.6.8
jinja version = 2.11.3
```
### Issue Type
Bug Report
### Component Name
yum
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.6]
python version = 3.6.8
jinja version = 2.11.3
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
Nothing returned
```
### OS / Environment
CentOS (CentOS Linux release 7.9.2009 (Core))
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
ansible -i /tmp/inv all -m setup -a 'gather_subset=!all,!any,virtual,network,hardware'
### Expected Results
Expected : "ansible_processor_threads_per_core": 2,
Getting: ""ansible_processor_threads_per_core": 1,"
### Actual Results
```console
Details given as above
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/82244
|
https://github.com/ansible/ansible/pull/82261
|
fd2d0ecfb7d2fbadcfd41690aeb56067c8a04f82
|
e80507af32fad1ccaa62f8e6630f9095fe253004
| 2023-11-20T06:30:45Z |
python
| 2023-11-28T15:49:52Z |
lib/ansible/module_utils/facts/hardware/linux.py
|
'chassis_version': '/sys/devices/virtual/dmi/id/chassis_version',
'form_factor': '/sys/devices/virtual/dmi/id/chassis_type',
'product_name': '/sys/devices/virtual/dmi/id/product_name',
'product_serial': '/sys/devices/virtual/dmi/id/product_serial',
'product_uuid': '/sys/devices/virtual/dmi/id/product_uuid',
'product_version': '/sys/devices/virtual/dmi/id/product_version',
'system_vendor': '/sys/devices/virtual/dmi/id/sys_vendor',
}
for (key, path) in DMI_DICT.items():
data = get_file_content(path)
if data is not None:
if key == 'form_factor':
try:
dmi_facts['form_factor'] = FORM_FACTOR[int(data)]
except IndexError:
dmi_facts['form_factor'] = 'unknown (%s)' % data
else:
dmi_facts[key] = data
else:
dmi_facts[key] = 'NA'
else:
dmi_bin = self.module.get_bin_path('dmidecode')
DMI_DICT = {
'bios_date': 'bios-release-date',
'bios_vendor': 'bios-vendor',
'bios_version': 'bios-version',
'board_asset_tag': 'baseboard-asset-tag',
'board_name': 'baseboard-product-name',
'board_serial': 'baseboard-serial-number',
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 82,244 |
ansible_processor_threads_per_core Ansible Facts incorrect against AMD Genoa based systems
|
### Summary
ansible_processor_threads_per_core shows incorrect information against AMD Genoa (AMD EPYC 9654P 96-Core Processor) based hosts.
Issue Description :
ansible_processor_threads_per_core returns 1 instead of 2, on a host where HT is enabled. command output of lscpu shows the right information.
Setup module output:
```
"ansible_processor_threads_per_core": 1,
```
lscpu output:
```
Thread(s) per core: 2
```
Ansible Versions:
```
ansible [core 2.11.6]
python version = 3.6.8
jinja version = 2.11.3
```
### Issue Type
Bug Report
### Component Name
yum
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.6]
python version = 3.6.8
jinja version = 2.11.3
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
Nothing returned
```
### OS / Environment
CentOS (CentOS Linux release 7.9.2009 (Core))
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
ansible -i /tmp/inv all -m setup -a 'gather_subset=!all,!any,virtual,network,hardware'
### Expected Results
Expected : "ansible_processor_threads_per_core": 2,
Getting: ""ansible_processor_threads_per_core": 1,"
### Actual Results
```console
Details given as above
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/82244
|
https://github.com/ansible/ansible/pull/82261
|
fd2d0ecfb7d2fbadcfd41690aeb56067c8a04f82
|
e80507af32fad1ccaa62f8e6630f9095fe253004
| 2023-11-20T06:30:45Z |
python
| 2023-11-28T15:49:52Z |
lib/ansible/module_utils/facts/hardware/linux.py
|
'board_vendor': 'baseboard-manufacturer',
'board_version': 'baseboard-version',
'chassis_asset_tag': 'chassis-asset-tag',
'chassis_serial': 'chassis-serial-number',
'chassis_vendor': 'chassis-manufacturer',
'chassis_version': 'chassis-version',
'form_factor': 'chassis-type',
'product_name': 'system-product-name',
'product_serial': 'system-serial-number',
'product_uuid': 'system-uuid',
'product_version': 'system-version',
'system_vendor': 'system-manufacturer',
}
for (k, v) in DMI_DICT.items():
if dmi_bin is not None:
(rc, out, err) = self.module.run_command('%s -s %s' % (dmi_bin, v))
if rc == 0:
thisvalue = ''.join([line for line in out.splitlines() if not line.startswith('#')])
try:
json.dumps(thisvalue)
except UnicodeDecodeError:
thisvalue = "NA"
dmi_facts[k] = thisvalue
else:
dmi_facts[k] = 'NA'
else:
dmi_facts[k] = 'NA'
return dmi_facts
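# Illustrative example: /sys/devices/virtual/dmi/id/chassis_type holds the numeric SMBIOS
# chassis type, so a file containing '23' maps to FORM_FACTOR[23] == 'Rack Mount Chassis',
# while an out-of-range value such as '99' falls back to 'unknown (99)'.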
def _run_lsblk(self, lsblk_path):
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 82,244 |
ansible_processor_threads_per_core Ansible Facts incorrect against AMD Genoa based systems
|
### Summary
ansible_processor_threads_per_core shows incorrect information against AMD Genoa (AMD EPYC 9654P 96-Core Processor) based hosts.
Issue Description :
ansible_processor_threads_per_core returns 1 instead of 2, on a host where HT is enabled. command output of lscpu shows the right information.
Setup module output:
```
"ansible_processor_threads_per_core": 1,
```
lscpu output:
```
Thread(s) per core: 2
```
Ansible Versions:
```
ansible [core 2.11.6]
python version = 3.6.8
jinja version = 2.11.3
```
### Issue Type
Bug Report
### Component Name
yum
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.6]
python version = 3.6.8
jinja version = 2.11.3
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
Nothing returned
```
### OS / Environment
CentOS (CentOS Linux release 7.9.2009 (Core))
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
ansible -i /tmp/inv all -m setup -a 'gather_subset=!all,!any,virtual,network,hardware'
### Expected Results
Expected : "ansible_processor_threads_per_core": 2,
Getting: ""ansible_processor_threads_per_core": 1,"
### Actual Results
```console
Details given as above
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/82244
|
https://github.com/ansible/ansible/pull/82261
|
fd2d0ecfb7d2fbadcfd41690aeb56067c8a04f82
|
e80507af32fad1ccaa62f8e6630f9095fe253004
| 2023-11-20T06:30:45Z |
python
| 2023-11-28T15:49:52Z |
lib/ansible/module_utils/facts/hardware/linux.py
|
args = ['--list', '--noheadings', '--paths', '--output', 'NAME,UUID', '--exclude', '2']
cmd = [lsblk_path] + args
rc, out, err = self.module.run_command(cmd)
return rc, out, err
def _lsblk_uuid(self):
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 82,244 |
ansible_processor_threads_per_core Ansible Facts incorrect against AMD Genoa based systems
|
### Summary
ansible_processor_threads_per_core shows incorrect information against AMD Genoa (AMD EPYC 9654P 96-Core Processor) based hosts.
Issue Description :
ansible_processor_threads_per_core returns 1 instead of 2, on a host where HT is enabled. command output of lscpu shows the right information.
Setup module output:
```
"ansible_processor_threads_per_core": 1,
```
lscpu output:
```
Thread(s) per core: 2
```
Ansible Versions:
```
ansible [core 2.11.6]
python version = 3.6.8
jinja version = 2.11.3
```
### Issue Type
Bug Report
### Component Name
yum
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.6]
python version = 3.6.8
jinja version = 2.11.3
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
Nothing returned
```
### OS / Environment
CentOS (CentOS Linux release 7.9.2009 (Core))
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
ansible -i /tmp/inv all -m setup -a 'gather_subset=!all,!any,virtual,network,hardware'
### Expected Results
Expected : "ansible_processor_threads_per_core": 2,
Getting: ""ansible_processor_threads_per_core": 1,"
### Actual Results
```console
Details given as above
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/82244
|
https://github.com/ansible/ansible/pull/82261
|
fd2d0ecfb7d2fbadcfd41690aeb56067c8a04f82
|
e80507af32fad1ccaa62f8e6630f9095fe253004
| 2023-11-20T06:30:45Z |
python
| 2023-11-28T15:49:52Z |
lib/ansible/module_utils/facts/hardware/linux.py
|
uuids = {}
lsblk_path = self.module.get_bin_path("lsblk")
if not lsblk_path:
return uuids
rc, out, err = self._run_lsblk(lsblk_path)
if rc != 0:
return uuids
for lsblk_line in out.splitlines():
if not lsblk_line:
continue
line = lsblk_line.strip()
fields = line.rsplit(None, 1)
if len(fields) < 2:
continue
device_name, uuid = fields[0].strip(), fields[1].strip()
if device_name in uuids:
continue
uuids[device_name] = uuid
return uuids
def _udevadm_uuid(self, device):
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 82,244 |
ansible_processor_threads_per_core Ansible Facts incorrect against AMD Genoa based systems
|
### Summary
ansible_processor_threads_per_core shows incorrect information against AMD Genoa (AMD EPYC 9654P 96-Core Processor) based hosts.
Issue Description :
ansible_processor_threads_per_core returns 1 instead of 2, on a host where HT is enabled. command output of lscpu shows the right information.
Setup module output:
```
"ansible_processor_threads_per_core": 1,
```
lscpu output:
```
Thread(s) per core: 2
```
Ansible Versions:
```
ansible [core 2.11.6]
python version = 3.6.8
jinja version = 2.11.3
```
### Issue Type
Bug Report
### Component Name
yum
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.6]
python version = 3.6.8
jinja version = 2.11.3
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
Nothing returned
```
### OS / Environment
CentOS (CentOS Linux release 7.9.2009 (Core))
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
ansible -i /tmp/inv all -m setup -a 'gather_subset=!all,!any,virtual,network,hardware'
### Expected Results
Expected : "ansible_processor_threads_per_core": 2,
Getting: ""ansible_processor_threads_per_core": 1,"
### Actual Results
```console
Details given as above
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/82244
|
https://github.com/ansible/ansible/pull/82261
|
fd2d0ecfb7d2fbadcfd41690aeb56067c8a04f82
|
e80507af32fad1ccaa62f8e6630f9095fe253004
| 2023-11-20T06:30:45Z |
python
| 2023-11-28T15:49:52Z |
lib/ansible/module_utils/facts/hardware/linux.py
|
uuid = 'N/A'
udevadm_path = self.module.get_bin_path('udevadm')
if not udevadm_path:
return uuid
cmd = [udevadm_path, 'info', '--query', 'property', '--name', device]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
return uuid
m = re.search('ID_FS_UUID=(.*)\n', out)
if m:
uuid = m.group(1)
return uuid
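# Illustrative sketch (assumed output format): 'udevadm info --query property --name /dev/sda1'
# prints KEY=VALUE lines, so a line such as 'ID_FS_UUID=...' is picked out by the regex above;
# 'N/A' is returned when udevadm is missing, fails, or prints no such line.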
def _run_findmnt(self, findmnt_path):
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 82,244 |
ansible_processor_threads_per_core Ansible Facts incorrect against AMD Genoa based systems
|
### Summary
ansible_processor_threads_per_core shows incorrect information against AMD Genoa (AMD EPYC 9654P 96-Core Processor) based hosts.
Issue Description :
ansible_processor_threads_per_core returns 1 instead of 2, on a host where HT is enabled. command output of lscpu shows the right information.
Setup module output:
```
"ansible_processor_threads_per_core": 1,
```
lscpu output:
```
Thread(s) per core: 2
```
Ansible Versions:
```
ansible [core 2.11.6]
python version = 3.6.8
jinja version = 2.11.3
```
### Issue Type
Bug Report
### Component Name
yum
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.6]
python version = 3.6.8
jinja version = 2.11.3
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
Nothing returned
```
### OS / Environment
CentOS (CentOS Linux release 7.9.2009 (Core))
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
ansible -i /tmp/inv all -m setup -a 'gather_subset=!all,!any,virtual,network,hardware'
### Expected Results
Expected : "ansible_processor_threads_per_core": 2,
Getting: ""ansible_processor_threads_per_core": 1,"
### Actual Results
```console
Details given as above
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/82244
|
https://github.com/ansible/ansible/pull/82261
|
fd2d0ecfb7d2fbadcfd41690aeb56067c8a04f82
|
e80507af32fad1ccaa62f8e6630f9095fe253004
| 2023-11-20T06:30:45Z |
python
| 2023-11-28T15:49:52Z |
lib/ansible/module_utils/facts/hardware/linux.py
|
args = ['--list', '--noheadings', '--notruncate']
cmd = [findmnt_path] + args
rc, out, err = self.module.run_command(cmd, errors='surrogate_then_replace')
return rc, out, err
def _find_bind_mounts(self):
bind_mounts = set()
findmnt_path = self.module.get_bin_path("findmnt")
if not findmnt_path:
return bind_mounts
rc, out, err = self._run_findmnt(findmnt_path)
if rc != 0:
return bind_mounts
for line in out.splitlines():
fields = line.split()
if len(fields) < 2:
continue
if self.BIND_MOUNT_RE.match(fields[1]):
bind_mounts.add(fields[0])
return bind_mounts
def _mtab_entries(self):
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 82,244 |
ansible_processor_threads_per_core Ansible Facts incorrect against AMD Genoa based systems
|
### Summary
ansible_processor_threads_per_core shows incorrect information against AMD Genoa (AMD EPYC 9654P 96-Core Processor) based hosts.
Issue Description :
ansible_processor_threads_per_core returns 1 instead of 2, on a host where HT is enabled. command output of lscpu shows the right information.
Setup module output:
```
"ansible_processor_threads_per_core": 1,
```
lscpu output:
```
Thread(s) per core: 2
```
Ansible Versions:
```
ansible [core 2.11.6]
python version = 3.6.8
jinja version = 2.11.3
```
### Issue Type
Bug Report
### Component Name
yum
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.6]
python version = 3.6.8
jinja version = 2.11.3
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
Nothing returned
```
### OS / Environment
CentOS (CentOS Linux release 7.9.2009 (Core))
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
ansible -i /tmp/inv all -m setup -a 'gather_subset=!all,!any,virtual,network,hardware'
### Expected Results
Expected : "ansible_processor_threads_per_core": 2,
Getting: ""ansible_processor_threads_per_core": 1,"
### Actual Results
```console
Details given as above
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/82244
|
https://github.com/ansible/ansible/pull/82261
|
fd2d0ecfb7d2fbadcfd41690aeb56067c8a04f82
|
e80507af32fad1ccaa62f8e6630f9095fe253004
| 2023-11-20T06:30:45Z |
python
| 2023-11-28T15:49:52Z |
lib/ansible/module_utils/facts/hardware/linux.py
|
mtab_file = '/etc/mtab'
if not os.path.exists(mtab_file):
mtab_file = '/proc/mounts'
mtab = get_file_content(mtab_file, '')
mtab_entries = []
for line in mtab.splitlines():
fields = line.split()
if len(fields) < 4:
continue
mtab_entries.append(fields)
return mtab_entries
@staticmethod
def _replace_octal_escapes_helper(match):
return chr(int(match.group()[1:], 8))
def _replace_octal_escapes(self, value):
return self.OCTAL_ESCAPE_RE.sub(self._replace_octal_escapes_helper, value)
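# Worked example (illustrative): /etc/mtab escapes special characters octally, so a mount
# point '/mnt/my disk' shows up as '/mnt/my\040disk'; the regex above matches '\040' and the
# helper turns it back into chr(0o40) == ' ', restoring '/mnt/my disk'.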
def get_mount_info(self, mount, device, uuids):
mount_size = get_mount_size(mount)
uuid = uuids.get(device, self._udevadm_uuid(device))
return mount_size, uuid
def get_mount_facts(self):
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 82,244 |
ansible_processor_threads_per_core Ansible Facts incorrect against AMD Genoa based systems
|
|
https://github.com/ansible/ansible/issues/82244
|
https://github.com/ansible/ansible/pull/82261
|
fd2d0ecfb7d2fbadcfd41690aeb56067c8a04f82
|
e80507af32fad1ccaa62f8e6630f9095fe253004
| 2023-11-20T06:30:45Z |
python
| 2023-11-28T15:49:52Z |
lib/ansible/module_utils/facts/hardware/linux.py
|
mounts = []
bind_mounts = self._find_bind_mounts()
uuids = self._lsblk_uuid()
mtab_entries = self._mtab_entries()
results = {}
pool = ThreadPool(processes=min(len(mtab_entries), cpu_count()))
maxtime = timeout.GATHER_TIMEOUT or timeout.DEFAULT_GATHER_TIMEOUT
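# Each mount is resolved in a worker thread with its own deadline so a hung
# filesystem (e.g. an unreachable NFS server) cannot stall fact gathering.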
for fields in mtab_entries:
fields = [self._replace_octal_escapes(field) for field in fields]
device, mount, fstype, options = fields[0], fields[1], fields[2], fields[3]
dump, passno = int(fields[4]), int(fields[5])
if not device.startswith(('/', '\\')) and ':/' not in device or fstype == 'none':
continue
mount_info = {'mount': mount,
'device': device,
'fstype': fstype,
'options': options,
'dump': dump,
'passno': passno}
if mount in bind_mounts:
if not self.MTAB_BIND_MOUNT_RE.match(options):
mount_info['options'] += ",bind"
results[mount] = {'info': mount_info,
'extra': pool.apply_async(self.get_mount_info, (mount, device, uuids)),
'timelimit': time.time() + maxtime}
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 82,244 |
ansible_processor_threads_per_core Ansible Facts incorrect against AMD Genoa based systems
|
|
https://github.com/ansible/ansible/issues/82244
|
https://github.com/ansible/ansible/pull/82261
|
fd2d0ecfb7d2fbadcfd41690aeb56067c8a04f82
|
e80507af32fad1ccaa62f8e6630f9095fe253004
| 2023-11-20T06:30:45Z |
python
| 2023-11-28T15:49:52Z |
lib/ansible/module_utils/facts/hardware/linux.py
|
pool.close()
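# Poll the async lookups: each mount either completes, fails with a note, or
# exceeds its time limit and is reported without the extra size/uuid details.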
while results:
for mount in list(results):
done = False
res = results[mount]['extra']
try:
if res.ready():
done = True
if res.successful():
mount_size, uuid = res.get()
if mount_size:
results[mount]['info'].update(mount_size)
results[mount]['info']['uuid'] = uuid or 'N/A'
else:
results[mount]['info']['note'] = 'Could not get extra information: %s.' % (to_text(res.get()))
elif time.time() > results[mount]['timelimit']:
done = True
self.module.warn("Timeout exceeded when getting mount info for %s" % mount)
results[mount]['info']['note'] = 'Could not get extra information due to timeout'
except Exception as e:
import traceback
done = True
results[mount]['info'] = 'N/A'
self.module.warn("Error prevented getting extra info for mount %s: [%s] %s." % (mount, type(e), to_text(e)))
self.module.debug(traceback.format_exc())
if done:
mounts.append(results[mount]['info'])
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 82,244 |
ansible_processor_threads_per_core Ansible Facts incorrect against AMD Genoa based systems
|
|
https://github.com/ansible/ansible/issues/82244
|
https://github.com/ansible/ansible/pull/82261
|
fd2d0ecfb7d2fbadcfd41690aeb56067c8a04f82
|
e80507af32fad1ccaa62f8e6630f9095fe253004
| 2023-11-20T06:30:45Z |
python
| 2023-11-28T15:49:52Z |
lib/ansible/module_utils/facts/hardware/linux.py
|
del results[mount]
time.sleep(0.1)
return {'mounts': mounts}
def get_device_links(self, link_dir):
if not os.path.exists(link_dir):
return {}
try:
retval = collections.defaultdict(set)
for entry in os.listdir(link_dir):
try:
target = os.path.basename(os.readlink(os.path.join(link_dir, entry)))
retval[target].add(entry)
except OSError:
continue
return dict((k, list(sorted(v))) for (k, v) in iteritems(retval))
except OSError:
return {}
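# /sys/block/<holder>/slaves/<component> links a stacked device (e.g. dm-0) to
# the devices it is built from; the loop below inverts that, mapping each
# component device to the holders stacked on top of it.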
def get_all_device_owners(self):
try:
retval = collections.defaultdict(set)
for path in glob.glob('/sys/block/*/slaves/*'):
elements = path.split('/')
device = elements[3]
target = elements[5]
retval[target].add(device)
return dict((k, list(sorted(v))) for (k, v) in iteritems(retval))
except OSError:
return {}
def get_all_device_links(self):
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 82,244 |
ansible_processor_threads_per_core Ansible Facts incorrect against AMD Genoa based systems
|
|
https://github.com/ansible/ansible/issues/82244
|
https://github.com/ansible/ansible/pull/82261
|
fd2d0ecfb7d2fbadcfd41690aeb56067c8a04f82
|
e80507af32fad1ccaa62f8e6630f9095fe253004
| 2023-11-20T06:30:45Z |
python
| 2023-11-28T15:49:52Z |
lib/ansible/module_utils/facts/hardware/linux.py
|
return {
'ids': self.get_device_links('/dev/disk/by-id'),
'uuids': self.get_device_links('/dev/disk/by-uuid'),
'labels': self.get_device_links('/dev/disk/by-label'),
'masters': self.get_all_device_owners(),
}
def get_holders(self, block_dev_dict, sysdir):
block_dev_dict['holders'] = []
if os.path.isdir(sysdir + "/holders"):
for folder in os.listdir(sysdir + "/holders"):
if not folder.startswith("dm-"):
continue
name = get_file_content(sysdir + "/holders/" + folder + "/dm/name")
if name:
block_dev_dict['holders'].append(name)
else:
block_dev_dict['holders'].append(folder)
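# Read the drive serial number via sg_inq when the utility is installed;
# otherwise the sysfs 'serial' attribute is used instead.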
def _get_sg_inq_serial(self, sg_inq, block):
device = "/dev/%s" % (block)
rc, drivedata, err = self.module.run_command([sg_inq, device])
if rc == 0:
serial = re.search(r"(?:Unit serial|Serial) number:\s+(\w+)", drivedata)
if serial:
return serial.group(1)
def get_device_facts(self):
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 82,244 |
ansible_processor_threads_per_core Ansible Facts incorrect against AMD Genoa based systems
|
|
https://github.com/ansible/ansible/issues/82244
|
https://github.com/ansible/ansible/pull/82261
|
fd2d0ecfb7d2fbadcfd41690aeb56067c8a04f82
|
e80507af32fad1ccaa62f8e6630f9095fe253004
| 2023-11-20T06:30:45Z |
python
| 2023-11-28T15:49:52Z |
lib/ansible/module_utils/facts/hardware/linux.py
|
device_facts = {}
device_facts['devices'] = {}
lspci = self.module.get_bin_path('lspci')
if lspci:
rc, pcidata, err = self.module.run_command([lspci, '-D'], errors='surrogate_then_replace')
else:
pcidata = None
try:
block_devs = os.listdir("/sys/block")
except OSError:
return device_facts
devs_wwn = {}
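# /dev/disk/by-id/wwn-* symlinks carry each disk's World Wide Name; map the
# kernel device name a link points at to the WWN taken from the link name.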
try:
devs_by_id = os.listdir("/dev/disk/by-id")
except OSError:
pass
else:
for link_name in devs_by_id:
if link_name.startswith("wwn-"):
try:
wwn_link = os.readlink(os.path.join("/dev/disk/by-id", link_name))
except OSError:
continue
devs_wwn[os.path.basename(wwn_link)] = link_name[4:]
links = self.get_all_device_links()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 82,244 |
ansible_processor_threads_per_core Ansible Facts incorrect against AMD Genoa based systems
|
|
https://github.com/ansible/ansible/issues/82244
|
https://github.com/ansible/ansible/pull/82261
|
fd2d0ecfb7d2fbadcfd41690aeb56067c8a04f82
|
e80507af32fad1ccaa62f8e6630f9095fe253004
| 2023-11-20T06:30:45Z |
python
| 2023-11-28T15:49:52Z |
lib/ansible/module_utils/facts/hardware/linux.py
|
device_facts['device_links'] = links
for block in block_devs:
virtual = 1
sysfs_no_links = 0
try:
path = os.readlink(os.path.join("/sys/block/", block))
except OSError:
e = sys.exc_info()[1]
if e.errno == errno.EINVAL:
path = block
sysfs_no_links = 1
else:
continue
sysdir = os.path.join("/sys/block", path)
if sysfs_no_links == 1:
for folder in os.listdir(sysdir):
if "device" in folder:
virtual = 0
break
d = {}
d['virtual'] = virtual
d['links'] = {}
for (link_type, link_values) in iteritems(links):
d['links'][link_type] = link_values.get(block, [])
diskname = os.path.basename(sysdir)
for key in ['vendor', 'model', 'sas_address', 'sas_device_handle']:
d[key] = get_file_content(sysdir + "/device/" + key)
sg_inq = self.module.get_bin_path('sg_inq')
serial_path = "/sys/block/%s/device/serial" % (block)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 82,244 |
ansible_processor_threads_per_core Ansible Facts incorrect against AMD Genoa based systems
|
|
https://github.com/ansible/ansible/issues/82244
|
https://github.com/ansible/ansible/pull/82261
|
fd2d0ecfb7d2fbadcfd41690aeb56067c8a04f82
|
e80507af32fad1ccaa62f8e6630f9095fe253004
| 2023-11-20T06:30:45Z |
python
| 2023-11-28T15:49:52Z |
lib/ansible/module_utils/facts/hardware/linux.py
|
if sg_inq:
serial = self._get_sg_inq_serial(sg_inq, block)
if serial:
d['serial'] = serial
else:
serial = get_file_content(serial_path)
if serial:
d['serial'] = serial
for key, test in [('removable', '/removable'),
('support_discard', '/queue/discard_granularity'),
]:
d[key] = get_file_content(sysdir + test)
if diskname in devs_wwn:
d['wwn'] = devs_wwn[diskname]
d['partitions'] = {}
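# Partitions show up under the disk's sysfs directory as '<disk><n>' or
# '<disk>p<n>' (nvme/mmc naming); the regex below matches both forms.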
for folder in os.listdir(sysdir):
m = re.search("(" + diskname + r"[p]?\d+)", folder)
if m:
part = {}
partname = m.group(1)
part_sysdir = sysdir + "/" + partname
part['links'] = {}
for (link_type, link_values) in iteritems(links):
part['links'][link_type] = link_values.get(partname, [])
part['start'] = get_file_content(part_sysdir + "/start", 0)
part['sectors'] = get_file_content(part_sysdir + "/size", 0)
part['sectorsize'] = get_file_content(part_sysdir + "/queue/logical_block_size")
if not part['sectorsize']:
part['sectorsize'] = get_file_content(part_sysdir + "/queue/hw_sector_size", 512)
part['size'] = bytes_to_human((float(part['sectors']) * 512.0))
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 82,244 |
ansible_processor_threads_per_core Ansible Facts incorrect against AMD Genoa based systems
|
|
https://github.com/ansible/ansible/issues/82244
|
https://github.com/ansible/ansible/pull/82261
|
fd2d0ecfb7d2fbadcfd41690aeb56067c8a04f82
|
e80507af32fad1ccaa62f8e6630f9095fe253004
| 2023-11-20T06:30:45Z |
python
| 2023-11-28T15:49:52Z |
lib/ansible/module_utils/facts/hardware/linux.py
|
part['uuid'] = get_partition_uuid(partname)
self.get_holders(part, part_sysdir)
d['partitions'][partname] = part
d['rotational'] = get_file_content(sysdir + "/queue/rotational")
d['scheduler_mode'] = ""
scheduler = get_file_content(sysdir + "/queue/scheduler")
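# The active I/O scheduler is the bracketed entry, e.g. 'noop deadline [cfq]'.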
if scheduler is not None:
m = re.match(r".*?(\[(.*)\])", scheduler)
if m:
d['scheduler_mode'] = m.group(2)
d['sectors'] = get_file_content(sysdir + "/size")
if not d['sectors']:
d['sectors'] = 0
d['sectorsize'] = get_file_content(sysdir + "/queue/logical_block_size")
if not d['sectorsize']:
d['sectorsize'] = get_file_content(sysdir + "/queue/hw_sector_size", 512)
d['size'] = bytes_to_human(float(d['sectors']) * 512.0)
d['host'] = ""
m = re.match(r".+/([a-f0-9]{4}:[a-f0-9]{2}:[0|1][a-f0-9]\.[0-7])/", sysdir)
if m and pcidata:
pciid = m.group(1)
did = re.escape(pciid)
m = re.search("^" + did + r"\s(.*)$", pcidata, re.MULTILINE)
if m:
d['host'] = m.group(1)
self.get_holders(d, sysdir)
device_facts['devices'][diskname] = d
return device_facts
def get_uptime_facts(self):
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 82,244 |
ansible_processor_threads_per_core Ansible Facts incorrect against AMD Genoa based systems
|
|
https://github.com/ansible/ansible/issues/82244
|
https://github.com/ansible/ansible/pull/82261
|
fd2d0ecfb7d2fbadcfd41690aeb56067c8a04f82
|
e80507af32fad1ccaa62f8e6630f9095fe253004
| 2023-11-20T06:30:45Z |
python
| 2023-11-28T15:49:52Z |
lib/ansible/module_utils/facts/hardware/linux.py
|
uptime_facts = {}
uptime_file_content = get_file_content('/proc/uptime')
if uptime_file_content:
uptime_seconds_string = uptime_file_content.split(' ')[0]
uptime_facts['uptime_seconds'] = int(float(uptime_seconds_string))
return uptime_facts
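# Translate a raw /dev/dm-N node into its /dev/mapper/<name> alias via
# 'dmsetup info', so the LVM facts below are keyed by the friendlier device name.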
def _find_mapper_device_name(self, dm_device):
dm_prefix = '/dev/dm-'
mapper_device = dm_device
if dm_device.startswith(dm_prefix):
dmsetup_cmd = self.module.get_bin_path('dmsetup', True)
mapper_prefix = '/dev/mapper/'
rc, dm_name, err = self.module.run_command("%s info -C --noheadings -o name %s" % (dmsetup_cmd, dm_device))
if rc == 0:
mapper_device = mapper_prefix + dm_name.rstrip()
return mapper_device
def get_lvm_facts(self):
""" Get LVM Facts if running as root and lvm utils are available """
lvm_facts = {'lvm': 'N/A'}
if os.getuid() == 0 and self.module.get_bin_path('vgs'):
lvm_util_options = '--noheadings --nosuffix --units g --separator ,'
vgs_path = self.module.get_bin_path('vgs')
vgs = {}
if vgs_path:
rc, vg_lines, err = self.module.run_command('%s %s' % (vgs_path, lvm_util_options))
for vg_line in vg_lines.splitlines():
items = vg_line.strip().split(',')
vgs[items[0]] = {'size_g': items[-2],
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 82,244 |
ansible_processor_threads_per_core Ansible Facts incorrect against AMD Genoa based systems
|
|
https://github.com/ansible/ansible/issues/82244
|
https://github.com/ansible/ansible/pull/82261
|
fd2d0ecfb7d2fbadcfd41690aeb56067c8a04f82
|
e80507af32fad1ccaa62f8e6630f9095fe253004
| 2023-11-20T06:30:45Z |
python
| 2023-11-28T15:49:52Z |
lib/ansible/module_utils/facts/hardware/linux.py
|
'free_g': items[-1],
'num_lvs': items[2],
'num_pvs': items[1]}
lvs_path = self.module.get_bin_path('lvs')
lvs = {}
if lvs_path:
rc, lv_lines, err = self.module.run_command('%s %s' % (lvs_path, lvm_util_options))
for lv_line in lv_lines.splitlines():
items = lv_line.strip().split(',')
lvs[items[0]] = {'size_g': items[3], 'vg': items[1]}
pvs_path = self.module.get_bin_path('pvs')
pvs = {}
if pvs_path:
rc, pv_lines, err = self.module.run_command('%s %s' % (pvs_path, lvm_util_options))
for pv_line in pv_lines.splitlines():
items = pv_line.strip().split(',')
pvs[self._find_mapper_device_name(items[0])] = {
'size_g': items[4],
'free_g': items[5],
'vg': items[1]}
lvm_facts['lvm'] = {'lvs': lvs, 'vgs': vgs, 'pvs': pvs}
return lvm_facts
class LinuxHardwareCollector(HardwareCollector):
_platform = 'Linux'
_fact_class = LinuxHardware
required_facts = set(['platform'])
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,710 |
Configurable sampling/transfer of control-side task context metadata to targets
|
### Summary
We're often asked how to include arbitrary control-side contextual metadata with task invocations, and to include that metadata in target-side task log messages. e.g.: sending an AWX/Controller Job ID to the target hosts on each module invocation that occurred from that job, and logging it in the module-generated syslog/Windows Application Event Log entries for future correlation with the owning job.
I've not seen any consensus on precisely *which* data to include; one person's "critical forensic correlation data" is another's "unacceptable disclosure of sensitive execution detail". Seems like we'd need a generic facility to specify environment vars and/or hostvars to sample on the control host to be included with task invocations (under a reserved dictionary arg), and adjust the module logging APIs to include them.
My initial thought is to define a new core config element (defaulting to none) that allows the user to define a templated expression that would be rendered as part of each task's templating under a host context. The rendered result would be sent to modules as a new reserved internal module var. The module logging APIs would then include this value verbatim, when present. Other module code would also have access to the value, which could be used for anything. The new config would be settable either via ansible.cfg or an envvar, making it easier for AWX/Controller to later provide a mechanism to configure it for jobs using core versions that support it, while older versions would just silently ignore it.
Maybe something like:
```
ANSIBLE_ADDITIONAL_TASK_CONTEXT='{{awx_job_id}}'
```
When this config is non-empty, the defined template would be rendered for each task/host invocation, and its result included in a new `_ansible_additional_task_context` reserved module var. The resulting value, as with any Ansible template expression, could be of arbitrary complexity (e.g., returning a data structure instead of just a scalar). The module logging APIs would include the serialized value verbatim in log messages when it is present, e.g. "ansible_additional_task_context=(whatever the value was)".
### Issue Type
Feature Idea
### Component Name
module invocation and logging
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
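To make the idea concrete, here is a hypothetical sketch of the target-side half. None of these function or parameter names exist in ansible-core; the reserved variable name is taken verbatim from the proposal above, and everything else is invented for illustration:
```python
# Hypothetical sketch only; not an existing ansible-core API.
import json
import syslog

def log_invocation(msg, module_params):
    # The proposal reserves '_ansible_additional_task_context' for the value the
    # control node rendered from ANSIBLE_ADDITIONAL_TASK_CONTEXT.
    context = module_params.get("_ansible_additional_task_context")
    if context is not None:
        msg = "%s ansible_additional_task_context=%s" % (msg, json.dumps(context))
    syslog.openlog("ansible-hypothetical-module", 0, syslog.LOG_USER)
    syslog.syslog(syslog.LOG_INFO, msg)

# Example: log_invocation("Invoked with state=present",
#                         {"_ansible_additional_task_context": {"awx_job_id": 1234}})
```
A controller job could then be located again by searching the target's syslog for the serialized context value.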
|
https://github.com/ansible/ansible/issues/81710
|
https://github.com/ansible/ansible/pull/81711
|
4208bdbbcd994251579409ad533b40c9b0543550
|
1dd0d6fad70d7d4f423dac41822da65ff9ec95ef
| 2023-09-18T16:35:01Z |
python
| 2023-11-30T18:12:55Z |
lib/ansible/module_utils/basic.py
|
from __future__ import annotations
import json
import sys
_PY_MIN = (3, 7)
if sys.version_info < _PY_MIN:
print(json.dumps(dict(
failed=True,
msg=f"ansible-core requires a minimum of Python version {'.'.join(map(str, _PY_MIN))}. Current version: {''.join(sys.version.splitlines())}",
)))
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,710 |
Configurable sampling/transfer of control-side task context metadata to targets
|
|
https://github.com/ansible/ansible/issues/81710
|
https://github.com/ansible/ansible/pull/81711
|
4208bdbbcd994251579409ad533b40c9b0543550
|
1dd0d6fad70d7d4f423dac41822da65ff9ec95ef
| 2023-09-18T16:35:01Z |
python
| 2023-11-30T18:12:55Z |
lib/ansible/module_utils/basic.py
|
sys.exit(1)
import __main__
import atexit
import errno
import datetime
import grp
import fcntl
import locale
import os
import pwd
import platform
import re
import select
import selectors
import shlex
import shutil
import signal
import stat
import subprocess
import tempfile
import time
import traceback
import types
from itertools import chain, repeat
try:
import syslog
HAS_SYSLOG = True
except ImportError:
HAS_SYSLOG = False
try:
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,710 |
Configurable sampling/transfer of control-side task context metadata to targets
|
|
https://github.com/ansible/ansible/issues/81710
|
https://github.com/ansible/ansible/pull/81711
|
4208bdbbcd994251579409ad533b40c9b0543550
|
1dd0d6fad70d7d4f423dac41822da65ff9ec95ef
| 2023-09-18T16:35:01Z |
python
| 2023-11-30T18:12:55Z |
lib/ansible/module_utils/basic.py
|
from systemd import journal, daemon as systemd_daemon
has_journal = hasattr(journal, 'sendv') and systemd_daemon.booted()
except (ImportError, AttributeError):
has_journal = False
HAVE_SELINUX = False
try:
from ansible.module_utils.compat import selinux
HAVE_SELINUX = True
except ImportError:
pass
NoneType = type(None)
from ._text import to_native, to_bytes, to_text
from ansible.module_utils.common.text.converters import (
jsonify,
container_to_bytes as json_dict_unicode_to_bytes,
container_to_text as json_dict_bytes_to_unicode,
)
from ansible.module_utils.common.arg_spec import ModuleArgumentSpecValidator
from ansible.module_utils.common.text.formatters import (
lenient_lowercase,
bytes_to_human,
human_to_bytes,
SIZE_RANGES,
)
import hashlib
def _get_available_hash_algorithms():
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,710 |
Configurable sampling/transfer of control-side task context metadata to targets
|
|
https://github.com/ansible/ansible/issues/81710
|
https://github.com/ansible/ansible/pull/81711
|
4208bdbbcd994251579409ad533b40c9b0543550
|
1dd0d6fad70d7d4f423dac41822da65ff9ec95ef
| 2023-09-18T16:35:01Z |
python
| 2023-11-30T18:12:55Z |
lib/ansible/module_utils/basic.py
|
"""Return a dictionary of available hash function names and their associated function."""
try:
algorithm_names = hashlib.algorithms_available
except AttributeError:
algorithm_names = set(hashlib.algorithms)
algorithms = {}
for algorithm_name in algorithm_names:
algorithm_func = getattr(hashlib, algorithm_name, None)
if algorithm_func:
try:
algorithm_func()
except Exception:
pass
else:
algorithms[algorithm_name] = algorithm_func
return algorithms
AVAILABLE_HASH_ALGORITHMS = _get_available_hash_algorithms()
from ansible.module_utils.six.moves.collections_abc import (
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,710 |
Configurable sampling/transfer of control-side task context metadata to targets
|
|
https://github.com/ansible/ansible/issues/81710
|
https://github.com/ansible/ansible/pull/81711
|
4208bdbbcd994251579409ad533b40c9b0543550
|
1dd0d6fad70d7d4f423dac41822da65ff9ec95ef
| 2023-09-18T16:35:01Z |
python
| 2023-11-30T18:12:55Z |
lib/ansible/module_utils/basic.py
|
KeysView,
Mapping, MutableMapping,
Sequence, MutableSequence,
Set, MutableSet,
)
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.file import (
_PERM_BITS as PERM_BITS,
_EXEC_PERM_BITS as EXEC_PERM_BITS,
_DEFAULT_PERM as DEFAULT_PERM,
is_executable,
format_attributes,
get_flags_from_attributes,
FILE_ATTRIBUTES,
)
from ansible.module_utils.common.sys_info import (
get_distribution,
get_distribution_version,
get_platform_subclass,
)
from ansible.module_utils.common.parameters import (
env_fallback,
remove_values,
sanitize_keys,
DEFAULT_TYPE_VALIDATORS,
PASS_VARS,
PASS_BOOLS,
)
from ansible.module_utils.errors import AnsibleFallbackNotFound, AnsibleValidationErrorMultiple, UnsupportedError
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,710 |
Configurable sampling/transfer of control-side task context metadata to targets
|
|
https://github.com/ansible/ansible/issues/81710
|
https://github.com/ansible/ansible/pull/81711
|
4208bdbbcd994251579409ad533b40c9b0543550
|
1dd0d6fad70d7d4f423dac41822da65ff9ec95ef
| 2023-09-18T16:35:01Z |
python
| 2023-11-30T18:12:55Z |
lib/ansible/module_utils/basic.py
|
from ansible.module_utils.six import (
PY2,
PY3,
b,
binary_type,
integer_types,
iteritems,
string_types,
text_type,
)
from ansible.module_utils.six.moves import map, reduce, shlex_quote
from ansible.module_utils.common.validation import (
check_missing_parameters,
safe_eval,
)
from ansible.module_utils.common._utils import get_all_subclasses as _get_all_subclasses
from ansible.module_utils.parsing.convert_bool import BOOLEANS, BOOLEANS_FALSE, BOOLEANS_TRUE, boolean
from ansible.module_utils.common.warnings import (
deprecate,
get_deprecation_messages,
get_warning_messages,
warn,
)
SEQUENCETYPE = frozenset, KeysView, Sequence
PASSWORD_MATCH = re.compile(r'^(?:.+[-_\s])?pass(?:[-_\s]?(?:word|phrase|wrd|wd)?)(?:[-_\s].+)?$', re.I)
imap = map
try:
unicode
except NameError:
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,710 |
Configurable sampling/transfer of control-side task context metadata to targets
|
|
https://github.com/ansible/ansible/issues/81710
|
https://github.com/ansible/ansible/pull/81711
|
4208bdbbcd994251579409ad533b40c9b0543550
|
1dd0d6fad70d7d4f423dac41822da65ff9ec95ef
| 2023-09-18T16:35:01Z |
python
| 2023-11-30T18:12:55Z |
lib/ansible/module_utils/basic.py
|
unicode = text_type
try:
basestring
except NameError:
basestring = string_types
_ANSIBLE_ARGS = None
FILE_COMMON_ARGUMENTS = dict(
mode=dict(type='raw'),
owner=dict(type='str'),
group=dict(type='str'),
seuser=dict(type='str'),
serole=dict(type='str'),
selevel=dict(type='str'),
setype=dict(type='str'),
attributes=dict(type='str', aliases=['attr']),
unsafe_writes=dict(type='bool', default=False, fallback=(env_fallback, ['ANSIBLE_UNSAFE_WRITES'])),
)
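# Parameter names matching this pattern (e.g. 'password', '--passwd') are
# treated as sensitive when deciding what may appear in module logs.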
PASSWD_ARG_RE = re.compile(r'^[-]{0,2}pass[-]?(word|wd)?')
MODE_OPERATOR_RE = re.compile(r'[+=-]')
USERS_RE = re.compile(r'^[ugo]+$')
PERMS_RE = re.compile(r'^[rwxXstugo]*$')
#
#
def get_platform():
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,710 |
Configurable sampling/transfer of control-side task context metadata to targets
|
|
https://github.com/ansible/ansible/issues/81710
|
https://github.com/ansible/ansible/pull/81711
|
4208bdbbcd994251579409ad533b40c9b0543550
|
1dd0d6fad70d7d4f423dac41822da65ff9ec95ef
| 2023-09-18T16:35:01Z |
python
| 2023-11-30T18:12:55Z |
lib/ansible/module_utils/basic.py
|
'''
**Deprecated** Use :py:func:`platform.system` directly.
:returns: Name of the platform the module is running on in a native string
Returns a native string that labels the platform ("Linux", "Solaris", etc). Currently, this is
the result of calling :py:func:`platform.system`.
'''
return platform.system()
#
#
def load_platform_subclass(cls, *args, **kwargs):
"""**Deprecated**: Use ansible.module_utils.common.sys_info.get_platform_subclass instead"""
platform_cls = get_platform_subclass(cls)
return super(cls, platform_cls).__new__(platform_cls)
def get_all_subclasses(cls):
"""**Deprecated**: Use ansible.module_utils.common._utils.get_all_subclasses instead"""
return list(_get_all_subclasses(cls))
def heuristic_log_sanitize(data, no_log_values=None):
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,710 |
Configurable sampling/transfer of control-side task context metadata to targets
|
|
https://github.com/ansible/ansible/issues/81710
|
https://github.com/ansible/ansible/pull/81711
|
4208bdbbcd994251579409ad533b40c9b0543550
|
1dd0d6fad70d7d4f423dac41822da65ff9ec95ef
| 2023-09-18T16:35:01Z |
python
| 2023-11-30T18:12:55Z |
lib/ansible/module_utils/basic.py
|
''' Remove strings that look like passwords from log messages '''
data = to_native(data)
output = []
begin = len(data)
prev_begin = begin
sep = 1
while sep:
try:
end = data.rindex('@', 0, begin)
except ValueError:
output.insert(0, data[0:begin])
break
sep = None
sep_search_end = end
while not sep:
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,710 |
Configurable sampling/transfer of control-side task context metadata to targets
|
### Summary
We're often asked how to include arbitrary control-side contextual metadata with task invocations, and to include that metadata in target-side task log messages. e.g.: sending an AWX/Controller Job ID to the target hosts on each module invocation that occurred from that job, and logging it in the module-generated syslog/Windows Application Event Log entries for future correlation with the owning job.
I've not seen any consensus on precisely *which* data to include; one person's "critical forensic correlation data" is another's "unacceptable disclosure of sensitive execution detail". Seems like we'd need a generic facility to specify environment vars and/or hostvars to sample on the control host to be included with task invocations (under a reserved dictionary arg), and adjust the module logging APIs to include them.
My initial thought is to define a new core config element (defaulting to none) that allows the user to define a templated expression that would be rendered as part of each task's templating under a host context. The rendered result would be sent to modules as a new reserved internal module var. The module logging APIs would then include this value verbatim, when present. Other module code would also have access to the value, which could be used for anything. The new config would be settable either via ansible.cfg or an envvar, making it easier for AWX/Controller to later provide a mechanism to configure it for jobs using core versions that support it, while older versions would just silently ignore it.
Maybe something like:
```
ANSIBLE_ADDITIONAL_TASK_CONTEXT='{{awx_job_id}}'
```
When this config is non-empty, the defined template would be rendered for each task/host invocation, and its result included in a new `_ansible_additional_task_context` reserved module var. The resulting value, as with any Ansible template expression, could be of arbitrary complexity (eg, returning a data structure instead of just a scalar). The module logging APIs would include the serialized value verbatim in log messages when it is present, eg "ansible_additional_task_context=(whatever the value was)".
### Issue Type
Feature Idea
### Component Name
module invocation and logging
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81710
|
https://github.com/ansible/ansible/pull/81711
|
4208bdbbcd994251579409ad533b40c9b0543550
|
1dd0d6fad70d7d4f423dac41822da65ff9ec95ef
| 2023-09-18T16:35:01Z |
python
| 2023-11-30T18:12:55Z |
lib/ansible/module_utils/basic.py
|
try:
begin = data.rindex('://', 0, sep_search_end)
except ValueError:
begin = 0
try:
sep = data.index(':', begin + 3, end)
except ValueError:
if begin == 0:
output.insert(0, data[0:prev_begin])
break
sep_search_end = begin
continue
if sep:
output.insert(0, data[end:prev_begin])
output.insert(0, '********')
output.insert(0, data[begin:sep + 1])
prev_begin = begin
output = ''.join(output)
if no_log_values:
output = remove_values(output, no_log_values)
return output
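# Illustrative behaviour (sketch): credentials embedded in URL-like strings are masked, e.g.
#     heuristic_log_sanitize('https://deploy:hunter2@example.com/repo.git')
# returns 'https://deploy:********@example.com/repo.git'.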
def _load_params():
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,710 |
Configurable sampling/transfer of control-side task context metadata to targets
|
### Summary
We're often asked how to include arbitrary control-side contextual metadata with task invocations, and to include that metadata in target-side task log messages. e.g.: sending an AWX/Controller Job ID to the target hosts on each module invocation that occurred from that job, and logging it in the module-generated syslog/Windows Application Event Log entries for future correlation with the owning job.
I've not seen any consensus on precisely *which* data to include; one person's "critical forensic correlation data" is another's "unacceptable disclosure of sensitive execution detail". Seems like we'd need a generic facility to specify environment vars and/or hostvars to sample on the control host to be included with task invocations (under a reserved dictionary arg), and adjust the module logging APIs to include them.
My initial thought is to define a new core config element (defaulting to none) that allows the user to define a templated expression that would be rendered as part of each task's templating under a host context. The rendered result would be sent to modules as a new reserved internal module var. The module logging APIs would then include this value verbatim, when present. Other module code would also have access to the value, which could be used for anything. The new config would be settable either via ansible.cfg or an envvar, making it easier for AWX/Controller to later provide a mechanism to configure it for jobs using core versions that support it, while older versions would just silently ignore it.
Maybe something like:
```
ANSIBLE_ADDITIONAL_TASK_CONTEXT='{{awx_job_id}}'
```
When this config is non-empty, the defined template would be rendered for each task/host invocation, and its result included in a new `_ansible_additional_task_context` reserved module var. The resulting value, as with any Ansible template expression, could be of arbitrary complexity (eg, returning a data structure instead of just a scalar). The module logging APIs would include the serialized value verbatim in log messages when it is present, eg "ansible_additional_task_context=(whatever the value was)".
### Issue Type
Feature Idea
### Component Name
module invocation and logging
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81710
|
https://github.com/ansible/ansible/pull/81711
|
4208bdbbcd994251579409ad533b40c9b0543550
|
1dd0d6fad70d7d4f423dac41822da65ff9ec95ef
| 2023-09-18T16:35:01Z |
python
| 2023-11-30T18:12:55Z |
lib/ansible/module_utils/basic.py
|
''' read the modules parameters and store them globally.
This function may be needed for certain very dynamic custom modules which
want to process the parameters that are being handed to the module. Since
this is so closely tied to the implementation of modules we cannot
guarantee API stability for it (it may change between versions) however we
will try not to break it gratuitously. It is certainly more future-proof
to call this function and consume its outputs than to implement the logic
inside it as a copy in your own code.
'''
global _ANSIBLE_ARGS
if _ANSIBLE_ARGS is not None:
buffer = _ANSIBLE_ARGS
else:
if len(sys.argv) > 1:
if os.path.isfile(sys.argv[1]):
fd = open(sys.argv[1], 'rb')
buffer = fd.read()
fd.close()
else:
buffer = sys.argv[1]
if PY3:
buffer = buffer.encode('utf-8', errors='surrogateescape')
else:
if PY2:
buffer = sys.stdin.read()
else:
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,710 |
Configurable sampling/transfer of control-side task context metadata to targets
|
### Summary
We're often asked how to include arbitrary control-side contextual metadata with task invocations, and to include that metadata in target-side task log messages. e.g.: sending an AWX/Controller Job ID to the target hosts on each module invocation that occurred from that job, and logging it in the module-generated syslog/Windows Application Event Log entries for future correlation with the owning job.
I've not seen any consensus on precisely *which* data to include; one person's "critical forensic correlation data" is another's "unacceptable disclosure of sensitive execution detail". Seems like we'd need a generic facility to specify environment vars and/or hostvars to sample on the control host to be included with task invocations (under a reserved dictionary arg), and adjust the module logging APIs to include them.
My initial thought is to define a new core config element (defaulting to none) that allows the user to define a templated expression that would be rendered as part of each task's templating under a host context. The rendered result would be sent to modules as a new reserved internal module var. The module logging APIs would then include this value verbatim, when present. Other module code would also have access to the value, which could be used for anything. The new config would be settable either via ansible.cfg or an envvar, making it easier for AWX/Controller to later provide a mechanism to configure it for jobs using core versions that support it, while older versions would just silently ignore it.
Maybe something like:
```
ANSIBLE_ADDITIONAL_TASK_CONTEXT='{{awx_job_id}}'
```
When this config is non-empty, the defined template would be rendered for each task/host invocation, and its result included in a new `_ansible_additional_task_context` reserved module var. The resulting value, as with any Ansible template expression, could be of arbitrary complexity (eg, returning a data structure instead of just a scalar). The module logging APIs would include the serialized value verbatim in log messages when it is present, eg "ansible_additional_task_context=(whatever the value was)".
### Issue Type
Feature Idea
### Component Name
module invocation and logging
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81710
|
https://github.com/ansible/ansible/pull/81711
|
4208bdbbcd994251579409ad533b40c9b0543550
|
1dd0d6fad70d7d4f423dac41822da65ff9ec95ef
| 2023-09-18T16:35:01Z |
python
| 2023-11-30T18:12:55Z |
lib/ansible/module_utils/basic.py
|
buffer = sys.stdin.buffer.read()
_ANSIBLE_ARGS = buffer
try:
params = json.loads(buffer.decode('utf-8'))
except ValueError:
print('\n{"msg": "Error: Module unable to decode stdin/parameters as valid JSON. Unable to parse what parameters were passed", "failed": true}')
sys.exit(1)
if PY2:
params = json_dict_unicode_to_bytes(params)
try:
return params['ANSIBLE_MODULE_ARGS']
except KeyError:
print('\n{"msg": "Error: Module unable to locate ANSIBLE_MODULE_ARGS in JSON data from stdin. Unable to figure out what parameters were passed", '
'"failed": true}')
sys.exit(1)
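# Illustrative invocation (sketch): when run directly, a module reads its parameters either
# from a JSON args file passed as argv[1] or from stdin, e.g.
#     python my_module.py /tmp/args.json
# where args.json contains {"ANSIBLE_MODULE_ARGS": {"name": "example"}}.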
def missing_required_lib(library, reason=None, url=None):
hostname = platform.node()
msg = "Failed to import the required Python library (%s) on %s's Python %s." % (library, hostname, sys.executable)
if reason:
msg += " This is required %s." % reason
if url:
msg += " See %s for more info." % url
msg += (" Please read the module documentation and install it in the appropriate location."
" If the required library is installed, but Ansible is using the wrong Python interpreter,"
" please consult the documentation on ansible_python_interpreter")
return msg
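# Typical usage pattern (illustrative sketch; 'requests' is just an example library name):
#
#     try:
#         import requests
#         HAS_REQUESTS = True
#     except ImportError:
#         HAS_REQUESTS = False
#
#     # later, once the AnsibleModule object exists:
#     if not HAS_REQUESTS:
#         module.fail_json(msg=missing_required_lib('requests'))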
class AnsibleModule(object):
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,710 |
Configurable sampling/transfer of control-side task context metadata to targets
|
### Summary
We're often asked how to include arbitrary control-side contextual metadata with task invocations, and to include that metadata in target-side task log messages. e.g.: sending an AWX/Controller Job ID to the target hosts on each module invocation that occurred from that job, and logging it in the module-generated syslog/Windows Application Event Log entries for future correlation with the owning job.
I've not seen any consensus on precisely *which* data to include; one person's "critical forensic correlation data" is another's "unacceptable disclosure of sensitive execution detail". Seems like we'd need a generic facility to specify environment vars and/or hostvars to sample on the control host to be included with task invocations (under a reserved dictionary arg), and adjust the module logging APIs to include them.
My initial thought is to define a new core config element (defaulting to none) that allows the user to define a templated expression that would be rendered as part of each task's templating under a host context. The rendered result would be sent to modules as a new reserved internal module var. The module logging APIs would then include this value verbatim, when present. Other module code would also have access to the value, which could be used for anything. The new config would be settable either via ansible.cfg or an envvar, making it easier for AWX/Controller to later provide a mechanism to configure it for jobs using core versions that support it, while older versions would just silently ignore it.
Maybe something like:
```
ANSIBLE_ADDITIONAL_TASK_CONTEXT='{{awx_job_id}}'
```
When this config is non-empty, the defined template would be rendered for each task/host invocation, and its result included in a new `_ansible_additional_task_context` reserved module var. The resulting value, as with any Ansible template expression, could be of arbitrary complexity (eg, returning a data structure instead of just a scalar). The module logging APIs would include the serialized value verbatim in log messages when it is present, eg "ansible_additional_task_context=(whatever the value was)".
### Issue Type
Feature Idea
### Component Name
module invocation and logging
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81710
|
https://github.com/ansible/ansible/pull/81711
|
4208bdbbcd994251579409ad533b40c9b0543550
|
1dd0d6fad70d7d4f423dac41822da65ff9ec95ef
| 2023-09-18T16:35:01Z |
python
| 2023-11-30T18:12:55Z |
lib/ansible/module_utils/basic.py
|
def __init__(self, argument_spec, bypass_checks=False, no_log=False,
mutually_exclusive=None, required_together=None,
required_one_of=None, add_file_common_args=False,
supports_check_mode=False, required_if=None, required_by=None):
'''
Common code for quickly building an ansible module in Python
(although you can write modules with anything that can return JSON).
See :ref:`developing_modules_general` for a general introduction
and :ref:`developing_program_flow_modules` for more detailed explanation.
'''
self._name = os.path.basename(__file__)
self.argument_spec = argument_spec
self.supports_check_mode = supports_check_mode
self.check_mode = False
self.bypass_checks = bypass_checks
self.no_log = no_log
self.mutually_exclusive = mutually_exclusive
self.required_together = required_together
self.required_one_of = required_one_of
self.required_if = required_if
self.required_by = required_by
self.cleanup_files = []
self._debug = False
self._diff = False
self._socket_path = None
self._shell = None
self._syslog_facility = 'LOG_USER'
self._verbosity = 0
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,710 |
Configurable sampling/transfer of control-side task context metadata to targets
|
### Summary
We're often asked how to include arbitrary control-side contextual metadata with task invocations, and to include that metadata in target-side task log messages. e.g.: sending an AWX/Controller Job ID to the target hosts on each module invocation that occurred from that job, and logging it in the module-generated syslog/Windows Application Event Log entries for future correlation with the owning job.
I've not seen any consensus on precisely *which* data to include; one person's "critical forensic correlation data" is another's "unacceptable disclosure of sensitive execution detail". Seems like we'd need a generic facility to specify environment vars and/or hostvars to sample on the control host to be included with task invocations (under a reserved dictionary arg), and adjust the module logging APIs to include them.
My initial thought is to define a new core config element (defaulting to none) that allows the user to define a templated expression that would be rendered as part of each task's templating under a host context. The rendered result would be sent to modules as a new reserved internal module var. The module logging APIs would then include this value verbatim, when present. Other module code would also have access to the value, which could be used for anything. The new config would be settable either via ansible.cfg or an envvar, making it easier for AWX/Controller to later provide a mechanism to configure it for jobs using core versions that support it, while older versions would just silently ignore it.
Maybe something like:
```
ANSIBLE_ADDITIONAL_TASK_CONTEXT='{{awx_job_id}}'
```
When this config is non-empty, the defined template would be rendered for each task/host invocation, and its result included in a new `_ansible_additional_task_context` reserved module var. The resulting value, as with any Ansible template expression, could be of arbitrary complexity (eg, returning a data structure instead of just a scalar). The module logging APIs would include the serialized value verbatim in log messages when it is present, eg "ansible_additional_task_context=(whatever the value was)".
### Issue Type
Feature Idea
### Component Name
module invocation and logging
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81710
|
https://github.com/ansible/ansible/pull/81711
|
4208bdbbcd994251579409ad533b40c9b0543550
|
1dd0d6fad70d7d4f423dac41822da65ff9ec95ef
| 2023-09-18T16:35:01Z |
python
| 2023-11-30T18:12:55Z |
lib/ansible/module_utils/basic.py
|
self.run_command_environ_update = {}
self._clean = {}
self._string_conversion_action = ''
self.aliases = {}
self._legal_inputs = []
self._options_context = list()
self._tmpdir = None
if add_file_common_args:
for k, v in FILE_COMMON_ARGUMENTS.items():
if k not in self.argument_spec:
self.argument_spec[k] = v
self.no_log_values = set()
self._check_locale()
self._load_params()
self._set_internal_properties()
self.validator = ModuleArgumentSpecValidator(self.argument_spec,
self.mutually_exclusive,
self.required_together,
self.required_one_of,
self.required_if,
self.required_by,
)
self.validation_result = self.validator.validate(self.params)
self.params.update(self.validation_result.validated_parameters)
self.no_log_values.update(self.validation_result._no_log_values)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,710 |
Configurable sampling/transfer of control-side task context metadata to targets
|
### Summary
We're often asked how to include arbitrary control-side contextual metadata with task invocations, and to include that metadata in target-side task log messages. e.g.: sending an AWX/Controller Job ID to the target hosts on each module invocation that occurred from that job, and logging it in the module-generated syslog/Windows Application Event Log entries for future correlation with the owning job.
I've not seen any consensus on precisely *which* data to include; one person's "critical forensic correlation data" is another's "unacceptable disclosure of sensitive execution detail". Seems like we'd need a generic facility to specify environment vars and/or hostvars to sample on the control host to be included with task invocations (under a reserved dictionary arg), and adjust the module logging APIs to include them.
My initial thought is to define a new core config element (defaulting to none) that allows the user to define a templated expression that would be rendered as part of each task's templating under a host context. The rendered result would be sent to modules as a new reserved internal module var. The module logging APIs would then include this value verbatim, when present. Other module code would also have access to the value, which could be used for anything. The new config would be settable either via ansible.cfg or an envvar, making it easier for AWX/Controller to later provide a mechanism to configure it for jobs using core versions that support it, while older versions would just silently ignore it.
Maybe something like:
```
ANSIBLE_ADDITIONAL_TASK_CONTEXT='{{awx_job_id}}'
```
When this config is non-empty, the defined template would be rendered for each task/host invocation, and its result included in a new `_ansible_additional_task_context` reserved module var. The resulting value, as with any Ansible template expression, could be of arbitrary complexity (eg, returning a data structure instead of just a scalar). The module logging APIs would include the serialized value verbatim in log messages when it is present, eg "ansible_additional_task_context=(whatever the value was)".
### Issue Type
Feature Idea
### Component Name
module invocation and logging
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81710
|
https://github.com/ansible/ansible/pull/81711
|
4208bdbbcd994251579409ad533b40c9b0543550
|
1dd0d6fad70d7d4f423dac41822da65ff9ec95ef
| 2023-09-18T16:35:01Z |
python
| 2023-11-30T18:12:55Z |
lib/ansible/module_utils/basic.py
|
self.aliases.update(self.validation_result._aliases)
try:
error = self.validation_result.errors[0]
if isinstance(error, UnsupportedError) and self._ignore_unknown_opts:
error = None
except IndexError:
error = None
if error:
msg = self.validation_result.errors.msg
if isinstance(error, UnsupportedError):
msg = "Unsupported parameters for ({name}) {kind}: {msg}".format(name=self._name, kind='module', msg=msg)
self.fail_json(msg=msg)
if self.check_mode and not self.supports_check_mode:
self.exit_json(skipped=True, msg="remote module (%s) does not support check mode" % self._name)
self._CHECK_ARGUMENT_TYPES_DISPATCHER = DEFAULT_TYPE_VALIDATORS
if not self.no_log:
self._log_invocation()
self._selinux_enabled = None
self._selinux_mls_enabled = None
self._selinux_initial_context = None
self._set_cwd()
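# Illustrative entry point of a typical module (sketch, not part of this class):
#
#     module = AnsibleModule(
#         argument_spec=dict(
#             name=dict(type='str', required=True),
#             state=dict(type='str', choices=['absent', 'present'], default='present'),
#         ),
#         supports_check_mode=True,
#     )
#     module.exit_json(changed=False, name=module.params['name'])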
@property
def tmpdir(self):
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,710 |
Configurable sampling/transfer of control-side task context metadata to targets
|
### Summary
We're often asked how to include arbitrary control-side contextual metadata with task invocations, and to include that metadata in target-side task log messages. e.g.: sending an AWX/Controller Job ID to the target hosts on each module invocation that occurred from that job, and logging it in the module-generated syslog/Windows Application Event Log entries for future correlation with the owning job.
I've not seen any consensus on precisely *which* data to include; one person's "critical forensic correlation data" is another's "unacceptable disclosure of sensitive execution detail". Seems like we'd need a generic facility to specify environment vars and/or hostvars to sample on the control host to be included with task invocations (under a reserved dictionary arg), and adjust the module logging APIs to include them.
My initial thought is to define a new core config element (defaulting to none) that allows the user to define a templated expression that would be rendered as part of each task's templating under a host context. The rendered result would be sent to modules as a new reserved internal module var. The module logging APIs would then include this value verbatim, when present. Other module code would also have access to the value, which could be used for anything. The new config would be settable either via ansible.cfg or an envvar, making it easier for AWX/Controller to later provide a mechanism to configure it for jobs using core versions that support it, while older versions would just silently ignore it.
Maybe something like:
```
ANSIBLE_ADDITIONAL_TASK_CONTEXT='{{awx_job_id}}'
```
When this config is non-empty, the defined template would be rendered for each task/host invocation, and its result included in a new `_ansible_additional_task_context` reserved module var. The resulting value, as with any Ansible template expression, could be of arbitrary complexity (eg, returning a data structure instead of just a scalar). The module logging APIs would include the serialized value verbatim in log messages when it is present, eg "ansible_additional_task_context=(whatever the value was)".
### Issue Type
Feature Idea
### Component Name
module invocation and logging
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81710
|
https://github.com/ansible/ansible/pull/81711
|
4208bdbbcd994251579409ad533b40c9b0543550
|
1dd0d6fad70d7d4f423dac41822da65ff9ec95ef
| 2023-09-18T16:35:01Z |
python
| 2023-11-30T18:12:55Z |
lib/ansible/module_utils/basic.py
|
if self._tmpdir is None:
basedir = None
if self._remote_tmp is not None:
basedir = os.path.expanduser(os.path.expandvars(self._remote_tmp))
if basedir is not None and not os.path.exists(basedir):
try:
os.makedirs(basedir, mode=0o700)
except (OSError, IOError) as e:
self.warn("Unable to use %s as temporary directory, "
"failing back to system: %s" % (basedir, to_native(e)))
basedir = None
else:
self.warn("Module remote_tmp %s did not exist and was "
"created with a mode of 0700, this may cause"
" issues when running as another user. To "
"avoid this, create the remote_tmp dir with "
"the correct permissions manually" % basedir)
basefile = "ansible-moduletmp-%s-" % time.time()
try:
tmpdir = tempfile.mkdtemp(prefix=basefile, dir=basedir)
except (OSError, IOError) as e:
self.fail_json(
msg="Failed to create remote module tmp path at dir %s "
"with prefix %s: %s" % (basedir, basefile, to_native(e))
)
if not self._keep_remote_files:
atexit.register(shutil.rmtree, tmpdir)
self._tmpdir = tmpdir
return self._tmpdir
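# Illustrative usage (sketch): scratch files belong under module.tmpdir; the directory is
# created lazily under remote_tmp and removed at exit unless remote files are being kept.
#
#     staging = os.path.join(module.tmpdir, 'staging.json')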
def warn(self, warning):
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,710 |
Configurable sampling/transfer of control-side task context metadata to targets
|
### Summary
We're often asked how to include arbitrary control-side contextual metadata with task invocations, and to include that metadata in target-side task log messages. e.g.: sending an AWX/Controller Job ID to the target hosts on each module invocation that occurred from that job, and logging it in the module-generated syslog/Windows Application Event Log entries for future correlation with the owning job.
I've not seen any consensus on precisely *which* data to include; one person's "critical forensic correlation data" is another's "unacceptable disclosure of sensitive execution detail". Seems like we'd need a generic facility to specify environment vars and/or hostvars to sample on the control host to be included with task invocations (under a reserved dictionary arg), and adjust the module logging APIs to include them.
My initial thought is to define a new core config element (defaulting to none) that allows the user to define a templated expression that would be rendered as part of each task's templating under a host context. The rendered result would be sent to modules as a new reserved internal module var. The module logging APIs would then include this value verbatim, when present. Other module code would also have access to the value, which could be used for anything. The new config would be settable either via ansible.cfg or an envvar, making it easier for AWX/Controller to later provide a mechanism to configure it for jobs using core versions that support it, while older versions would just silently ignore it.
Maybe something like:
```
ANSIBLE_ADDITIONAL_TASK_CONTEXT='{{awx_job_id}}'
```
When this config is non-empty, the defined template would be rendered for each task/host invocation, and its result included in a new `_ansible_additional_task_context` reserved module var. The resulting value, as with any Ansible template expression, could be of arbitrary complexity (eg, returning a data structure instead of just a scalar). The module logging APIs would include the serialized value verbatim in log messages when it is present, eg "ansible_additional_task_context=(whatever the value was)".
### Issue Type
Feature Idea
### Component Name
module invocation and logging
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81710
|
https://github.com/ansible/ansible/pull/81711
|
4208bdbbcd994251579409ad533b40c9b0543550
|
1dd0d6fad70d7d4f423dac41822da65ff9ec95ef
| 2023-09-18T16:35:01Z |
python
| 2023-11-30T18:12:55Z |
lib/ansible/module_utils/basic.py
|
warn(warning)
self.log('[WARNING] %s' % warning)
def deprecate(self, msg, version=None, date=None, collection_name=None):
if version is not None and date is not None:
raise AssertionError("implementation error -- version and date must not both be set")
deprecate(msg, version=version, date=date, collection_name=collection_name)
if date is not None:
self.log('[DEPRECATION WARNING] %s %s' % (msg, date))
else:
self.log('[DEPRECATION WARNING] %s %s' % (msg, version))
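# Illustrative usage (sketch):
#
#     module.warn('falling back to the system temporary directory')
#     module.deprecate('the "foo" option is deprecated', version='2.19', collection_name='ansible.builtin')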
def load_file_common_arguments(self, params, path=None):
'''
many modules deal with files, this encapsulates common
options that the file module accepts such that it is directly
available to all modules and they can share code.
Allows the path/dest module argument to be overridden by providing path.
'''
if path is None:
path = params.get('path', params.get('dest', None))
if path is None:
return {}
else:
path = os.path.expanduser(os.path.expandvars(path))
b_path = to_bytes(path, errors='surrogate_or_strict')
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,710 |
Configurable sampling/transfer of control-side task context metadata to targets
|
### Summary
We're often asked how to include arbitrary control-side contextual metadata with task invocations, and to include that metadata in target-side task log messages. e.g.: sending an AWX/Controller Job ID to the target hosts on each module invocation that occurred from that job, and logging it in the module-generated syslog/Windows Application Event Log entries for future correlation with the owning job.
I've not seen any consensus on precisely *which* data to include; one person's "critical forensic correlation data" is another's "unacceptable disclosure of sensitive execution detail". Seems like we'd need a generic facility to specify environment vars and/or hostvars to sample on the control host to be included with task invocations (under a reserved dictionary arg), and adjust the module logging APIs to include them.
My initial thought is to define a new core config element (defaulting to none) that allows the user to define a templated expression that would be rendered as part of each task's templating under a host context. The rendered result would be sent to modules as a new reserved internal module var. The module logging APIs would then include this value verbatim, when present. Other module code would also have access to the value, which could be used for anything. The new config would be settable either via ansible.cfg or an envvar, making it easier for AWX/Controller to later provide a mechanism to configure it for jobs using core versions that support it, while older versions would just silently ignore it.
Maybe something like:
```
ANSIBLE_ADDITIONAL_TASK_CONTEXT='{{awx_job_id}}'
```
When this config is non-empty, the defined template would be rendered for each task/host invocation, and its result included in a new `_ansible_additional_task_context` reserved module var. The resulting value, as with any Ansible template expression, could be of arbitrary complexity (eg, returning a data structure instead of just a scalar). The module logging APIs would include the serialized value verbatim in log messages when it is present, eg "ansible_additional_task_context=(whatever the value was)".
### Issue Type
Feature Idea
### Component Name
module invocation and logging
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81710
|
https://github.com/ansible/ansible/pull/81711
|
4208bdbbcd994251579409ad533b40c9b0543550
|
1dd0d6fad70d7d4f423dac41822da65ff9ec95ef
| 2023-09-18T16:35:01Z |
python
| 2023-11-30T18:12:55Z |
lib/ansible/module_utils/basic.py
|
if params.get('follow', False) and os.path.islink(b_path):
b_path = os.path.realpath(b_path)
path = to_native(b_path)
mode = params.get('mode', None)
owner = params.get('owner', None)
group = params.get('group', None)
seuser = params.get('seuser', None)
serole = params.get('serole', None)
setype = params.get('setype', None)
selevel = params.get('selevel', None)
secontext = [seuser, serole, setype]
if self.selinux_mls_enabled():
secontext.append(selevel)
default_secontext = self.selinux_default_context(path)
for i in range(len(default_secontext)):
if i is not None and secontext[i] == '_default':
secontext[i] = default_secontext[i]
attributes = params.get('attributes', None)
return dict(
path=path, mode=mode, owner=owner, group=group,
seuser=seuser, serole=serole, setype=setype,
selevel=selevel, secontext=secontext, attributes=attributes,
)
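# Common pattern in file-handling modules (illustrative sketch):
#
#     file_args = module.load_file_common_arguments(module.params)
#     changed = module.set_fs_attributes_if_different(file_args, changed)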
def selinux_mls_enabled(self):
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,710 |
Configurable sampling/transfer of control-side task context metadata to targets
|
### Summary
We're often asked how to include arbitrary control-side contextual metadata with task invocations, and to include that metadata in target-side task log messages. e.g.: sending an AWX/Controller Job ID to the target hosts on each module invocation that occurred from that job, and logging it in the module-generated syslog/Windows Application Event Log entries for future correlation with the owning job.
I've not seen any consensus on precisely *which* data to include; one person's "critical forensic correlation data" is another's "unacceptable disclosure of sensitive execution detail". Seems like we'd need a generic facility to specify environment vars and/or hostvars to sample on the control host to be included with task invocations (under a reserved dictionary arg), and adjust the module logging APIs to include them.
My initial thought is to define a new core config element (defaulting to none) that allows the user to define a templated expression that would be rendered as part of each task's templating under a host context. The rendered result would be sent to modules as a new reserved internal module var. The module logging APIs would then include this value verbatim, when present. Other module code would also have access to the value, which could be used for anything. The new config would be settable either via ansible.cfg or an envvar, making it easier for AWX/Controller to later provide a mechanism to configure it for jobs using core versions that support it, while older versions would just silently ignore it.
Maybe something like:
```
ANSIBLE_ADDITIONAL_TASK_CONTEXT='{{awx_job_id}}'
```
When this config is non-empty, the defined template would be rendered for each task/host invocation, and its result included in a new `_ansible_additional_task_context` reserved module var. The resulting value, as with any Ansible template expression, could be of arbitrary complexity (eg, returning a data structure instead of just a scalar). The module logging APIs would include the serialized value verbatim in log messages when it is present, eg "ansible_additional_task_context=(whatever the value was)".
### Issue Type
Feature Idea
### Component Name
module invocation and logging
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81710
|
https://github.com/ansible/ansible/pull/81711
|
4208bdbbcd994251579409ad533b40c9b0543550
|
1dd0d6fad70d7d4f423dac41822da65ff9ec95ef
| 2023-09-18T16:35:01Z |
python
| 2023-11-30T18:12:55Z |
lib/ansible/module_utils/basic.py
|
if self._selinux_mls_enabled is None:
self._selinux_mls_enabled = HAVE_SELINUX and selinux.is_selinux_mls_enabled() == 1
return self._selinux_mls_enabled
def selinux_enabled(self):
if self._selinux_enabled is None:
self._selinux_enabled = HAVE_SELINUX and selinux.is_selinux_enabled() == 1
return self._selinux_enabled
def selinux_initial_context(self):
if self._selinux_initial_context is None:
self._selinux_initial_context = [None, None, None]
if self.selinux_mls_enabled():
self._selinux_initial_context.append(None)
return self._selinux_initial_context
def selinux_default_context(self, path, mode=0):
context = self.selinux_initial_context()
if not self.selinux_enabled():
return context
try:
ret = selinux.matchpathcon(to_native(path, errors='surrogate_or_strict'), mode)
except OSError:
return context
if ret[0] == -1:
return context
context = ret[1].split(':', 3)
return context
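# Illustrative note (sketch): the context is the list of colon-separated SELinux fields,
# e.g. ['system_u', 'object_r', 'etc_t', 's0'] for a typical file under /etc.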
def selinux_context(self, path):
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,710 |
Configurable sampling/transfer of control-side task context metadata to targets
|
### Summary
We're often asked how to include arbitrary control-side contextual metadata with task invocations, and to include that metadata in target-side task log messages. e.g.: sending an AWX/Controller Job ID to the target hosts on each module invocation that occurred from that job, and logging it in the module-generated syslog/Windows Application Event Log entries for future correlation with the owning job.
I've not seen any consensus on precisely *which* data to include; one person's "critical forensic correlation data" is another's "unacceptable disclosure of sensitive execution detail". Seems like we'd need a generic facility to specify environment vars and/or hostvars to sample on the control host to be included with task invocations (under a reserved dictionary arg), and adjust the module logging APIs to include them.
My initial thought is to define a new core config element (defaulting to none) that allows the user to define a templated expression that would be rendered as part of each task's templating under a host context. The rendered result would be sent to modules as a new reserved internal module var. The module logging APIs would then include this value verbatim, when present. Other module code would also have access to the value, which could be used for anything. The new config would be settable either via ansible.cfg or an envvar, making it easier for AWX/Controller to later provide a mechanism to configure it for jobs using core versions that support it, while older versions would just silently ignore it.
Maybe something like:
```
ANSIBLE_ADDITIONAL_TASK_CONTEXT='{{awx_job_id}}'
```
When this config is non-empty, the defined template would be rendered for each task/host invocation, and its result included in a new `_ansible_additional_task_context` reserved module var. The resulting value, as with any Ansible template expression, could be of arbitrary complexity (eg, returning a data structure instead of just a scalar). The module logging APIs would include the serialized value verbatim in log messages when it is present, eg "ansible_additional_task_context=(whatever the value was)".
### Issue Type
Feature Idea
### Component Name
module invocation and logging
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81710
|
https://github.com/ansible/ansible/pull/81711
|
4208bdbbcd994251579409ad533b40c9b0543550
|
1dd0d6fad70d7d4f423dac41822da65ff9ec95ef
| 2023-09-18T16:35:01Z |
python
| 2023-11-30T18:12:55Z |
lib/ansible/module_utils/basic.py
|
context = self.selinux_initial_context()
if not self.selinux_enabled():
return context
try:
ret = selinux.lgetfilecon_raw(to_native(path, errors='surrogate_or_strict'))
except OSError as e:
if e.errno == errno.ENOENT:
self.fail_json(path=path, msg='path %s does not exist' % path)
else:
self.fail_json(path=path, msg='failed to retrieve selinux context')
if ret[0] == -1:
return context
context = ret[1].split(':', 3)
return context
def user_and_group(self, path, expand=True):
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,710 |
Configurable sampling/transfer of control-side task context metadata to targets
|
### Summary
We're often asked how to include arbitrary control-side contextual metadata with task invocations, and to include that metadata in target-side task log messages. e.g.: sending an AWX/Controller Job ID to the target hosts on each module invocation that occurred from that job, and logging it in the module-generated syslog/Windows Application Event Log entries for future correlation with the owning job.
I've not seen any consensus on precisely *which* data to include; one person's "critical forensic correlation data" is another's "unacceptable disclosure of sensitive execution detail". Seems like we'd need a generic facility to specify environment vars and/or hostvars to sample on the control host to be included with task invocations (under a reserved dictionary arg), and adjust the module logging APIs to include them.
My initial thought is to define a new core config element (defaulting to none) that allows the user to define a templated expression that would be rendered as part of each task's templating under a host context. The rendered result would be sent to modules as a new reserved internal module var. The module logging APIs would then include this value verbatim, when present. Other module code would also have access to the value, which could be used for anything. The new config would be settable either via ansible.cfg or an envvar, making it easier for AWX/Controller to later provide a mechanism to configure it for jobs using core versions that support it, while older versions would just silently ignore it.
Maybe something like:
```
ANSIBLE_ADDITIONAL_TASK_CONTEXT='{{awx_job_id}}'
```
When this config is non-empty, the defined template would be rendered for each task/host invocation, and its result included in a new `_ansible_additional_task_context` reserved module var. The resulting value, as with any Ansible template expression, could be of arbitrary complexity (eg, returning a data structure instead of just a scalar). The module logging APIs would include the serialized value verbatim in log messages when it is present, eg "ansible_additional_task_context=(whatever the value was)".
### Issue Type
Feature Idea
### Component Name
module invocation and logging
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81710
|
https://github.com/ansible/ansible/pull/81711
|
4208bdbbcd994251579409ad533b40c9b0543550
|
1dd0d6fad70d7d4f423dac41822da65ff9ec95ef
| 2023-09-18T16:35:01Z |
python
| 2023-11-30T18:12:55Z |
lib/ansible/module_utils/basic.py
|
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
st = os.lstat(b_path)
uid = st.st_uid
gid = st.st_gid
return (uid, gid)
def find_mount_point(self, path):
'''
Takes a path and returns its mount point
:param path: a string type with a filesystem path
:returns: the path to the mount point as a text type
'''
b_path = os.path.realpath(to_bytes(os.path.expanduser(os.path.expandvars(path)), errors='surrogate_or_strict'))
while not os.path.ismount(b_path):
b_path = os.path.dirname(b_path)
return to_text(b_path, errors='surrogate_or_strict')
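# Illustrative behaviour (sketch): find_mount_point('/var/log/syslog') walks up the path
# until os.path.ismount() is true, returning '/var' or '/' depending on the filesystem layout.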
def is_special_selinux_path(self, path):
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,710 |
Configurable sampling/transfer of control-side task context metadata to targets
|
### Summary
We're often asked how to include arbitrary control-side contextual metadata with task invocations, and to include that metadata in target-side task log messages. e.g.: sending an AWX/Controller Job ID to the target hosts on each module invocation that occurred from that job, and logging it in the module-generated syslog/Windows Application Event Log entries for future correlation with the owning job.
I've not seen any consensus on precisely *which* data to include; one person's "critical forensic correlation data" is another's "unacceptable disclosure of sensitive execution detail". Seems like we'd need a generic facility to specify environment vars and/or hostvars to sample on the control host to be included with task invocations (under a reserved dictionary arg), and adjust the module logging APIs to include them.
My initial thought is to define a new core config element (defaulting to none) that allows the user to define a templated expression that would be rendered as part of each task's templating under a host context. The rendered result would be sent to modules as a new reserved internal module var. The module logging APIs would then include this value verbatim, when present. Other module code would also have access to the value, which could be used for anything. The new config would be settable either via ansible.cfg or an envvar, making it easier for AWX/Controller to later provide a mechanism to configure it for jobs using core versions that support it, while older versions would just silently ignore it.
Maybe something like:
```
ANSIBLE_ADDITIONAL_TASK_CONTEXT='{{awx_job_id}}'
```
When this config is non-empty, the defined template would be rendered for each task/host invocation, and its result included in a new `_ansible_additional_task_context` reserved module var. The resulting value, as with any Ansible template expression, could be of arbitrary complexity (eg, returning a data structure instead of just a scalar). The module logging APIs would include the serialized value verbatim in log messages when it is present, eg "ansible_additional_task_context=(whatever the value was)".
### Issue Type
Feature Idea
### Component Name
module invocation and logging
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81710
|
https://github.com/ansible/ansible/pull/81711
|
4208bdbbcd994251579409ad533b40c9b0543550
|
1dd0d6fad70d7d4f423dac41822da65ff9ec95ef
| 2023-09-18T16:35:01Z |
python
| 2023-11-30T18:12:55Z |
lib/ansible/module_utils/basic.py
|
"""
Returns a tuple containing (True, selinux_context) if the given path is on a
NFS or other 'special' fs mount point, otherwise the return will be (False, None).
"""
try:
f = open('/proc/mounts', 'r')
mount_data = f.readlines()
f.close()
except Exception:
return (False, None)
path_mount_point = self.find_mount_point(path)
for line in mount_data:
(device, mount_point, fstype, options, rest) = line.split(' ', 4)
if to_bytes(path_mount_point) == to_bytes(mount_point):
for fs in self._selinux_special_fs:
if fs in fstype:
special_context = self.selinux_context(path_mount_point)
return (True, special_context)
return (False, None)
def set_default_selinux_context(self, path, changed):
if not self.selinux_enabled():
return changed
context = self.selinux_default_context(path)
return self.set_context_if_different(path, context, False)
def set_context_if_different(self, path, context, changed, diff=None):
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,710 |
Configurable sampling/transfer of control-side task context metadata to targets
|
### Summary
We're often asked how to include arbitrary control-side contextual metadata with task invocations, and to include that metadata in target-side task log messages. e.g.: sending an AWX/Controller Job ID to the target hosts on each module invocation that occurred from that job, and logging it in the module-generated syslog/Windows Application Event Log entries for future correlation with the owning job.
I've not seen any consensus on precisely *which* data to include; one person's "critical forensic correlation data" is another's "unacceptable disclosure of sensitive execution detail". Seems like we'd need a generic facility to specify environment vars and/or hostvars to sample on the control host to be included with task invocations (under a reserved dictionary arg), and adjust the module logging APIs to include them.
My initial thought is to define a new core config element (defaulting to none) that allows the user to define a templated expression that would be rendered as part of each task's templating under a host context. The rendered result would be sent to modules as a new reserved internal module var. The module logging APIs would then include this value verbatim, when present. Other module code would also have access to the value, which could be used for anything. The new config would be settable either via ansible.cfg or an envvar, making it easier for AWX/Controller to later provide a mechanism to configure it for jobs using core versions that support it, while older versions would just silently ignore it.
Maybe something like:
```
ANSIBLE_ADDITIONAL_TASK_CONTEXT='{{awx_job_id}}'
```
When this config is non-empty, the defined template would be rendered for each task/host invocation, and its result included in a new `_ansible_additional_task_context` reserved module var. The resulting value, as with any Ansible template expression, could be of arbitrary complexity (eg, returning a data structure instead of just a scalar). The module logging APIs would include the serialized value verbatim in log messages when it is present, eg "ansible_additional_task_context=(whatever the value was)".
### Issue Type
Feature Idea
### Component Name
module invocation and logging
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81710
|
https://github.com/ansible/ansible/pull/81711
|
4208bdbbcd994251579409ad533b40c9b0543550
|
1dd0d6fad70d7d4f423dac41822da65ff9ec95ef
| 2023-09-18T16:35:01Z |
python
| 2023-11-30T18:12:55Z |
lib/ansible/module_utils/basic.py
|
if not self.selinux_enabled():
return changed
if self.check_file_absent_if_check_mode(path):
return True
cur_context = self.selinux_context(path)
new_context = list(cur_context)
(is_special_se, sp_context) = self.is_special_selinux_path(path)
if is_special_se:
new_context = sp_context
else:
for i in range(len(cur_context)):
if len(context) > i:
if context[i] is not None and context[i] != cur_context[i]:
new_context[i] = context[i]
elif context[i] is None:
new_context[i] = cur_context[i]
if cur_context != new_context:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['secontext'] = cur_context
if 'after' not in diff:
diff['after'] = {}
diff['after']['secontext'] = new_context
try:
if self.check_mode:
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,710 |
Configurable sampling/transfer of control-side task context metadata to targets
|
### Summary
We're often asked how to include arbitrary control-side contextual metadata with task invocations, and to include that metadata in target-side task log messages. e.g.: sending an AWX/Controller Job ID to the target hosts on each module invocation that occurred from that job, and logging it in the module-generated syslog/Windows Application Event Log entries for future correlation with the owning job.
I've not seen any consensus on precisely *which* data to include; one person's "critical forensic correlation data" is another's "unacceptable disclosure of sensitive execution detail". Seems like we'd need a generic facility to specify environment vars and/or hostvars to sample on the control host to be included with task invocations (under a reserved dictionary arg), and adjust the module logging APIs to include them.
My initial thought is to define a new core config element (defaulting to none) that allows the user to define a templated expression that would be rendered as part of each task's templating under a host context. The rendered result would be sent to modules as a new reserved internal module var. The module logging APIs would then include this value verbatim, when present. Other module code would also have access to the value, which could be used for anything. The new config would be settable either via ansible.cfg or an envvar, making it easier for AWX/Controller to later provide a mechanism to configure it for jobs using core versions that support it, while older versions would just silently ignore it.
Maybe something like:
```
ANSIBLE_ADDITIONAL_TASK_CONTEXT='{{awx_job_id}}'
```
When this config is non-empty, the defined template would be rendered for each task/host invocation, and its result included in a new `_ansible_additional_task_context` reserved module var. The resulting value, as with any Ansible template expression, could be of arbitrary complexity (eg, returning a data structure instead of just a scalar). The module logging APIs would include the serialized value verbatim in log messages when it is present, eg "ansible_additional_task_context=(whatever the value was)".
### Issue Type
Feature Idea
### Component Name
module invocation and logging
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81710
|
https://github.com/ansible/ansible/pull/81711
|
4208bdbbcd994251579409ad533b40c9b0543550
|
1dd0d6fad70d7d4f423dac41822da65ff9ec95ef
| 2023-09-18T16:35:01Z |
python
| 2023-11-30T18:12:55Z |
lib/ansible/module_utils/basic.py
|
return True
rc = selinux.lsetfilecon(to_native(path), ':'.join(new_context))
except OSError as e:
self.fail_json(path=path, msg='invalid selinux context: %s' % to_native(e),
new_context=new_context, cur_context=cur_context, input_was=context)
if rc != 0:
self.fail_json(path=path, msg='set selinux context failed')
changed = True
return changed
def set_owner_if_different(self, path, owner, changed, diff=None, expand=True):
if owner is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
orig_uid, orig_gid = self.user_and_group(b_path, expand)
try:
uid = int(owner)
except ValueError:
try:
uid = pwd.getpwnam(owner).pw_uid
except KeyError:
path = to_text(b_path)
self.fail_json(path=path, msg='chown failed: failed to look up user %s' % owner)
if orig_uid != uid:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
                diff['before']['owner'] = orig_uid
                if 'after' not in diff:
                    diff['after'] = {}
                diff['after']['owner'] = uid

            if self.check_mode:
                return True
            try:
                os.lchown(b_path, uid, -1)
            except (IOError, OSError) as e:
                path = to_text(b_path)
                self.fail_json(path=path, msg='chown failed: %s' % (to_text(e)))
            changed = True
        return changed

    def set_group_if_different(self, path, group, changed, diff=None, expand=True):
        if group is None:
            return changed

        b_path = to_bytes(path, errors='surrogate_or_strict')
        if expand:
            b_path = os.path.expanduser(os.path.expandvars(b_path))

        if self.check_file_absent_if_check_mode(b_path):
            return True

        orig_uid, orig_gid = self.user_and_group(b_path, expand)
        try:
            gid = int(group)
        except ValueError:
            try:
                gid = grp.getgrnam(group).gr_gid
            except KeyError:
                path = to_text(b_path)
                self.fail_json(path=path, msg='chgrp failed: failed to look up group %s' % group)
        if orig_gid != gid:
            if diff is not None:
                if 'before' not in diff:
                    diff['before'] = {}
                diff['before']['group'] = orig_gid
                if 'after' not in diff:
                    diff['after'] = {}
                diff['after']['group'] = gid

            if self.check_mode:
                return True
            try:
                os.lchown(b_path, -1, gid)
            except OSError:
                path = to_text(b_path)
                self.fail_json(path=path, msg='chgrp failed')
            changed = True
        return changed

    def set_mode_if_different(self, path, mode, changed, diff=None, expand=True):
        if mode is None:
            return changed

        b_path = to_bytes(path, errors='surrogate_or_strict')
        if expand:
            b_path = os.path.expanduser(os.path.expandvars(b_path))

        if self.check_file_absent_if_check_mode(b_path):
            return True

        path_stat = os.lstat(b_path)

        if not isinstance(mode, int):
            try:
                mode = int(mode, 8)
            except Exception:
                try:
                    mode = self._symbolic_mode_to_octal(path_stat, mode)
                except Exception as e:
                    path = to_text(b_path)
                    self.fail_json(path=path,
                                   msg="mode must be in octal or symbolic form",
                                   details=to_native(e))

                if mode != stat.S_IMODE(mode):
                    path = to_text(b_path)
                    self.fail_json(path=path, msg="Invalid mode supplied, only permission info is allowed", details=mode)

        prev_mode = stat.S_IMODE(path_stat.st_mode)

        if prev_mode != mode:
            if diff is not None:
                if 'before' not in diff:
                    diff['before'] = {}
                diff['before']['mode'] = '0%03o' % prev_mode
                if 'after' not in diff:
                    diff['after'] = {}
                diff['after']['mode'] = '0%03o' % mode

            if self.check_mode:
                return True
            try:
                if hasattr(os, 'lchmod'):
                    os.lchmod(b_path, mode)
                else:
                    if not os.path.islink(b_path):
                        os.chmod(b_path, mode)
                    else:
                        underlying_stat = os.stat(b_path)
                        os.chmod(b_path, mode)
                        new_underlying_stat = os.stat(b_path)
                        if underlying_stat.st_mode != new_underlying_stat.st_mode:
                            os.chmod(b_path, stat.S_IMODE(underlying_stat.st_mode))
            except OSError as e:
                if os.path.islink(b_path) and e.errno in (
                    errno.EACCES,
                    errno.EPERM,
                    errno.EROFS,
                ):
                    pass
                elif e.errno in (errno.ENOENT, errno.ELOOP):
                    pass
                else:
                    raise
            except Exception as e:
                path = to_text(b_path)
                self.fail_json(path=path, msg='chmod failed', details=to_native(e),
                               exception=traceback.format_exc())

            path_stat = os.lstat(b_path)
            new_mode = stat.S_IMODE(path_stat.st_mode)

            if new_mode != prev_mode:
                changed = True
        return changed

    def set_attributes_if_different(self, path, attributes, changed, diff=None, expand=True):
        if attributes is None:
            return changed

        b_path = to_bytes(path, errors='surrogate_or_strict')
        if expand:
            b_path = os.path.expanduser(os.path.expandvars(b_path))

        if self.check_file_absent_if_check_mode(b_path):
            return True

        existing = self.get_file_attributes(b_path, include_version=False)

        attr_mod = '='
        if attributes.startswith(('-', '+')):
            attr_mod = attributes[0]
            attributes = attributes[1:]

        if existing.get('attr_flags', '') != attributes or attr_mod == '-':
            attrcmd = self.get_bin_path('chattr')
            if attrcmd:
                attrcmd = [attrcmd, '%s%s' % (attr_mod, attributes), b_path]
                changed = True

                if diff is not None:
                    if 'before' not in diff:
                        diff['before'] = {}
                    diff['before']['attributes'] = existing.get('attr_flags')
                    if 'after' not in diff:
                        diff['after'] = {}
                    diff['after']['attributes'] = '%s%s' % (attr_mod, attributes)
                if not self.check_mode:
                    try:
                        rc, out, err = self.run_command(attrcmd)
                        if rc != 0 or err:
                            raise Exception("Error while setting attributes: %s" % (out + err))
                    except Exception as e:
                        self.fail_json(path=to_text(b_path), msg='chattr failed',
                                       details=to_native(e), exception=traceback.format_exc())
        return changed

    def get_file_attributes(self, path, include_version=True):
        output = {}
        attrcmd = self.get_bin_path('lsattr', False)
        if attrcmd:
            flags = '-vd' if include_version else '-d'
            attrcmd = [attrcmd, flags, path]
            try:
                rc, out, err = self.run_command(attrcmd)
                if rc == 0:
                    res = out.split()
                    attr_flags_idx = 0
                    if include_version:
                        attr_flags_idx = 1
                        output['version'] = res[0].strip()
                    output['attr_flags'] = res[attr_flags_idx].replace('-', '').strip()
                    output['attributes'] = format_attributes(output['attr_flags'])
            except Exception:
                pass
        return output

    @classmethod
    def _symbolic_mode_to_octal(cls, path_stat, symbolic_mode):
"""
This enables symbolic chmod string parsing as stated in the chmod man-page
This includes things like: "u=rw-x+X,g=r-x+X,o=r-x+X"
"""
new_mode = stat.S_IMODE(path_stat.st_mode)
for mode in symbolic_mode.split(','):
permlist = MODE_OPERATOR_RE.split(mode)
opers = MODE_OPERATOR_RE.findall(mode)
users = permlist.pop(0)
use_umask = (users == '')
if users == 'a' or users == '':
users = 'ugo'
if not USERS_RE.match(users):
                raise ValueError("bad symbolic permission for mode: %s" % mode)

            for idx, perms in enumerate(permlist):
                if not PERMS_RE.match(perms):
                    raise ValueError("bad symbolic permission for mode: %s" % mode)

                for user in users:
                    mode_to_apply = cls._get_octal_mode_from_symbolic_perms(path_stat, user, perms, use_umask, new_mode)
                    new_mode = cls._apply_operation_to_mode(user, opers[idx], mode_to_apply, new_mode)

        return new_mode

    @staticmethod
    def _apply_operation_to_mode(user, operator, mode_to_apply, current_mode):
        if operator == '=':
            if user == 'u':
                mask = stat.S_IRWXU | stat.S_ISUID
            elif user == 'g':
                mask = stat.S_IRWXG | stat.S_ISGID
            elif user == 'o':
                mask = stat.S_IRWXO | stat.S_ISVTX

            inverse_mask = mask ^ PERM_BITS
            new_mode = (current_mode & inverse_mask) | mode_to_apply
        elif operator == '+':
            new_mode = current_mode | mode_to_apply
        elif operator == '-':
            new_mode = current_mode - (current_mode & mode_to_apply)
        return new_mode

    @staticmethod
    def _get_octal_mode_from_symbolic_perms(path_stat, user, perms, use_umask, prev_mode=None):
        if prev_mode is None:
            prev_mode = stat.S_IMODE(path_stat.st_mode)

        is_directory = stat.S_ISDIR(path_stat.st_mode)
        has_x_permissions = (prev_mode & EXEC_PERM_BITS) > 0
        apply_X_permission = is_directory or has_x_permissions

        umask = os.umask(0)
        os.umask(umask)
        rev_umask = umask ^ PERM_BITS

        if apply_X_permission:
            X_perms = {
                'u': {'X': stat.S_IXUSR},
                'g': {'X': stat.S_IXGRP},
                'o': {'X': stat.S_IXOTH},
            }
        else:
            X_perms = {
                'u': {'X': 0},
                'g': {'X': 0},
                'o': {'X': 0},
            }

        user_perms_to_modes = {
            'u': {
                'r': rev_umask & stat.S_IRUSR if use_umask else stat.S_IRUSR,
                'w': rev_umask & stat.S_IWUSR if use_umask else stat.S_IWUSR,
                'x': rev_umask & stat.S_IXUSR if use_umask else stat.S_IXUSR,
                's': stat.S_ISUID,
                't': 0,
                'u': prev_mode & stat.S_IRWXU,
                'g': (prev_mode & stat.S_IRWXG) << 3,
                'o': (prev_mode & stat.S_IRWXO) << 6},
            'g': {
                'r': rev_umask & stat.S_IRGRP if use_umask else stat.S_IRGRP,
                'w': rev_umask & stat.S_IWGRP if use_umask else stat.S_IWGRP,
                'x': rev_umask & stat.S_IXGRP if use_umask else stat.S_IXGRP,
                's': stat.S_ISGID,
                't': 0,
                'u': (prev_mode & stat.S_IRWXU) >> 3,
                'g': prev_mode & stat.S_IRWXG,
                'o': (prev_mode & stat.S_IRWXO) << 3},
            'o': {
                'r': rev_umask & stat.S_IROTH if use_umask else stat.S_IROTH,
                'w': rev_umask & stat.S_IWOTH if use_umask else stat.S_IWOTH,
                'x': rev_umask & stat.S_IXOTH if use_umask else stat.S_IXOTH,
                's': 0,
                't': stat.S_ISVTX,
                'u': (prev_mode & stat.S_IRWXU) >> 6,
                'g': (prev_mode & stat.S_IRWXG) >> 3,
                'o': prev_mode & stat.S_IRWXO},
        }

        for key, value in X_perms.items():
            user_perms_to_modes[key].update(value)

        def or_reduce(mode, perm):
            return mode | user_perms_to_modes[user][perm]

        return reduce(or_reduce, perms, 0)

    def set_fs_attributes_if_different(self, file_args, changed, diff=None, expand=True):
        changed = self.set_context_if_different(
            file_args['path'], file_args['secontext'], changed, diff
        )
        changed = self.set_owner_if_different(
            file_args['path'], file_args['owner'], changed, diff, expand
        )
        changed = self.set_group_if_different(
            file_args['path'], file_args['group'], changed, diff, expand
        )
        changed = self.set_mode_if_different(
            file_args['path'], file_args['mode'], changed, diff, expand
        )
        changed = self.set_attributes_if_different(
            file_args['path'], file_args['attributes'], changed, diff, expand
        )
        return changed

    def check_file_absent_if_check_mode(self, file_path):
        return self.check_mode and not os.path.exists(file_path)

    def set_directory_attributes_if_different(self, file_args, changed, diff=None, expand=True):
        return self.set_fs_attributes_if_different(file_args, changed, diff, expand)

    def set_file_attributes_if_different(self, file_args, changed, diff=None, expand=True):
        return self.set_fs_attributes_if_different(file_args, changed, diff, expand)

    def add_path_info(self, kwargs):
        '''
        for results that are files, supplement the info about the file
        in the return path with stats about the file path.
        '''
        path = kwargs.get('path', kwargs.get('dest', None))
        if path is None:
            return kwargs
        b_path = to_bytes(path, errors='surrogate_or_strict')
        if os.path.exists(b_path):
            (uid, gid) = self.user_and_group(path)
            kwargs['uid'] = uid
            kwargs['gid'] = gid
            try:
                user = pwd.getpwuid(uid)[0]
            except KeyError:
                user = str(uid)
            try:
                group = grp.getgrgid(gid)[0]
            except KeyError:
                group = str(gid)
            kwargs['owner'] = user
            kwargs['group'] = group
            st = os.lstat(b_path)
            kwargs['mode'] = '0%03o' % stat.S_IMODE(st[stat.ST_MODE])
            if os.path.islink(b_path):
                kwargs['state'] = 'link'
            elif os.path.isdir(b_path):
                kwargs['state'] = 'directory'
            elif os.stat(b_path).st_nlink > 1:
                kwargs['state'] = 'hard'
            else:
                kwargs['state'] = 'file'
            if self.selinux_enabled():
                kwargs['secontext'] = ':'.join(self.selinux_context(path))
            kwargs['size'] = st[stat.ST_SIZE]
        return kwargs

    def _check_locale(self):
        '''
        Uses the locale module to test the currently set locale
        (per the LANG and LC_CTYPE environment settings)
        '''
        try:
            locale.setlocale(locale.LC_ALL, '')
        except locale.Error:
            best_locale = get_best_parsable_locale(self)

            locale.setlocale(locale.LC_ALL, best_locale)
            os.environ['LANG'] = best_locale
            os.environ['LC_ALL'] = best_locale
            os.environ['LC_MESSAGES'] = best_locale
        except Exception as e:
            self.fail_json(msg="An unknown error was encountered while attempting to validate the locale: %s" %
                               to_native(e), exception=traceback.format_exc())

    def _set_internal_properties(self, argument_spec=None, module_parameters=None):
        if argument_spec is None:
            argument_spec = self.argument_spec
        if module_parameters is None:
            module_parameters = self.params

        for k in PASS_VARS:
            param_key = '_ansible_%s' % k
            if param_key in module_parameters:
                if k in PASS_BOOLS:
                    setattr(self, PASS_VARS[k][0], self.boolean(module_parameters[param_key]))
                else:
                    setattr(self, PASS_VARS[k][0], module_parameters[param_key])

                if param_key in self.params:
                    del self.params[param_key]
            else:
                if not hasattr(self, PASS_VARS[k][0]):
                    setattr(self, PASS_VARS[k][0], PASS_VARS[k][1])

    def safe_eval(self, value, locals=None, include_exceptions=False):
        return safe_eval(value, locals, include_exceptions)

    def _load_params(self):
        ''' read the input and set the params attribute.

        This method is for backwards compatibility. The guts of the function
        were moved out in 2.1 so that custom modules could read the parameters.
        '''
        self.params = _load_params()

    def _log_to_syslog(self, msg):
        if HAS_SYSLOG:
            try:
                module = 'ansible-%s' % self._name
                facility = getattr(syslog, self._syslog_facility, syslog.LOG_USER)
                syslog.openlog(str(module), 0, facility)
                syslog.syslog(syslog.LOG_INFO, msg)
            except (TypeError, ValueError) as e:
                self.fail_json(
                    msg='Failed to log to syslog (%s). To proceed anyway, '
                        'disable syslog logging by setting no_target_syslog '
                        'to True in your Ansible config.' % to_native(e),
                    exception=traceback.format_exc(),
                    msg_to_log=msg,
                )

    def debug(self, msg):
        if self._debug:
            self.log('[debug] %s' % msg)

    def log(self, msg, log_args=None):
        if not self.no_log:
            if log_args is None:
                log_args = dict()

            module = 'ansible-%s' % self._name
            if isinstance(module, binary_type):
                module = module.decode('utf-8', 'replace')

            if not isinstance(msg, (binary_type, text_type)):
                raise TypeError("msg should be a string (got %s)" % type(msg))

            if isinstance(msg, binary_type):
                journal_msg = remove_values(msg.decode('utf-8', 'replace'), self.no_log_values)
            else:
                journal_msg = remove_values(msg, self.no_log_values)

            if PY3:
                syslog_msg = journal_msg
            else:
                syslog_msg = journal_msg.encode('utf-8', 'replace')

            if has_journal:
                journal_args = [("MODULE", os.path.basename(__file__))]
|
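As a hedged usage sketch (module and field names are illustrative), extra structured fields can be passed to `log()` via `log_args`; on targets with the systemd journal bindings they become journald fields, otherwise only the plain syslog message is emitted.

```python
#!/usr/bin/python
# Hedged sketch: passing structured fields to AnsibleModule.log(). On hosts
# with the systemd journal bindings the extra fields become journald fields;
# elsewhere only the plain syslog message is kept.
from ansible.module_utils.basic import AnsibleModule


def main():
    module = AnsibleModule(argument_spec=dict(path=dict(type='path', required=True)))

    module.log(
        'checking %s' % module.params['path'],
        log_args=dict(component='example_module', phase='precheck'),
    )

    module.exit_json(changed=False)


if __name__ == '__main__':
    main()
```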
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,710 |
Configurable sampling/transfer of control-side task context metadata to targets
|
### Summary
We're often asked how to include arbitrary control-side contextual metadata with task invocations, and to include that metadata in target-side task log messages. e.g.: sending an AWX/Controller Job ID to the target hosts on each module invocation that occurred from that job, and logging it in the module-generated syslog/Windows Application Event Log entries for future correlation with the owning job.
I've not seen any consensus on precisely *which* data to include; one person's "critical forensic correlation data" is another's "unacceptable disclosure of sensitive execution detail". Seems like we'd need a generic facility to specify environment vars and/or hostvars to sample on the control host to be included with task invocations (under a reserved dictionary arg), and adjust the module logging APIs to include them.
My initial thought is to define a new core config element (defaulting to none) that allows the user to define a templated expression that would be rendered as part of each task's templating under a host context. The rendered result would be sent to modules as a new reserved internal module var. The module logging APIs would then include this value verbatim, when present. Other module code would also have access to the value, which could be used for anything. The new config would be settable either via ansible.cfg or an envvar, making it easier for AWX/Controller to later provide a mechanism to configure it for jobs using core versions that support it, while older versions would just silently ignore it.
Maybe something like:
```
ANSIBLE_ADDITIONAL_TASK_CONTEXT='{{awx_job_id}}'
```
When this config is non-empty, the defined template would be rendered for each task/host invocation, and its result included in a new `_ansible_additional_task_context` reserved module var. The resulting value, as with any Ansible template expression, could be of arbitrary complexity (eg, returning a data structure instead of just a scalar). The module logging APIs would include the serialized value verbatim in log messages when it is present, eg "ansible_additional_task_context=(whatever the value was)".
### Issue Type
Feature Idea
### Component Name
module invocation and logging
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81710
|
https://github.com/ansible/ansible/pull/81711
|
4208bdbbcd994251579409ad533b40c9b0543550
|
1dd0d6fad70d7d4f423dac41822da65ff9ec95ef
| 2023-09-18T16:35:01Z |
python
| 2023-11-30T18:12:55Z |
lib/ansible/module_utils/basic.py
|
                for arg in log_args:
                    name, value = (arg.upper(), str(log_args[arg]))
                    if name in (
                        'PRIORITY', 'MESSAGE', 'MESSAGE_ID',
                        'CODE_FILE', 'CODE_LINE', 'CODE_FUNC',
                        'SYSLOG_FACILITY', 'SYSLOG_IDENTIFIER',
                        'SYSLOG_PID',
                    ):
                        name = "_%s" % name
                    journal_args.append((name, value))

                try:
                    if HAS_SYSLOG:
                        facility = getattr(syslog,
                                           self._syslog_facility,
                                           syslog.LOG_USER) >> 3
                        journal.send(MESSAGE=u"%s %s" % (module, journal_msg),
                                     SYSLOG_FACILITY=facility,
                                     **dict(journal_args))
                    else:
                        journal.send(MESSAGE=u"%s %s" % (module, journal_msg),
                                     **dict(journal_args))
                except IOError:
                    self._log_to_syslog(syslog_msg)
            else:
                self._log_to_syslog(syslog_msg)

    def _log_invocation(self):
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,710 |
Configurable sampling/transfer of control-side task context metadata to targets
|
### Summary
We're often asked how to include arbitrary control-side contextual metadata with task invocations, and to include that metadata in target-side task log messages. e.g.: sending an AWX/Controller Job ID to the target hosts on each module invocation that occurred from that job, and logging it in the module-generated syslog/Windows Application Event Log entries for future correlation with the owning job.
I've not seen any consensus on precisely *which* data to include; one person's "critical forensic correlation data" is another's "unacceptable disclosure of sensitive execution detail". Seems like we'd need a generic facility to specify environment vars and/or hostvars to sample on the control host to be included with task invocations (under a reserved dictionary arg), and adjust the module logging APIs to include them.
My initial thought is to define a new core config element (defaulting to none) that allows the user to define a templated expression that would be rendered as part of each task's templating under a host context. The rendered result would be sent to modules as a new reserved internal module var. The module logging APIs would then include this value verbatim, when present. Other module code would also have access to the value, which could be used for anything. The new config would be settable either via ansible.cfg or an envvar, making it easier for AWX/Controller to later provide a mechanism to configure it for jobs using core versions that support it, while older versions would just silently ignore it.
Maybe something like:
```
ANSIBLE_ADDITIONAL_TASK_CONTEXT='{{awx_job_id}}'
```
When this config is non-empty, the defined template would be rendered for each task/host invocation, and its result included in a new `_ansible_additional_task_context` reserved module var. The resulting value, as with any Ansible template expression, could be of arbitrary complexity (eg, returning a data structure instead of just a scalar). The module logging APIs would include the serialized value verbatim in log messages when it is present, eg "ansible_additional_task_context=(whatever the value was)".
### Issue Type
Feature Idea
### Component Name
module invocation and logging
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81710
|
https://github.com/ansible/ansible/pull/81711
|
4208bdbbcd994251579409ad533b40c9b0543550
|
1dd0d6fad70d7d4f423dac41822da65ff9ec95ef
| 2023-09-18T16:35:01Z |
python
| 2023-11-30T18:12:55Z |
lib/ansible/module_utils/basic.py
|
        ''' log that ansible ran the module '''
        log_args = dict()

        for param in self.params:
            canon = self.aliases.get(param, param)
            arg_opts = self.argument_spec.get(canon, {})
            no_log = arg_opts.get('no_log', None)

            if no_log is None and PASSWORD_MATCH.search(param):
                log_args[param] = 'NOT_LOGGING_PASSWORD'
                self.warn('Module did not set no_log for %s' % param)
            elif self.boolean(no_log):
                log_args[param] = 'NOT_LOGGING_PARAMETER'
            else:
                param_val = self.params[param]
                if not isinstance(param_val, (text_type, binary_type)):
                    param_val = str(param_val)
                elif isinstance(param_val, text_type):
                    param_val = param_val.encode('utf-8')
                log_args[param] = heuristic_log_sanitize(param_val, self.no_log_values)

        msg = ['%s=%s' % (to_native(arg), to_native(val)) for arg, val in log_args.items()]
        if msg:
            msg = 'Invoked with %s' % ' '.join(msg)
        else:
            msg = 'Invoked'

        self.log(msg, log_args=log_args)

    def _set_cwd(self):
|
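The masking above is driven by the argument spec. A minimal sketch (parameter names are illustrative) of declaring a sensitive option with `no_log=True` so `_log_invocation()` records `NOT_LOGGING_PARAMETER` instead of the value:

```python
#!/usr/bin/python
# Hedged sketch: a no_log parameter. _log_invocation() logs its value as
# NOT_LOGGING_PARAMETER, and the value is also scrubbed from results.
from ansible.module_utils.basic import AnsibleModule


def main():
    module = AnsibleModule(
        argument_spec=dict(
            username=dict(type='str', required=True),
            api_token=dict(type='str', required=True, no_log=True),
        ),
    )
    module.exit_json(changed=False, user=module.params['username'])


if __name__ == '__main__':
    main()
```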
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,710 |
Configurable sampling/transfer of control-side task context metadata to targets
|
### Summary
We're often asked how to include arbitrary control-side contextual metadata with task invocations, and to include that metadata in target-side task log messages. e.g.: sending an AWX/Controller Job ID to the target hosts on each module invocation that occurred from that job, and logging it in the module-generated syslog/Windows Application Event Log entries for future correlation with the owning job.
I've not seen any consensus on precisely *which* data to include; one person's "critical forensic correlation data" is another's "unacceptable disclosure of sensitive execution detail". Seems like we'd need a generic facility to specify environment vars and/or hostvars to sample on the control host to be included with task invocations (under a reserved dictionary arg), and adjust the module logging APIs to include them.
My initial thought is to define a new core config element (defaulting to none) that allows the user to define a templated expression that would be rendered as part of each task's templating under a host context. The rendered result would be sent to modules as a new reserved internal module var. The module logging APIs would then include this value verbatim, when present. Other module code would also have access to the value, which could be used for anything. The new config would be settable either via ansible.cfg or an envvar, making it easier for AWX/Controller to later provide a mechanism to configure it for jobs using core versions that support it, while older versions would just silently ignore it.
Maybe something like:
```
ANSIBLE_ADDITIONAL_TASK_CONTEXT='{{awx_job_id}}'
```
When this config is non-empty, the defined template would be rendered for each task/host invocation, and its result included in a new `_ansible_additional_task_context` reserved module var. The resulting value, as with any Ansible template expression, could be of arbitrary complexity (eg, returning a data structure instead of just a scalar). The module logging APIs would include the serialized value verbatim in log messages when it is present, eg "ansible_additional_task_context=(whatever the value was)".
### Issue Type
Feature Idea
### Component Name
module invocation and logging
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81710
|
https://github.com/ansible/ansible/pull/81711
|
4208bdbbcd994251579409ad533b40c9b0543550
|
1dd0d6fad70d7d4f423dac41822da65ff9ec95ef
| 2023-09-18T16:35:01Z |
python
| 2023-11-30T18:12:55Z |
lib/ansible/module_utils/basic.py
|
        try:
            cwd = os.getcwd()
            if not os.access(cwd, os.F_OK | os.R_OK):
                raise Exception()
            return cwd
        except Exception:
            for cwd in [self.tmpdir, os.path.expandvars('$HOME'), tempfile.gettempdir()]:
                try:
                    if os.access(cwd, os.F_OK | os.R_OK):
                        os.chdir(cwd)
                        return cwd
                except Exception:
                    pass
        return None

    def get_bin_path(self, arg, required=False, opt_dirs=None):
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,710 |
Configurable sampling/transfer of control-side task context metadata to targets
|
### Summary
We're often asked how to include arbitrary control-side contextual metadata with task invocations, and to include that metadata in target-side task log messages. e.g.: sending an AWX/Controller Job ID to the target hosts on each module invocation that occurred from that job, and logging it in the module-generated syslog/Windows Application Event Log entries for future correlation with the owning job.
I've not seen any consensus on precisely *which* data to include; one person's "critical forensic correlation data" is another's "unacceptable disclosure of sensitive execution detail". Seems like we'd need a generic facility to specify environment vars and/or hostvars to sample on the control host to be included with task invocations (under a reserved dictionary arg), and adjust the module logging APIs to include them.
My initial thought is to define a new core config element (defaulting to none) that allows the user to define a templated expression that would be rendered as part of each task's templating under a host context. The rendered result would be sent to modules as a new reserved internal module var. The module logging APIs would then include this value verbatim, when present. Other module code would also have access to the value, which could be used for anything. The new config would be settable either via ansible.cfg or an envvar, making it easier for AWX/Controller to later provide a mechanism to configure it for jobs using core versions that support it, while older versions would just silently ignore it.
Maybe something like:
```
ANSIBLE_ADDITIONAL_TASK_CONTEXT='{{awx_job_id}}'
```
When this config is non-empty, the defined template would be rendered for each task/host invocation, and its result included in a new `_ansible_additional_task_context` reserved module var. The resulting value, as with any Ansible template expression, could be of arbitrary complexity (eg, returning a data structure instead of just a scalar). The module logging APIs would include the serialized value verbatim in log messages when it is present, eg "ansible_additional_task_context=(whatever the value was)".
### Issue Type
Feature Idea
### Component Name
module invocation and logging
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81710
|
https://github.com/ansible/ansible/pull/81711
|
4208bdbbcd994251579409ad533b40c9b0543550
|
1dd0d6fad70d7d4f423dac41822da65ff9ec95ef
| 2023-09-18T16:35:01Z |
python
| 2023-11-30T18:12:55Z |
lib/ansible/module_utils/basic.py
|
        '''
        Find system executable in PATH.

        :param arg: The executable to find.
        :param required: if executable is not found and required is ``True``, fail_json
        :param opt_dirs: optional list of directories to search in addition to ``PATH``
        :returns: if found return full path; otherwise return None
        '''
        bin_path = None
        try:
            bin_path = get_bin_path(arg=arg, opt_dirs=opt_dirs)
        except ValueError as e:
            if required:
                self.fail_json(msg=to_text(e))
            else:
                return bin_path

        return bin_path

    def boolean(self, arg):
|
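A typical usage sketch (binary and option names are illustrative): resolve an executable with `get_bin_path()`, failing the task if it is missing, and then invoke it with `run_command()`.

```python
#!/usr/bin/python
# Hedged sketch: resolving an executable with get_bin_path() and running it.
# With required=True a missing binary fails the task instead of returning None.
from ansible.module_utils.basic import AnsibleModule


def main():
    module = AnsibleModule(argument_spec=dict(), supports_check_mode=True)

    git = module.get_bin_path('git', required=True, opt_dirs=['/usr/local/bin'])
    rc, out, err = module.run_command([git, '--version'])
    if rc != 0:
        module.fail_json(msg='git --version failed', rc=rc, stderr=err)

    module.exit_json(changed=False, version=out.strip())


if __name__ == '__main__':
    main()
```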
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,710 |
Configurable sampling/transfer of control-side task context metadata to targets
|
### Summary
We're often asked how to include arbitrary control-side contextual metadata with task invocations, and to include that metadata in target-side task log messages. e.g.: sending an AWX/Controller Job ID to the target hosts on each module invocation that occurred from that job, and logging it in the module-generated syslog/Windows Application Event Log entries for future correlation with the owning job.
I've not seen any consensus on precisely *which* data to include; one person's "critical forensic correlation data" is another's "unacceptable disclosure of sensitive execution detail". Seems like we'd need a generic facility to specify environment vars and/or hostvars to sample on the control host to be included with task invocations (under a reserved dictionary arg), and adjust the module logging APIs to include them.
My initial thought is to define a new core config element (defaulting to none) that allows the user to define a templated expression that would be rendered as part of each task's templating under a host context. The rendered result would be sent to modules as a new reserved internal module var. The module logging APIs would then include this value verbatim, when present. Other module code would also have access to the value, which could be used for anything. The new config would be settable either via ansible.cfg or an envvar, making it easier for AWX/Controller to later provide a mechanism to configure it for jobs using core versions that support it, while older versions would just silently ignore it.
Maybe something like:
```
ANSIBLE_ADDITIONAL_TASK_CONTEXT='{{awx_job_id}}'
```
When this config is non-empty, the defined template would be rendered for each task/host invocation, and its result included in a new `_ansible_additional_task_context` reserved module var. The resulting value, as with any Ansible template expression, could be of arbitrary complexity (eg, returning a data structure instead of just a scalar). The module logging APIs would include the serialized value verbatim in log messages when it is present, eg "ansible_additional_task_context=(whatever the value was)".
### Issue Type
Feature Idea
### Component Name
module invocation and logging
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81710
|
https://github.com/ansible/ansible/pull/81711
|
4208bdbbcd994251579409ad533b40c9b0543550
|
1dd0d6fad70d7d4f423dac41822da65ff9ec95ef
| 2023-09-18T16:35:01Z |
python
| 2023-11-30T18:12:55Z |
lib/ansible/module_utils/basic.py
|
        '''Convert the argument to a boolean'''
        if arg is None:
            return arg

        try:
            return boolean(arg)
        except TypeError as e:
            self.fail_json(msg=to_native(e))

    def jsonify(self, data):
        try:
            return jsonify(data)
        except UnicodeError as e:
            self.fail_json(msg=to_text(e))

    def from_json(self, data):
        return json.loads(data)

    def add_cleanup_file(self, path):
        if path not in self.cleanup_files:
            self.cleanup_files.append(path)

    def do_cleanup_files(self):
        for path in self.cleanup_files:
            self.cleanup(path)

    def _return_formatted(self, kwargs):
        self.add_path_info(kwargs)

        if 'invocation' not in kwargs:
            kwargs['invocation'] = {'module_args': self.params}

        if 'warnings' in kwargs:
            if isinstance(kwargs['warnings'], list):
                for w in kwargs['warnings']:
                    self.warn(w)
            else:
                self.warn(kwargs['warnings'])
|
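Warnings can be raised during execution with `warn()` or handed back under a `warnings` key in the result; `_return_formatted()` above folds both into the final payload. A small illustrative sketch (option and messages are invented):

```python
#!/usr/bin/python
# Hedged sketch: surfacing warnings. Both warn() calls and a 'warnings' key
# passed to exit_json() end up in the result that _return_formatted() prints.
from ansible.module_utils.basic import AnsibleModule


def main():
    module = AnsibleModule(argument_spec=dict(legacy_mode=dict(type='bool', default=False)))

    if module.params['legacy_mode']:
        module.warn('legacy_mode relies on behaviour that may be removed')

    module.exit_json(changed=False, warnings=['result computed with default settings'])


if __name__ == '__main__':
    main()
```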
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,710 |
Configurable sampling/transfer of control-side task context metadata to targets
|
### Summary
We're often asked how to include arbitrary control-side contextual metadata with task invocations, and to include that metadata in target-side task log messages. e.g.: sending an AWX/Controller Job ID to the target hosts on each module invocation that occurred from that job, and logging it in the module-generated syslog/Windows Application Event Log entries for future correlation with the owning job.
I've not seen any consensus on precisely *which* data to include; one person's "critical forensic correlation data" is another's "unacceptable disclosure of sensitive execution detail". Seems like we'd need a generic facility to specify environment vars and/or hostvars to sample on the control host to be included with task invocations (under a reserved dictionary arg), and adjust the module logging APIs to include them.
My initial thought is to define a new core config element (defaulting to none) that allows the user to define a templated expression that would be rendered as part of each task's templating under a host context. The rendered result would be sent to modules as a new reserved internal module var. The module logging APIs would then include this value verbatim, when present. Other module code would also have access to the value, which could be used for anything. The new config would be settable either via ansible.cfg or an envvar, making it easier for AWX/Controller to later provide a mechanism to configure it for jobs using core versions that support it, while older versions would just silently ignore it.
Maybe something like:
```
ANSIBLE_ADDITIONAL_TASK_CONTEXT='{{awx_job_id}}'
```
When this config is non-empty, the defined template would be rendered for each task/host invocation, and its result included in a new `_ansible_additional_task_context` reserved module var. The resulting value, as with any Ansible template expression, could be of arbitrary complexity (eg, returning a data structure instead of just a scalar). The module logging APIs would include the serialized value verbatim in log messages when it is present, eg "ansible_additional_task_context=(whatever the value was)".
### Issue Type
Feature Idea
### Component Name
module invocation and logging
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81710
|
https://github.com/ansible/ansible/pull/81711
|
4208bdbbcd994251579409ad533b40c9b0543550
|
1dd0d6fad70d7d4f423dac41822da65ff9ec95ef
| 2023-09-18T16:35:01Z |
python
| 2023-11-30T18:12:55Z |
lib/ansible/module_utils/basic.py
|
        warnings = get_warning_messages()
        if warnings:
            kwargs['warnings'] = warnings

        if 'deprecations' in kwargs:
            if isinstance(kwargs['deprecations'], list):
                for d in kwargs['deprecations']:
                    if isinstance(d, SEQUENCETYPE) and len(d) == 2:
                        self.deprecate(d[0], version=d[1])
                    elif isinstance(d, Mapping):
                        self.deprecate(d['msg'], version=d.get('version'), date=d.get('date'),
                                       collection_name=d.get('collection_name'))
                    else:
                        self.deprecate(d)
            else:
                self.deprecate(kwargs['deprecations'])

        deprecations = get_deprecation_messages()
        if deprecations:
            kwargs['deprecations'] = deprecations

        preserved = {}
        for k, v in kwargs.items():
            if v is None or isinstance(v, bool):
                preserved[k] = v

        kwargs = remove_values(kwargs, self.no_log_values)
        kwargs.update(preserved)

        print('\n%s' % self.jsonify(kwargs))

    def exit_json(self, **kwargs):
|
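Deprecations follow the same pattern; a sketch of the `deprecate()` call that the mapping branch above ultimately serves (the option, version, and collection name are illustrative):

```python
#!/usr/bin/python
# Hedged sketch: recording a deprecation from module code. The message is
# collected by get_deprecation_messages() and attached to the result under
# 'deprecations'.
from ansible.module_utils.basic import AnsibleModule


def main():
    module = AnsibleModule(argument_spec=dict(old_style=dict(type='bool', default=False)))

    if module.params['old_style']:
        module.deprecate(
            'old_style is deprecated, use new_style instead',
            version='9.0.0',
            collection_name='example.collection',
        )

    module.exit_json(changed=False)


if __name__ == '__main__':
    main()
```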
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,710 |
Configurable sampling/transfer of control-side task context metadata to targets
|
### Summary
We're often asked how to include arbitrary control-side contextual metadata with task invocations, and to include that metadata in target-side task log messages. e.g.: sending an AWX/Controller Job ID to the target hosts on each module invocation that occurred from that job, and logging it in the module-generated syslog/Windows Application Event Log entries for future correlation with the owning job.
I've not seen any consensus on precisely *which* data to include; one person's "critical forensic correlation data" is another's "unacceptable disclosure of sensitive execution detail". Seems like we'd need a generic facility to specify environment vars and/or hostvars to sample on the control host to be included with task invocations (under a reserved dictionary arg), and adjust the module logging APIs to include them.
My initial thought is to define a new core config element (defaulting to none) that allows the user to define a templated expression that would be rendered as part of each task's templating under a host context. The rendered result would be sent to modules as a new reserved internal module var. The module logging APIs would then include this value verbatim, when present. Other module code would also have access to the value, which could be used for anything. The new config would be settable either via ansible.cfg or an envvar, making it easier for AWX/Controller to later provide a mechanism to configure it for jobs using core versions that support it, while older versions would just silently ignore it.
Maybe something like:
```
ANSIBLE_ADDITIONAL_TASK_CONTEXT='{{awx_job_id}}'
```
When this config is non-empty, the defined template would be rendered for each task/host invocation, and its result included in a new `_ansible_additional_task_context` reserved module var. The resulting value, as with any Ansible template expression, could be of arbitrary complexity (eg, returning a data structure instead of just a scalar). The module logging APIs would include the serialized value verbatim in log messages when it is present, eg "ansible_additional_task_context=(whatever the value was)".
### Issue Type
Feature Idea
### Component Name
module invocation and logging
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81710
|
https://github.com/ansible/ansible/pull/81711
|
4208bdbbcd994251579409ad533b40c9b0543550
|
1dd0d6fad70d7d4f423dac41822da65ff9ec95ef
| 2023-09-18T16:35:01Z |
python
| 2023-11-30T18:12:55Z |
lib/ansible/module_utils/basic.py
|
        ''' return from the module, without error '''
        self.do_cleanup_files()
        self._return_formatted(kwargs)
        sys.exit(0)

    def fail_json(self, msg, **kwargs):
        ''' return from the module, with an error message '''
        kwargs['failed'] = True
        kwargs['msg'] = msg

        if 'exception' not in kwargs and sys.exc_info()[2] and (self._debug or self._verbosity >= 3):
            if PY2:
                kwargs['exception'] = 'WARNING: The below traceback may *not* be related to the actual failure.\n' + \
                    ''.join(traceback.format_tb(sys.exc_info()[2]))
            else:
                kwargs['exception'] = ''.join(traceback.format_tb(sys.exc_info()[2]))

        self.do_cleanup_files()
        self._return_formatted(kwargs)
        sys.exit(1)

    def fail_on_missing_params(self, required_params=None):
        if not required_params:
            return
        try:
            check_missing_parameters(self.params, required_params)
        except TypeError as e:
            self.fail_json(msg=to_native(e))

    def digest_from_file(self, filename, algorithm):
|
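A compact sketch of the usual exit paths built on these helpers (the parameters and validation rule are illustrative): `exit_json()` terminates the process with rc 0, `fail_json()` with rc 1, and `fail_on_missing_params()` wraps `fail_json()` for late validation.

```python
#!/usr/bin/python
# Hedged sketch: the usual exit paths of a module. exit_json() ends the
# process with rc 0, fail_json() with rc 1; fail_on_missing_params() is a
# convenience wrapper around fail_json() for late validation.
from ansible.module_utils.basic import AnsibleModule


def main():
    module = AnsibleModule(
        argument_spec=dict(
            state=dict(type='str', choices=['present', 'absent'], default='present'),
            name=dict(type='str'),
        ),
    )

    if module.params['state'] == 'present':
        # name is only mandatory when creating something.
        module.fail_on_missing_params(['name'])

    module.exit_json(changed=False, name=module.params['name'], state=module.params['state'])


if __name__ == '__main__':
    main()
```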
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,710 |
Configurable sampling/transfer of control-side task context metadata to targets
|
### Summary
We're often asked how to include arbitrary control-side contextual metadata with task invocations, and to include that metadata in target-side task log messages. e.g.: sending an AWX/Controller Job ID to the target hosts on each module invocation that occurred from that job, and logging it in the module-generated syslog/Windows Application Event Log entries for future correlation with the owning job.
I've not seen any consensus on precisely *which* data to include; one person's "critical forensic correlation data" is another's "unacceptable disclosure of sensitive execution detail". Seems like we'd need a generic facility to specify environment vars and/or hostvars to sample on the control host to be included with task invocations (under a reserved dictionary arg), and adjust the module logging APIs to include them.
My initial thought is to define a new core config element (defaulting to none) that allows the user to define a templated expression that would be rendered as part of each task's templating under a host context. The rendered result would be sent to modules as a new reserved internal module var. The module logging APIs would then include this value verbatim, when present. Other module code would also have access to the value, which could be used for anything. The new config would be settable either via ansible.cfg or an envvar, making it easier for AWX/Controller to later provide a mechanism to configure it for jobs using core versions that support it, while older versions would just silently ignore it.
Maybe something like:
```
ANSIBLE_ADDITIONAL_TASK_CONTEXT='{{awx_job_id}}'
```
When this config is non-empty, the defined template would be rendered for each task/host invocation, and its result included in a new `_ansible_additional_task_context` reserved module var. The resulting value, as with any Ansible template expression, could be of arbitrary complexity (eg, returning a data structure instead of just a scalar). The module logging APIs would include the serialized value verbatim in log messages when it is present, eg "ansible_additional_task_context=(whatever the value was)".
### Issue Type
Feature Idea
### Component Name
module invocation and logging
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81710
|
https://github.com/ansible/ansible/pull/81711
|
4208bdbbcd994251579409ad533b40c9b0543550
|
1dd0d6fad70d7d4f423dac41822da65ff9ec95ef
| 2023-09-18T16:35:01Z |
python
| 2023-11-30T18:12:55Z |
lib/ansible/module_utils/basic.py
|
        ''' Return hex digest of local file for a digest_method specified by name, or None if file is not present. '''
        b_filename = to_bytes(filename, errors='surrogate_or_strict')

        if not os.path.exists(b_filename):
            return None
        if os.path.isdir(b_filename):
            self.fail_json(msg="attempted to take checksum of directory: %s" % filename)

        if hasattr(algorithm, 'hexdigest'):
            digest_method = algorithm
        else:
            try:
                digest_method = AVAILABLE_HASH_ALGORITHMS[algorithm]()
            except KeyError:
                self.fail_json(msg="Could not hash file '%s' with algorithm '%s'. Available algorithms: %s" %
                                   (filename, algorithm, ', '.join(AVAILABLE_HASH_ALGORITHMS)))

        blocksize = 64 * 1024
        infile = open(os.path.realpath(b_filename), 'rb')
        block = infile.read(blocksize)
        while block:
            digest_method.update(block)
            block = infile.read(blocksize)
        infile.close()
        return digest_method.hexdigest()

    def md5(self, filename):
|
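A usage sketch for the digest helper above (the parameter name is illustrative); `digest_from_file()` returns `None` for a missing path, which lets a module report that cleanly instead of failing.

```python
#!/usr/bin/python
# Hedged sketch: hashing a file with digest_from_file(). The helper returns
# None when the path does not exist, so the module can report that cleanly.
from ansible.module_utils.basic import AnsibleModule


def main():
    module = AnsibleModule(argument_spec=dict(path=dict(type='path', required=True)))

    checksum = module.digest_from_file(module.params['path'], 'sha256')
    if checksum is None:
        module.exit_json(changed=False, exists=False)

    module.exit_json(changed=False, exists=True, sha256=checksum)


if __name__ == '__main__':
    main()
```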