Dataset schema:

| column | type | range / values |
|---|---|---|
| status | stringclasses | 1 value |
| repo_name | stringclasses | 31 values |
| repo_url | stringclasses | 31 values |
| issue_id | int64 | 1 - 104k |
| title | stringlengths | 4 - 369 |
| body | stringlengths | 0 - 254k, nullable (⌀) |
| issue_url | stringlengths | 37 - 56 |
| pull_url | stringlengths | 37 - 54 |
| before_fix_sha | stringlengths | 40 - 40 |
| after_fix_sha | stringlengths | 40 - 40 |
| report_datetime | timestamp[us, tz=UTC] | |
| language | stringclasses | 5 values |
| commit_datetime | timestamp[us, tz=UTC] | |
| updated_file | stringlengths | 4 - 188 |
| file_content | stringlengths | 0 - 5.12M |
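Assuming this dump follows the Hugging Face `datasets` layout the schema above suggests, a record can be loaded and inspected along these lines (the dataset path is a placeholder, not the real name):

```python
# Minimal sketch: load one record with the schema above and print its metadata.
from datasets import load_dataset

ds = load_dataset("org/bugfix-corpus", split="train")  # placeholder path
row = ds[0]
print(row["repo_name"], row["issue_id"], row["title"])
print(row["updated_file"])        # file touched by the linked fix
print(row["file_content"][:200])  # start of that file's snapshot
```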
---
status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 65265
title: elb_target_group and elb_network_lb should support UDP
body:
##### SUMMARY
elb_network_lb and elb_target_group should support UDP; currently only HTTP, HTTPS, and TCP are supported by Ansible.
Based on the pages below, UDP and TCP_UDP are supported protocols:
- https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-listener.html
- https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-target-group.html
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
- elb_target_group
- elb_network_lb
##### ADDITIONAL INFORMATION
It would be useful to use UDP as a protocol for an NLB; additionally, attached target groups would need to support UDP.
TCP_UDP is another protocol that would be useful to support (a single port that takes either TCP or UDP traffic and doesn't care which).
```yaml
- name: Create logstash target group
elb_target_group:
name: logstash_bunyan_production_tg
protocol: udp
port: 5600
vpc_id: "{{ vpc_facts.vpcs[0].id }}"
state: present
- name: Create nlb
elb_network_lb:
name: logstash_nlb
subnets: "{{ private_subnets + public_subnets }}"
health_check_path: "/"
health_check_port: "9600"
health_check_protocol: "http"
protocol: "udp"
port: "5600"
listeners:
- Protocol: UDP
Port: 5600
DefaultActions:
- Type: forward
TargetGroupName: logstash_bunyan_production_tg
state: present
```
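For the TCP_UDP case described above, the listener entry would be similar (hypothetical sketch reusing the names from the playbook above):

```yaml
- name: Create nlb with a TCP_UDP listener
  elb_network_lb:
    name: logstash_nlb
    subnets: "{{ private_subnets + public_subnets }}"
    listeners:
      - Protocol: TCP_UDP  # a single port accepting both TCP and UDP
        Port: 5600
        DefaultActions:
          - Type: forward
            TargetGroupName: logstash_bunyan_production_tg
    state: present
```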
issue_url: https://github.com/ansible/ansible/issues/65265
pull_url: https://github.com/ansible/ansible/pull/65828
before_fix_sha: 32a8b620f399fdc9698bd31ea1e619b2eb72b666
after_fix_sha: 7bb925489e7338975973cdcd9aacfe2870053b09
report_datetime: 2019-11-25T21:21:48Z
language: python
commit_datetime: 2019-12-19T22:06:16Z
updated_file: lib/ansible/modules/cloud/amazon/elb_network_lb.py
file_content:
#!/usr/bin/python
# Copyright: (c) 2018, Rob White (@wimnat)
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: elb_network_lb
short_description: Manage a Network Load Balancer
description:
- Manage an AWS Network Elastic Load Balancer. See
U(https://aws.amazon.com/blogs/aws/new-network-load-balancer-effortless-scaling-to-millions-of-requests-per-second/) for details.
version_added: "2.6"
requirements: [ boto3 ]
author: "Rob White (@wimnat)"
options:
cross_zone_load_balancing:
description:
- Indicates whether cross-zone load balancing is enabled.
default: false
type: bool
deletion_protection:
description:
- Indicates whether deletion protection for the ELB is enabled.
default: false
type: bool
listeners:
description:
- A list of dicts containing listeners to attach to the ELB. See examples for detail of the dict required. Note that listener keys
are CamelCased.
type: list
elements: dict
suboptions:
Port:
description: The port on which the load balancer is listening.
type: int
required: true
Protocol:
description: The protocol for connections from clients to the load balancer.
type: str
required: true
Certificates:
description: The SSL server certificate.
type: list
elements: dict
suboptions:
CertificateArn:
description: The Amazon Resource Name (ARN) of the certificate.
type: str
SslPolicy:
description: The security policy that defines which ciphers and protocols are supported.
type: str
DefaultActions:
description: The default actions for the listener.
required: true
type: list
elements: dict
suboptions:
Type:
description: The type of action.
type: str
TargetGroupArn:
description: The Amazon Resource Name (ARN) of the target group.
type: str
name:
description:
- The name of the load balancer. This name must be unique within your AWS account, can have a maximum of 32 characters, must contain only alphanumeric
characters or hyphens, and must not begin or end with a hyphen.
required: true
type: str
purge_listeners:
description:
- If I(purge_listeners=true), existing listeners will be purged from the ELB to match exactly what is defined by I(listeners) parameter.
- If the I(listeners) parameter is not set then listeners will not be modified.
default: true
type: bool
purge_tags:
description:
- If I(purge_tags=true), existing tags will be purged from the resource to match exactly what is defined by I(tags) parameter.
- If the I(tags) parameter is not set then tags will not be modified.
default: true
type: bool
subnet_mappings:
description:
- A list of dicts containing the IDs of the subnets to attach to the load balancer. You can also specify the allocation ID of an Elastic IP
to attach to the load balancer. You can specify one Elastic IP address per subnet.
- This parameter is mutually exclusive with I(subnets).
type: list
elements: dict
subnets:
description:
- A list of the IDs of the subnets to attach to the load balancer. You can specify only one subnet per Availability Zone. You must specify subnets from
at least two Availability Zones.
- Required when I(state=present).
- This parameter is mutually exclusive with I(subnet_mappings).
type: list
scheme:
description:
- Internet-facing or internal load balancer. An ELB scheme cannot be modified after creation.
default: internet-facing
choices: [ 'internet-facing', 'internal' ]
type: str
state:
description:
- Create or destroy the load balancer.
- The current default is C(absent). However, this behavior is inconsistent with other modules
and as such the default will change to C(present) in 2.14.
To maintain the existing behavior explicitly set I(state=absent).
choices: [ 'present', 'absent' ]
type: str
tags:
description:
- A dictionary of one or more tags to assign to the load balancer.
type: dict
wait:
description:
- Whether or not to wait for the network load balancer to reach the desired state.
type: bool
wait_timeout:
description:
- The duration in seconds to wait, used in conjunction with I(wait).
type: int
extends_documentation_fragment:
- aws
- ec2
notes:
- Listeners are matched based on port. If a listener's port is changed then a new listener will be created.
- Listener rules are matched based on priority. If a rule's priority is changed then a new rule will be created.
'''
EXAMPLES = '''
# Note: These examples do not set authentication details, see the AWS Guide for details.
# Create an ELB and attach a listener
- elb_network_lb:
name: myelb
subnets:
- subnet-012345678
- subnet-abcdef000
listeners:
- Protocol: TCP # Required. The protocol for connections from clients to the load balancer (TCP or TLS) (case-sensitive).
Port: 80 # Required. The port on which the load balancer is listening.
DefaultActions:
- Type: forward # Required. Only 'forward' is accepted at this time
TargetGroupName: mytargetgroup # Required. The name of the target group
state: present
# Create an ELB with an attached Elastic IP address
- elb_network_lb:
name: myelb
subnet_mappings:
- SubnetId: subnet-012345678
AllocationId: eipalloc-aabbccdd
listeners:
- Protocol: TCP # Required. The protocol for connections from clients to the load balancer (TCP or TLS) (case-sensitive).
Port: 80 # Required. The port on which the load balancer is listening.
DefaultActions:
- Type: forward # Required. Only 'forward' is accepted at this time
TargetGroupName: mytargetgroup # Required. The name of the target group
state: present
# Remove an ELB
- elb_network_lb:
name: myelb
state: absent
'''
RETURN = '''
availability_zones:
description: The Availability Zones for the load balancer.
returned: when state is present
type: list
sample: "[{'subnet_id': 'subnet-aabbccddff', 'zone_name': 'ap-southeast-2a', 'load_balancer_addresses': []}]"
canonical_hosted_zone_id:
description: The ID of the Amazon Route 53 hosted zone associated with the load balancer.
returned: when state is present
type: str
sample: ABCDEF12345678
created_time:
description: The date and time the load balancer was created.
returned: when state is present
type: str
sample: "2015-02-12T02:14:02+00:00"
deletion_protection_enabled:
description: Indicates whether deletion protection is enabled.
returned: when state is present
type: str
sample: true
dns_name:
description: The public DNS name of the load balancer.
returned: when state is present
type: str
sample: internal-my-elb-123456789.ap-southeast-2.elb.amazonaws.com
idle_timeout_timeout_seconds:
description: The idle timeout value, in seconds.
returned: when state is present
type: str
sample: 60
ip_address_type:
description: The type of IP addresses used by the subnets for the load balancer.
returned: when state is present
type: str
sample: ipv4
listeners:
description: Information about the listeners.
returned: when state is present
type: complex
contains:
listener_arn:
description: The Amazon Resource Name (ARN) of the listener.
returned: when state is present
type: str
sample: ""
load_balancer_arn:
description: The Amazon Resource Name (ARN) of the load balancer.
returned: when state is present
type: str
sample: ""
port:
description: The port on which the load balancer is listening.
returned: when state is present
type: int
sample: 80
protocol:
description: The protocol for connections from clients to the load balancer.
returned: when state is present
type: str
sample: HTTPS
certificates:
description: The SSL server certificate.
returned: when state is present
type: complex
contains:
certificate_arn:
description: The Amazon Resource Name (ARN) of the certificate.
returned: when state is present
type: str
sample: ""
ssl_policy:
description: The security policy that defines which ciphers and protocols are supported.
returned: when state is present
type: str
sample: ""
default_actions:
description: The default actions for the listener.
returned: when state is present
type: str
contains:
type:
description: The type of action.
returned: when state is present
type: str
sample: ""
target_group_arn:
description: The Amazon Resource Name (ARN) of the target group.
returned: when state is present
type: str
sample: ""
load_balancer_arn:
description: The Amazon Resource Name (ARN) of the load balancer.
returned: when state is present
type: str
sample: arn:aws:elasticloadbalancing:ap-southeast-2:0123456789:loadbalancer/app/my-elb/001122334455
load_balancer_name:
description: The name of the load balancer.
returned: when state is present
type: str
sample: my-elb
load_balancing_cross_zone_enabled:
description: Indicates whether cross-zone load balancing is enabled.
returned: when state is present
type: str
sample: true
scheme:
description: Internet-facing or internal load balancer.
returned: when state is present
type: str
sample: internal
state:
description: The state of the load balancer.
returned: when state is present
type: dict
sample: "{'code': 'active'}"
tags:
description: The tags attached to the load balancer.
returned: when state is present
type: dict
sample: "{
'Tag': 'Example'
}"
type:
description: The type of load balancer.
returned: when state is present
type: str
sample: network
vpc_id:
description: The ID of the VPC for the load balancer.
returned: when state is present
type: str
sample: vpc-0011223344
'''
from ansible.module_utils.aws.core import AnsibleAWSModule
from ansible.module_utils.ec2 import camel_dict_to_snake_dict, boto3_tag_list_to_ansible_dict, compare_aws_tags
from ansible.module_utils.aws.elbv2 import NetworkLoadBalancer, ELBListeners, ELBListener
def create_or_update_elb(elb_obj):
"""Create ELB or modify main attributes. json_exit here"""
if elb_obj.elb:
# ELB exists so check subnets, security groups and tags match what has been passed
# Subnets
if not elb_obj.compare_subnets():
elb_obj.modify_subnets()
# Tags - only need to play with tags if tags parameter has been set to something
if elb_obj.tags is not None:
# Delete necessary tags
tags_need_modify, tags_to_delete = compare_aws_tags(boto3_tag_list_to_ansible_dict(elb_obj.elb['tags']),
boto3_tag_list_to_ansible_dict(elb_obj.tags), elb_obj.purge_tags)
if tags_to_delete:
elb_obj.delete_tags(tags_to_delete)
# Add/update tags
if tags_need_modify:
elb_obj.modify_tags()
else:
# Create load balancer
elb_obj.create_elb()
# ELB attributes
elb_obj.update_elb_attributes()
elb_obj.modify_elb_attributes()
# Listeners
listeners_obj = ELBListeners(elb_obj.connection, elb_obj.module, elb_obj.elb['LoadBalancerArn'])
listeners_to_add, listeners_to_modify, listeners_to_delete = listeners_obj.compare_listeners()
# Delete listeners
for listener_to_delete in listeners_to_delete:
listener_obj = ELBListener(elb_obj.connection, elb_obj.module, listener_to_delete, elb_obj.elb['LoadBalancerArn'])
listener_obj.delete()
listeners_obj.changed = True
# Add listeners
for listener_to_add in listeners_to_add:
listener_obj = ELBListener(elb_obj.connection, elb_obj.module, listener_to_add, elb_obj.elb['LoadBalancerArn'])
listener_obj.add()
listeners_obj.changed = True
# Modify listeners
for listener_to_modify in listeners_to_modify:
listener_obj = ELBListener(elb_obj.connection, elb_obj.module, listener_to_modify, elb_obj.elb['LoadBalancerArn'])
listener_obj.modify()
listeners_obj.changed = True
# If listeners changed, mark ELB as changed
if listeners_obj.changed:
elb_obj.changed = True
# Get the ELB again
elb_obj.update()
# Get the ELB listeners again
listeners_obj.update()
# Update the ELB attributes
elb_obj.update_elb_attributes()
# Convert to snake_case and merge in everything we want to return to the user
snaked_elb = camel_dict_to_snake_dict(elb_obj.elb)
snaked_elb.update(camel_dict_to_snake_dict(elb_obj.elb_attributes))
snaked_elb['listeners'] = []
for listener in listeners_obj.current_listeners:
snaked_elb['listeners'].append(camel_dict_to_snake_dict(listener))
# Change tags to ansible friendly dict
snaked_elb['tags'] = boto3_tag_list_to_ansible_dict(snaked_elb['tags'])
elb_obj.module.exit_json(changed=elb_obj.changed, **snaked_elb)
def delete_elb(elb_obj):
if elb_obj.elb:
elb_obj.delete()
elb_obj.module.exit_json(changed=elb_obj.changed)
def main():
argument_spec = (
dict(
cross_zone_load_balancing=dict(type='bool'),
deletion_protection=dict(type='bool'),
listeners=dict(type='list',
elements='dict',
options=dict(
Protocol=dict(type='str', required=True),
Port=dict(type='int', required=True),
SslPolicy=dict(type='str'),
Certificates=dict(type='list'),
DefaultActions=dict(type='list', required=True)
)
),
name=dict(required=True, type='str'),
purge_listeners=dict(default=True, type='bool'),
purge_tags=dict(default=True, type='bool'),
subnets=dict(type='list'),
subnet_mappings=dict(type='list'),
scheme=dict(default='internet-facing', choices=['internet-facing', 'internal']),
state=dict(choices=['present', 'absent'], type='str'),
tags=dict(type='dict'),
wait_timeout=dict(type='int'),
wait=dict(type='bool')
)
)
module = AnsibleAWSModule(argument_spec=argument_spec,
mutually_exclusive=[['subnets', 'subnet_mappings']])
# Check for subnets or subnet_mappings if state is present
state = module.params.get("state")
if state == 'present':
if module.params.get("subnets") is None and module.params.get("subnet_mappings") is None:
module.fail_json(msg="'subnets' or 'subnet_mappings' is required when state=present")
if state is None:
# See below, unless state==present we delete. Ouch.
module.deprecate('State currently defaults to absent. This is inconsistent with other modules'
' and the default will be changed to `present` in Ansible 2.14',
version='2.14')
# Quick check of listeners parameters
listeners = module.params.get("listeners")
if listeners is not None:
for listener in listeners:
for key in listener.keys():
if key == 'Protocol' and listener[key] not in ['TCP', 'TLS']:
module.fail_json(msg="'Protocol' must be either 'TCP' or 'TLS'")
connection = module.client('elbv2')
connection_ec2 = module.client('ec2')
elb = NetworkLoadBalancer(connection, connection_ec2, module)
if state == 'present':
create_or_update_elb(elb)
else:
delete_elb(elb)
if __name__ == '__main__':
main()
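The validation in main() above is the restriction the issue targets: listener protocols other than TCP and TLS are rejected. A minimal sketch of how the check could be widened to the NLB protocols AWS documents (UDP and TCP_UDP); illustrative only, not necessarily the change merged in the linked PR:

```python
# Sketch: accept all listener protocols valid for an NLB per the AWS docs.
VALID_NLB_PROTOCOLS = ('TCP', 'TLS', 'UDP', 'TCP_UDP')

def validate_listener_protocols(module, listeners):
    """Fail the module if any listener uses a protocol an NLB does not support."""
    for listener in listeners or []:
        protocol = listener.get('Protocol')
        if protocol not in VALID_NLB_PROTOCOLS:
            module.fail_json(msg="'Protocol' must be one of: %s"
                                 % ', '.join(VALID_NLB_PROTOCOLS))
```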
---
status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 65265
title: elb_target_group and elb_network_lb should support UDP
body: (same as the first record)
issue_url: https://github.com/ansible/ansible/issues/65265
pull_url: https://github.com/ansible/ansible/pull/65828
before_fix_sha: 32a8b620f399fdc9698bd31ea1e619b2eb72b666
after_fix_sha: 7bb925489e7338975973cdcd9aacfe2870053b09
report_datetime: 2019-11-25T21:21:48Z
language: python
commit_datetime: 2019-12-19T22:06:16Z
updated_file: lib/ansible/modules/cloud/amazon/elb_target_group.py
file_content:
#!/usr/bin/python
# Copyright: Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: elb_target_group
short_description: Manage a target group for an Application or Network load balancer
description:
- Manage an AWS Elastic Load Balancer target group. See
U(https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-target-groups.html) or
U(https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html) for details.
version_added: "2.4"
requirements: [ boto3 ]
author: "Rob White (@wimnat)"
options:
deregistration_delay_timeout:
description:
- The amount of time for Elastic Load Balancing to wait before changing the state of a deregistering target from draining to unused.
The range is 0-3600 seconds.
type: int
health_check_protocol:
description:
- The protocol the load balancer uses when performing health checks on targets.
required: false
choices: [ 'http', 'https', 'tcp', 'HTTP', 'HTTPS', 'TCP' ]
type: str
health_check_port:
description:
- The port the load balancer uses when performing health checks on targets.
Can be set to 'traffic-port' to match target port.
- When not defined will default to the port on which each target receives traffic from the load balancer.
required: false
type: str
health_check_path:
description:
- The ping path that is the destination on the targets for health checks. The path must be defined in order to set a health check.
required: false
type: str
health_check_interval:
description:
- The approximate amount of time, in seconds, between health checks of an individual target.
required: false
type: int
health_check_timeout:
description:
- The amount of time, in seconds, during which no response from a target means a failed health check.
required: false
type: int
healthy_threshold_count:
description:
- The number of consecutive health check successes required before considering an unhealthy target healthy.
required: false
type: int
modify_targets:
description:
- Whether or not to alter existing targets in the group to match what is passed with the module
required: false
default: yes
type: bool
name:
description:
- The name of the target group.
required: true
type: str
port:
description:
- The port on which the targets receive traffic. This port is used unless you specify a port override when registering the target. Required if
I(state) is C(present).
required: false
type: int
protocol:
description:
- The protocol to use for routing traffic to the targets. Required when I(state) is C(present).
required: false
choices: [ 'http', 'https', 'tcp', 'HTTP', 'HTTPS', 'TCP']
type: str
purge_tags:
description:
- If yes, existing tags will be purged from the resource to match exactly what is defined by I(tags) parameter. If the tag parameter is not set then
tags will not be modified.
required: false
default: yes
type: bool
state:
description:
- Create or destroy the target group.
required: true
choices: [ 'present', 'absent' ]
type: str
stickiness_enabled:
description:
- Indicates whether sticky sessions are enabled.
type: bool
stickiness_lb_cookie_duration:
description:
- The time period, in seconds, during which requests from a client should be routed to the same target. After this time period expires, the load
balancer-generated cookie is considered stale. The range is 1 second to 1 week (604800 seconds).
type: int
stickiness_type:
description:
- The type of sticky sessions. The possible value is lb_cookie.
default: lb_cookie
type: str
successful_response_codes:
description:
- The HTTP codes to use when checking for a successful response from a target.
- Accepts multiple values (for example, "200,202") or a range of values (for example, "200-299").
- Requires the I(health_check_protocol) parameter to be set.
required: false
type: str
tags:
description:
- A dictionary of one or more tags to assign to the target group.
required: false
type: dict
target_type:
description:
- The type of target that you must specify when registering targets with this target group. The possible values are
C(instance) (targets are specified by instance ID), C(ip) (targets are specified by IP address) or C(lambda) (target is specified by ARN).
Note that you can't specify targets for a target group using more than one type. Target type lambda only accepts one target; when more than
one target is specified, only the first one is used and all additional targets are ignored.
If the target type is ip, specify IP addresses from the subnets of the virtual private cloud (VPC) for the target
group, the RFC 1918 range (10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16), and the RFC 6598 range (100.64.0.0/10).
You can't specify publicly routable IP addresses.
required: false
default: instance
choices: ['instance', 'ip', 'lambda']
version_added: 2.5
type: str
targets:
description:
- A list of targets to assign to the target group. This parameter defaults to an empty list. Unless I(modify_targets) is set to C(no),
any existing targets not in this list will be removed from the group. Each entry in the list should contain an C(Id) and a C(Port). See the examples for detail.
required: false
type: list
unhealthy_threshold_count:
description:
- The number of consecutive health check failures required before considering a target unhealthy.
required: false
type: int
vpc_id:
description:
- The identifier of the virtual private cloud (VPC). Required when I(state) is C(present).
required: false
type: str
wait:
description:
- Whether or not to wait for the target group.
type: bool
default: false
version_added: "2.4"
wait_timeout:
description:
- The time to wait for the target group.
default: 200
version_added: "2.4"
type: int
extends_documentation_fragment:
- aws
- ec2
notes:
- Once a target group has been created, only its health check can then be modified using subsequent calls
'''
EXAMPLES = '''
# Note: These examples do not set authentication details, see the AWS Guide for details.
# Create a target group with a default health check
- elb_target_group:
name: mytargetgroup
protocol: http
port: 80
vpc_id: vpc-01234567
state: present
# Modify the target group with a custom health check
- elb_target_group:
name: mytargetgroup
protocol: http
port: 80
vpc_id: vpc-01234567
health_check_path: /
successful_response_codes: "200,250-260"
state: present
# Delete a target group
- elb_target_group:
name: mytargetgroup
state: absent
# Create a target group with instance targets
- elb_target_group:
name: mytargetgroup
protocol: http
port: 81
vpc_id: vpc-01234567
health_check_path: /
successful_response_codes: "200,250-260"
targets:
- Id: i-01234567
Port: 80
- Id: i-98765432
Port: 80
state: present
wait_timeout: 200
wait: True
# Create a target group with IP address targets
- elb_target_group:
name: mytargetgroup
protocol: http
port: 81
vpc_id: vpc-01234567
health_check_path: /
successful_response_codes: "200,250-260"
target_type: ip
targets:
- Id: 10.0.0.10
Port: 80
AvailabilityZone: all
- Id: 10.0.0.20
Port: 80
state: present
wait_timeout: 200
wait: True
# Using lambda targets requires that the target group itself
# is allowed to invoke the lambda function. Therefore you first
# need to create an empty target group to obtain its ARN, then
# allow the target group to invoke the lambda function, and
# finally add the target to the target group
- name: first, create empty target group
elb_target_group:
name: my-lambda-targetgroup
target_type: lambda
state: present
modify_targets: False
register: out
- name: second, allow invoke of the lambda
lambda_policy:
state: "{{ state | default('present') }}"
function_name: my-lambda-function
statement_id: someID
action: lambda:InvokeFunction
principal: elasticloadbalancing.amazonaws.com
source_arn: "{{ out.target_group_arn }}"
- name: third, add target
elb_target_group:
name: my-lambda-targetgroup
target_type: lambda
state: present
targets:
- Id: arn:aws:lambda:eu-central-1:123456789012:function:my-lambda-function
'''
RETURN = '''
deregistration_delay_timeout_seconds:
description: The amount of time for Elastic Load Balancing to wait before changing the state of a deregistering target from draining to unused.
returned: when state present
type: int
sample: 300
health_check_interval_seconds:
description: The approximate amount of time, in seconds, between health checks of an individual target.
returned: when state present
type: int
sample: 30
health_check_path:
description: The destination for the health check request.
returned: when state present
type: str
sample: /index.html
health_check_port:
description: The port to use to connect with the target.
returned: when state present
type: str
sample: traffic-port
health_check_protocol:
description: The protocol to use to connect with the target.
returned: when state present
type: str
sample: HTTP
health_check_timeout_seconds:
description: The amount of time, in seconds, during which no response means a failed health check.
returned: when state present
type: int
sample: 5
healthy_threshold_count:
description: The number of consecutive health check successes required before considering an unhealthy target healthy.
returned: when state present
type: int
sample: 5
load_balancer_arns:
description: The Amazon Resource Names (ARN) of the load balancers that route traffic to this target group.
returned: when state present
type: list
sample: []
matcher:
description: The HTTP codes to use when checking for a successful response from a target.
returned: when state present
type: dict
sample: {
"http_code": "200"
}
port:
description: The port on which the targets are listening.
returned: when state present
type: int
sample: 80
protocol:
description: The protocol to use for routing traffic to the targets.
returned: when state present
type: str
sample: HTTP
stickiness_enabled:
description: Indicates whether sticky sessions are enabled.
returned: when state present
type: bool
sample: true
stickiness_lb_cookie_duration_seconds:
description: The time period, in seconds, during which requests from a client should be routed to the same target.
returned: when state present
type: int
sample: 86400
stickiness_type:
description: The type of sticky sessions.
returned: when state present
type: str
sample: lb_cookie
tags:
description: The tags attached to the target group.
returned: when state present
type: dict
sample: "{
'Tag': 'Example'
}"
target_group_arn:
description: The Amazon Resource Name (ARN) of the target group.
returned: when state present
type: str
sample: "arn:aws:elasticloadbalancing:ap-southeast-2:01234567890:targetgroup/mytargetgroup/aabbccddee0044332211"
target_group_name:
description: The name of the target group.
returned: when state present
type: str
sample: mytargetgroup
unhealthy_threshold_count:
description: The number of consecutive health check failures required before considering the target unhealthy.
returned: when state present
type: int
sample: 2
vpc_id:
description: The ID of the VPC for the targets.
returned: when state present
type: str
sample: vpc-0123456
'''
import time
try:
import botocore
except ImportError:
pass # handled by AnsibleAWSModule
from ansible.module_utils.aws.core import AnsibleAWSModule
from ansible.module_utils.ec2 import (camel_dict_to_snake_dict, boto3_tag_list_to_ansible_dict, ec2_argument_spec,
compare_aws_tags, ansible_dict_to_boto3_tag_list)
from distutils.version import LooseVersion
def get_tg_attributes(connection, module, tg_arn):
try:
tg_attributes = boto3_tag_list_to_ansible_dict(connection.describe_target_group_attributes(TargetGroupArn=tg_arn)['Attributes'])
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
module.fail_json_aws(e, msg="Couldn't get target group attributes")
# Replace '.' with '_' in attribute key names to make it more Ansibley
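# e.g. 'deregistration_delay.timeout_seconds' -> 'deregistration_delay_timeout_seconds'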
return dict((k.replace('.', '_'), v) for k, v in tg_attributes.items())
def get_target_group_tags(connection, module, target_group_arn):
try:
return connection.describe_tags(ResourceArns=[target_group_arn])['TagDescriptions'][0]['Tags']
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
module.fail_json_aws(e, msg="Couldn't get target group tags")
def get_target_group(connection, module):
try:
target_group_paginator = connection.get_paginator('describe_target_groups')
return (target_group_paginator.paginate(Names=[module.params.get("name")]).build_full_result())['TargetGroups'][0]
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
if e.response['Error']['Code'] == 'TargetGroupNotFound':
return None
else:
module.fail_json_aws(e, msg="Couldn't get target group")
def wait_for_status(connection, module, target_group_arn, targets, status):
polling_increment_secs = 5
max_retries = (module.params.get('wait_timeout') // polling_increment_secs)
status_achieved = False
for x in range(0, max_retries):
try:
response = connection.describe_target_health(TargetGroupArn=target_group_arn, Targets=targets)
if response['TargetHealthDescriptions'][0]['TargetHealth']['State'] == status:
status_achieved = True
break
else:
time.sleep(polling_increment_secs)
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
module.fail_json_aws(e, msg="Couldn't describe target health")
result = response
return status_achieved, result
def fail_if_ip_target_type_not_supported(module):
if LooseVersion(botocore.__version__) < LooseVersion('1.7.2'):
module.fail_json(msg="target_type ip requires botocore version 1.7.2 or later. Version %s is installed" %
botocore.__version__)
def create_or_update_target_group(connection, module):
changed = False
new_target_group = False
params = dict()
params['Name'] = module.params.get("name")
if module.params.get("target_type") != "lambda":
params['Protocol'] = module.params.get("protocol").upper()
params['Port'] = module.params.get("port")
params['VpcId'] = module.params.get("vpc_id")
tags = module.params.get("tags")
purge_tags = module.params.get("purge_tags")
deregistration_delay_timeout = module.params.get("deregistration_delay_timeout")
stickiness_enabled = module.params.get("stickiness_enabled")
stickiness_lb_cookie_duration = module.params.get("stickiness_lb_cookie_duration")
stickiness_type = module.params.get("stickiness_type")
health_option_keys = [
"health_check_path", "health_check_protocol", "health_check_interval", "health_check_timeout",
"healthy_threshold_count", "unhealthy_threshold_count", "successful_response_codes"
]
health_options = any([module.params[health_option_key] is not None for health_option_key in health_option_keys])
# Set health check if anything set
if health_options:
if module.params.get("health_check_protocol") is not None:
params['HealthCheckProtocol'] = module.params.get("health_check_protocol").upper()
if module.params.get("health_check_port") is not None:
params['HealthCheckPort'] = module.params.get("health_check_port")
if module.params.get("health_check_interval") is not None:
params['HealthCheckIntervalSeconds'] = module.params.get("health_check_interval")
if module.params.get("health_check_timeout") is not None:
params['HealthCheckTimeoutSeconds'] = module.params.get("health_check_timeout")
if module.params.get("healthy_threshold_count") is not None:
params['HealthyThresholdCount'] = module.params.get("healthy_threshold_count")
if module.params.get("unhealthy_threshold_count") is not None:
params['UnhealthyThresholdCount'] = module.params.get("unhealthy_threshold_count")
# Only need to check response code and path for http(s) health checks
if module.params.get("health_check_protocol") is not None and module.params.get("health_check_protocol").upper() != 'TCP':
if module.params.get("health_check_path") is not None:
params['HealthCheckPath'] = module.params.get("health_check_path")
if module.params.get("successful_response_codes") is not None:
params['Matcher'] = {}
params['Matcher']['HttpCode'] = module.params.get("successful_response_codes")
# Get target type
if module.params.get("target_type") is not None:
params['TargetType'] = module.params.get("target_type")
if params['TargetType'] == 'ip':
fail_if_ip_target_type_not_supported(module)
# Get target group
tg = get_target_group(connection, module)
if tg:
diffs = [param for param in ('Port', 'Protocol', 'VpcId')
if tg.get(param) != params.get(param)]
if diffs:
module.fail_json(msg="Cannot modify %s parameter(s) for a target group" %
", ".join(diffs))
# Target group exists so check health check parameters match what has been passed
health_check_params = dict()
# Modify health check if anything set
if health_options:
# Health check protocol
if 'HealthCheckProtocol' in params and tg['HealthCheckProtocol'] != params['HealthCheckProtocol']:
health_check_params['HealthCheckProtocol'] = params['HealthCheckProtocol']
# Health check port
if 'HealthCheckPort' in params and tg['HealthCheckPort'] != params['HealthCheckPort']:
health_check_params['HealthCheckPort'] = params['HealthCheckPort']
# Health check interval
if 'HealthCheckIntervalSeconds' in params and tg['HealthCheckIntervalSeconds'] != params['HealthCheckIntervalSeconds']:
health_check_params['HealthCheckIntervalSeconds'] = params['HealthCheckIntervalSeconds']
# Health check timeout
if 'HealthCheckTimeoutSeconds' in params and tg['HealthCheckTimeoutSeconds'] != params['HealthCheckTimeoutSeconds']:
health_check_params['HealthCheckTimeoutSeconds'] = params['HealthCheckTimeoutSeconds']
# Healthy threshold
if 'HealthyThresholdCount' in params and tg['HealthyThresholdCount'] != params['HealthyThresholdCount']:
health_check_params['HealthyThresholdCount'] = params['HealthyThresholdCount']
# Unhealthy threshold
if 'UnhealthyThresholdCount' in params and tg['UnhealthyThresholdCount'] != params['UnhealthyThresholdCount']:
health_check_params['UnhealthyThresholdCount'] = params['UnhealthyThresholdCount']
# Only need to check response code and path for http(s) health checks
if tg['HealthCheckProtocol'] != 'TCP':
# Health check path
if 'HealthCheckPath' in params and tg['HealthCheckPath'] != params['HealthCheckPath']:
health_check_params['HealthCheckPath'] = params['HealthCheckPath']
# Matcher (successful response codes)
# TODO: required and here?
if 'Matcher' in params:
current_matcher_list = tg['Matcher']['HttpCode'].split(',')
requested_matcher_list = params['Matcher']['HttpCode'].split(',')
if set(current_matcher_list) != set(requested_matcher_list):
health_check_params['Matcher'] = {}
health_check_params['Matcher']['HttpCode'] = ','.join(requested_matcher_list)
try:
if health_check_params:
connection.modify_target_group(TargetGroupArn=tg['TargetGroupArn'], **health_check_params)
changed = True
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
module.fail_json_aws(e, msg="Couldn't update target group")
# Do we need to modify targets?
if module.params.get("modify_targets"):
# get list of current target instances. I can't see anything like a describe targets in the doco so
# describe_target_health seems to be the only way to get them
try:
current_targets = connection.describe_target_health(
TargetGroupArn=tg['TargetGroupArn'])
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
module.fail_json_aws(e, msg="Couldn't get target group health")
if module.params.get("targets"):
if module.params.get("target_type") != "lambda":
params['Targets'] = module.params.get("targets")
# Correct type of target ports
for target in params['Targets']:
target['Port'] = int(target.get('Port', module.params.get('port')))
current_instance_ids = []
for instance in current_targets['TargetHealthDescriptions']:
current_instance_ids.append(instance['Target']['Id'])
new_instance_ids = []
for instance in params['Targets']:
new_instance_ids.append(instance['Id'])
add_instances = set(new_instance_ids) - set(current_instance_ids)
if add_instances:
instances_to_add = []
for target in params['Targets']:
if target['Id'] in add_instances:
instances_to_add.append({'Id': target['Id'], 'Port': target['Port']})
changed = True
try:
connection.register_targets(TargetGroupArn=tg['TargetGroupArn'], Targets=instances_to_add)
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
module.fail_json_aws(e, msg="Couldn't register targets")
if module.params.get("wait"):
status_achieved, registered_instances = wait_for_status(connection, module, tg['TargetGroupArn'], instances_to_add, 'healthy')
if not status_achieved:
module.fail_json(msg='Error waiting for target registration to be healthy - please check the AWS console')
remove_instances = set(current_instance_ids) - set(new_instance_ids)
if remove_instances:
instances_to_remove = []
for target in current_targets['TargetHealthDescriptions']:
if target['Target']['Id'] in remove_instances:
instances_to_remove.append({'Id': target['Target']['Id'], 'Port': target['Target']['Port']})
changed = True
try:
connection.deregister_targets(TargetGroupArn=tg['TargetGroupArn'], Targets=instances_to_remove)
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
module.fail_json_aws(e, msg="Couldn't remove targets")
if module.params.get("wait"):
status_achieved, registered_instances = wait_for_status(connection, module, tg['TargetGroupArn'], instances_to_remove, 'unused')
if not status_achieved:
module.fail_json(msg='Error waiting for target deregistration - please check the AWS console')
# register lambda target
else:
try:
changed = False
target = module.params.get("targets")[0]
if len(current_targets["TargetHealthDescriptions"]) == 0:
changed = True
else:
for item in current_targets["TargetHealthDescriptions"]:
if target["Id"] != item["Target"]["Id"]:
changed = True
break # only one target is possible with lambda
if changed:
if target.get("Id"):
response = connection.register_targets(
TargetGroupArn=tg['TargetGroupArn'],
Targets=[
{
"Id": target['Id']
}
]
)
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
module.fail_json_aws(
e, msg="Couldn't register targets")
else:
if module.params.get("target_type") != "lambda":
current_instances = current_targets['TargetHealthDescriptions']
if current_instances:
instances_to_remove = []
for target in current_targets['TargetHealthDescriptions']:
instances_to_remove.append({'Id': target['Target']['Id'], 'Port': target['Target']['Port']})
changed = True
try:
connection.deregister_targets(TargetGroupArn=tg['TargetGroupArn'], Targets=instances_to_remove)
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
module.fail_json_aws(e, msg="Couldn't remove targets")
if module.params.get("wait"):
status_achieved, registered_instances = wait_for_status(connection, module, tg['TargetGroupArn'], instances_to_remove, 'unused')
if not status_achieved:
module.fail_json(msg='Error waiting for target deregistration - please check the AWS console')
# remove lambda targets
else:
changed = False
if current_targets["TargetHealthDescriptions"]:
changed = True
# only one target is possible with lambda
target_to_remove = current_targets["TargetHealthDescriptions"][0]["Target"]["Id"]
if changed:
connection.deregister_targets(
TargetGroupArn=tg['TargetGroupArn'], Targets=[{"Id": target_to_remove}])
else:
try:
connection.create_target_group(**params)
changed = True
new_target_group = True
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
module.fail_json_aws(e, msg="Couldn't create target group")
tg = get_target_group(connection, module)
if module.params.get("targets"):
if module.params.get("target_type") != "lambda":
params['Targets'] = module.params.get("targets")
try:
connection.register_targets(TargetGroupArn=tg['TargetGroupArn'], Targets=params['Targets'])
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
module.fail_json_aws(e, msg="Couldn't register targets")
if module.params.get("wait"):
status_achieved, registered_instances = wait_for_status(connection, module, tg['TargetGroupArn'], params['Targets'], 'healthy')
if not status_achieved:
module.fail_json(msg='Error waiting for target registration to be healthy - please check the AWS console')
else:
try:
target = module.params.get("targets")[0]
response = connection.register_targets(
TargetGroupArn=tg['TargetGroupArn'],
Targets=[
{
"Id": target["Id"]
}
]
)
changed = True
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
module.fail_json_aws(
e, msg="Couldn't register targets")
# Now set target group attributes
update_attributes = []
# Get current attributes
current_tg_attributes = get_tg_attributes(connection, module, tg['TargetGroupArn'])
if deregistration_delay_timeout is not None:
if str(deregistration_delay_timeout) != current_tg_attributes['deregistration_delay_timeout_seconds']:
update_attributes.append({'Key': 'deregistration_delay.timeout_seconds', 'Value': str(deregistration_delay_timeout)})
if stickiness_enabled is not None:
if stickiness_enabled and current_tg_attributes['stickiness_enabled'] != "true":
update_attributes.append({'Key': 'stickiness.enabled', 'Value': 'true'})
if stickiness_lb_cookie_duration is not None:
if str(stickiness_lb_cookie_duration) != current_tg_attributes['stickiness_lb_cookie_duration_seconds']:
update_attributes.append({'Key': 'stickiness.lb_cookie.duration_seconds', 'Value': str(stickiness_lb_cookie_duration)})
if stickiness_type is not None and "stickiness_type" in current_tg_attributes:
if stickiness_type != current_tg_attributes['stickiness_type']:
update_attributes.append({'Key': 'stickiness.type', 'Value': stickiness_type})
if update_attributes:
try:
connection.modify_target_group_attributes(TargetGroupArn=tg['TargetGroupArn'], Attributes=update_attributes)
changed = True
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
# Something went wrong setting attributes. If this target group was created during this task, delete it to leave a consistent state
if new_target_group:
connection.delete_target_group(TargetGroupArn=tg['TargetGroupArn'])
module.fail_json_aws(e, msg="Couldn't delete target group")
# Tags - only need to play with tags if tags parameter has been set to something
if tags:
# Get tags
current_tags = get_target_group_tags(connection, module, tg['TargetGroupArn'])
# Delete necessary tags
tags_need_modify, tags_to_delete = compare_aws_tags(boto3_tag_list_to_ansible_dict(current_tags), tags, purge_tags)
if tags_to_delete:
try:
connection.remove_tags(ResourceArns=[tg['TargetGroupArn']], TagKeys=tags_to_delete)
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
module.fail_json_aws(e, msg="Couldn't delete tags from target group")
changed = True
# Add/update tags
if tags_need_modify:
try:
connection.add_tags(ResourceArns=[tg['TargetGroupArn']], Tags=ansible_dict_to_boto3_tag_list(tags_need_modify))
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
module.fail_json_aws(e, msg="Couldn't add tags to target group")
changed = True
# Get the target group again
tg = get_target_group(connection, module)
# Get the target group attributes again
tg.update(get_tg_attributes(connection, module, tg['TargetGroupArn']))
# Convert tg to snake_case
snaked_tg = camel_dict_to_snake_dict(tg)
snaked_tg['tags'] = boto3_tag_list_to_ansible_dict(get_target_group_tags(connection, module, tg['TargetGroupArn']))
module.exit_json(changed=changed, **snaked_tg)
def delete_target_group(connection, module):
changed = False
tg = get_target_group(connection, module)
if tg:
try:
connection.delete_target_group(TargetGroupArn=tg['TargetGroupArn'])
changed = True
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
module.fail_json_aws(e, msg="Couldn't delete target group")
module.exit_json(changed=changed)
def main():
argument_spec = ec2_argument_spec()
argument_spec.update(
dict(
deregistration_delay_timeout=dict(type='int'),
health_check_protocol=dict(choices=['http', 'https', 'tcp', 'HTTP', 'HTTPS', 'TCP']),
health_check_port=dict(),
health_check_path=dict(),
health_check_interval=dict(type='int'),
health_check_timeout=dict(type='int'),
healthy_threshold_count=dict(type='int'),
modify_targets=dict(default=True, type='bool'),
name=dict(required=True),
port=dict(type='int'),
protocol=dict(choices=['http', 'https', 'tcp', 'HTTP', 'HTTPS', 'TCP']),
purge_tags=dict(default=True, type='bool'),
stickiness_enabled=dict(type='bool'),
stickiness_type=dict(default='lb_cookie'),
stickiness_lb_cookie_duration=dict(type='int'),
state=dict(required=True, choices=['present', 'absent']),
successful_response_codes=dict(),
tags=dict(default={}, type='dict'),
target_type=dict(default='instance', choices=['instance', 'ip', 'lambda']),
targets=dict(type='list'),
unhealthy_threshold_count=dict(type='int'),
vpc_id=dict(),
wait_timeout=dict(type='int', default=200),
wait=dict(type='bool', default=False)
)
)
module = AnsibleAWSModule(argument_spec=argument_spec,
required_if=[
['target_type', 'instance', ['protocol', 'port', 'vpc_id']],
['target_type', 'ip', ['protocol', 'port', 'vpc_id']],
]
)
connection = module.client('elbv2')
if module.params.get('state') == 'present':
create_or_update_target_group(connection, module)
else:
delete_target_group(connection, module)
if __name__ == '__main__':
main()
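As in the NLB module, the protocol choices above stop at HTTP/HTTPS/TCP. A sketch of the widened choices the issue implies; note that AWS health checks cannot use UDP, so the health-check choices stay narrower (illustrative only, not necessarily the merged change):

```python
# Sketch: routing protocols widened to include UDP and TCP_UDP.
protocols = ['http', 'https', 'tcp', 'udp', 'tcp_udp']
protocols_all_cases = protocols + [p.upper() for p in protocols]

argument_spec_updates = dict(
    protocol=dict(choices=protocols_all_cases),
    # Health checks must remain TCP/HTTP/HTTPS; AWS offers no UDP health check.
    health_check_protocol=dict(choices=['http', 'https', 'tcp',
                                        'HTTP', 'HTTPS', 'TCP']),
)
```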
---
status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 65265
title: elb_target_group and elb_network_lb should support UDP
body: (same as the first record)
issue_url: https://github.com/ansible/ansible/issues/65265
pull_url: https://github.com/ansible/ansible/pull/65828
before_fix_sha: 32a8b620f399fdc9698bd31ea1e619b2eb72b666
after_fix_sha: 7bb925489e7338975973cdcd9aacfe2870053b09
report_datetime: 2019-11-25T21:21:48Z
language: python
commit_datetime: 2019-12-19T22:06:16Z
updated_file: test/integration/targets/elb_network_lb/defaults/main.yml
file_content:
---
# load balancer and target group names have to be less than 32 characters
# the 8 digit identifier at the end of resource_prefix helps determine during which test something
# was created and allows tests to be run in parallel
nlb_name: "my-nlb-{{ resource_prefix | regex_search('([0-9]+)$') }}"
tg_name: "my-tg-{{ resource_prefix | regex_search('([0-9]+)$') }}"
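# For example, a resource_prefix of "ansible-test-76393961" (hypothetical value)
# yields names like "my-nlb-76393961" and "my-tg-76393961".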
---
status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 65265
title: elb_target_group and elb_network_lb should support UDP
body: (same as the first record)
issue_url: https://github.com/ansible/ansible/issues/65265
pull_url: https://github.com/ansible/ansible/pull/65828
before_fix_sha: 32a8b620f399fdc9698bd31ea1e619b2eb72b666
after_fix_sha: 7bb925489e7338975973cdcd9aacfe2870053b09
report_datetime: 2019-11-25T21:21:48Z
language: python
commit_datetime: 2019-12-19T22:06:16Z
updated_file: test/integration/targets/elb_network_lb/tasks/main.yml
file_content:
- block:
- name: set connection information for all tasks
set_fact:
aws_connection_info: &aws_connection_info
aws_access_key: "{{ aws_access_key }}"
aws_secret_key: "{{ aws_secret_key }}"
security_token: "{{ security_token }}"
region: "{{ aws_region }}"
no_log: yes
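  # The &aws_connection_info anchor defined above is merged into each task
  # below via "<<: *aws_connection_info", so credentials are set only once.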
- name: create certificate
iam_cert:
name: test_cert
state: present
cert: "{{ lookup('file', 'cert.pem') }}"
key: "{{ lookup('file', 'key.pem') }}"
<<: *aws_connection_info
register: cert
- name: create VPC
ec2_vpc_net:
cidr_block: 10.228.228.0/22
name: "{{ resource_prefix }}_vpc"
state: present
<<: *aws_connection_info
register: vpc
- name: create internet gateway
ec2_vpc_igw:
vpc_id: "{{ vpc.vpc.id }}"
state: present
tags:
Name: "{{ resource_prefix }}"
<<: *aws_connection_info
register: igw
- name: create subnets
ec2_vpc_subnet:
cidr: "{{ item.cidr }}"
az: "{{ aws_region}}{{ item.az }}"
vpc_id: "{{ vpc.vpc.id }}"
state: present
tags:
Created_By: "{{ resource_prefix }}"
Public: "{{ item.public }}"
<<: *aws_connection_info
with_items:
- cidr: 10.228.228.0/24
az: "a"
public: True
- cidr: 10.228.229.0/24
az: "b"
public: True
- cidr: 10.228.230.0/24
az: "a"
public: False
- cidr: 10.228.231.0/24
az: "b"
public: False
register: subnets
- ec2_vpc_subnet_info:
filters:
vpc-id: "{{ vpc.vpc.id }}"
<<: *aws_connection_info
register: vpc_subnets
- name: create list of subnet ids
set_fact:
nlb_subnets: "{{ vpc_subnets|json_query('subnets[?tags.Public == `True`].id') }}"
private_subnets: "{{ vpc_subnets|json_query('subnets[?tags.Public != `True`].id') }}"
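  # json_query filters on the Public tag: subnets tagged Public=True feed the
  # NLB, the rest are collected as private_subnets.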
- name: create a route table
ec2_vpc_route_table:
vpc_id: "{{ vpc.vpc.id }}"
<<: *aws_connection_info
tags:
Name: igw-route
Created: "{{ resource_prefix }}"
subnets: "{{ nlb_subnets + private_subnets }}"
routes:
- dest: 0.0.0.0/0
gateway_id: "{{ igw.gateway_id }}"
register: route_table
- ec2_group:
name: "{{ resource_prefix }}"
description: "security group for Ansible NLB integration tests"
state: present
vpc_id: "{{ vpc.vpc.id }}"
rules:
- proto: tcp
from_port: 1
to_port: 65535
cidr_ip: 0.0.0.0/0
- proto: all
ports: 80
cidr_ip: 10.228.228.0/22
<<: *aws_connection_info
register: sec_group
- name: create a target group for testing
elb_target_group:
name: "{{ tg_name }}"
protocol: tcp
port: 80
vpc_id: "{{ vpc.vpc.id }}"
state: present
<<: *aws_connection_info
register: tg
- include_tasks: test_nlb_bad_listener_options.yml
- include_tasks: test_nlb_tags.yml
- include_tasks: test_creating_nlb.yml
- include_tasks: test_nlb_with_asg.yml
- include_tasks: test_modifying_nlb_listeners.yml
- include_tasks: test_deleting_nlb.yml
always:
- name: destroy NLB
elb_network_lb:
name: "{{ nlb_name }}"
state: absent
wait: yes
wait_timeout: 600
<<: *aws_connection_info
ignore_errors: yes
- name: destroy target group if it was created
elb_target_group:
name: "{{ tg_name }}"
protocol: tcp
port: 80
vpc_id: "{{ vpc.vpc.id }}"
state: absent
wait: yes
wait_timeout: 600
<<: *aws_connection_info
register: remove_tg
retries: 5
delay: 3
until: remove_tg is success
when: tg is defined
ignore_errors: yes
- name: destroy sec group
ec2_group:
name: "{{ sec_group.group_name }}"
description: "security group for Ansible NLB integration tests"
state: absent
vpc_id: "{{ vpc.vpc.id }}"
<<: *aws_connection_info
register: remove_sg
retries: 10
delay: 5
until: remove_sg is success
ignore_errors: yes
- name: remove route table
ec2_vpc_route_table:
vpc_id: "{{ vpc.vpc.id }}"
route_table_id: "{{ route_table.route_table.route_table_id }}"
lookup: id
state: absent
<<: *aws_connection_info
register: remove_rt
retries: 10
delay: 5
until: remove_rt is success
ignore_errors: yes
- name: destroy subnets
ec2_vpc_subnet:
cidr: "{{ item.cidr }}"
vpc_id: "{{ vpc.vpc.id }}"
state: absent
<<: *aws_connection_info
register: remove_subnet
retries: 10
delay: 5
until: remove_subnet is success
with_items:
- cidr: 10.228.228.0/24
- cidr: 10.228.229.0/24
- cidr: 10.228.230.0/24
- cidr: 10.228.231.0/24
ignore_errors: yes
- name: destroy internet gateway
ec2_vpc_igw:
vpc_id: "{{ vpc.vpc.id }}"
tags:
Name: "{{ resource_prefix }}"
state: absent
<<: *aws_connection_info
register: remove_igw
retries: 10
delay: 5
until: remove_igw is success
ignore_errors: yes
- name: destroy VPC
ec2_vpc_net:
cidr_block: 10.228.228.0/22
name: "{{ resource_prefix }}_vpc"
state: absent
<<: *aws_connection_info
register: remove_vpc
retries: 10
delay: 5
until: remove_vpc is success
ignore_errors: yes
- name: destroy certificate
iam_cert:
name: test_cert
state: absent
<<: *aws_connection_info
ignore_errors: yes
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65265 |
elb_target_group and elb_network_lb should support UDP
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
elb_network_lb and elb_target_group should support UDP; currently only HTTP, HTTPS and TCP are supported by Ansible.
Based on the current pages, UDP and TCP_UDP are supported as protocols:
- https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-listener.html
- https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-target-group.html
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
- elb_target_group
- elb_network_lb
##### ADDITIONAL INFORMATION
It would be useful to use UDP as a protocol for an NLB; additionally, attached target groups would need to support UDP.
TCP_UDP is another protocol that would be useful to support (a single listener port that accepts both TCP and UDP traffic).
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Create logstash target group
elb_target_group:
name: logstash_bunyan_production_tg
protocol: udp
port: 5600
vpc_id: "{{ vpc_facts.vpcs[0].id }}"
state: present
- name: Create nlb
elb_network_lb:
name: logstash_nlb
subnets: "{{ private_subnets + public_subnets }}"
health_check_path: "/"
health_check_port: "9600"
health_check_protocol: "http"
protocol: "udp"
port: "5600"
listeners:
- Protocol: UDP
Port: 5600
DefaultActions:
- Type: forward
TargetGroupName: logstash_bunyan_production_tg
state: present
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/65265
|
https://github.com/ansible/ansible/pull/65828
|
32a8b620f399fdc9698bd31ea1e619b2eb72b666
|
7bb925489e7338975973cdcd9aacfe2870053b09
| 2019-11-25T21:21:48Z |
python
| 2019-12-19T22:06:16Z |
test/integration/targets/elb_network_lb/tasks/test_creating_nlb.yml
|
- block:
- name: set connection information for all tasks
set_fact:
aws_connection_info: &aws_connection_info
aws_access_key: "{{ aws_access_key }}"
aws_secret_key: "{{ aws_secret_key }}"
security_token: "{{ security_token }}"
region: "{{ aws_region }}"
no_log: yes
- name: create NLB with listeners
elb_network_lb:
name: "{{ nlb_name }}"
subnets: "{{ nlb_subnets }}"
state: present
listeners:
- Protocol: TCP
Port: 80
DefaultActions:
- Type: forward
TargetGroupName: "{{ tg_name }}"
- Protocol: TLS
Port: 443
Certificates:
- CertificateArn: "{{ cert.arn }}"
DefaultActions:
- Type: forward
TargetGroupName: "{{ tg_name }}"
<<: *aws_connection_info
register: nlb
- assert:
that:
- nlb.changed
- nlb.listeners|length == 2
- name: test idempotence creating NLB with listeners
elb_network_lb:
name: "{{ nlb_name }}"
subnets: "{{ nlb_subnets }}"
state: present
listeners:
- Protocol: TCP
Port: 80
DefaultActions:
- Type: forward
TargetGroupName: "{{ tg_name }}"
- Protocol: TLS
Port: 443
Certificates:
- CertificateArn: "{{ cert.arn }}"
DefaultActions:
- Type: forward
TargetGroupName: "{{ tg_name }}"
<<: *aws_connection_info
register: nlb
- assert:
that:
- not nlb.changed
- nlb.listeners|length == 2
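# Sketch, not part of the original test: once UDP support lands, a UDP
# listener case could follow the same pattern. It assumes a hypothetical
# UDP-capable target group "{{ tg_name }}-udp", since an NLB listener's
# protocol must match that of its target group.
- name: create NLB with a UDP listener (sketch)
  elb_network_lb:
    name: "{{ nlb_name }}"
    subnets: "{{ nlb_subnets }}"
    state: present
    listeners:
      - Protocol: UDP
        Port: 53
        DefaultActions:
          - Type: forward
            TargetGroupName: "{{ tg_name }}-udp"
    <<: *aws_connection_info
  register: nlb
- assert:
    that:
      - nlb.changed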
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65265 |
elb_target_group and elb_network_lb should support UDP
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
elb_network_lb and elb_target_group should support UDP; currently only HTTP, HTTPS and TCP are supported by Ansible.
Based on the current pages, UDP and TCP_UDP are supported as protocols:
- https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-listener.html
- https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-target-group.html
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
- elb_target_group
- elb_network_lb
##### ADDITIONAL INFORMATION
It would be useful to use UDP as a protocol for an NLB; additionally, attached target groups would need to support UDP.
TCP_UDP is another protocol that would be useful to support (a single listener port that accepts both TCP and UDP traffic).
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Create logstash target group
elb_target_group:
name: logstash_bunyan_production_tg
protocol: udp
port: 5600
vpc_id: "{{ vpc_facts.vpcs[0].id }}"
state: present
- name: Create nlb
elb_network_lb:
name: logstash_nlb
subnets: "{{ private_subnets + public_subnets }}"
health_check_path: "/"
health_check_port: "9600"
health_check_protocol: "http"
protocol: "udp"
port: "5600"
listeners:
- Protocol: UDP
Port: 5600
DefaultActions:
- Type: forward
TargetGroupName: logstash_bunyan_production_tg
state: present
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/65265
|
https://github.com/ansible/ansible/pull/65828
|
32a8b620f399fdc9698bd31ea1e619b2eb72b666
|
7bb925489e7338975973cdcd9aacfe2870053b09
| 2019-11-25T21:21:48Z |
python
| 2019-12-19T22:06:16Z |
test/integration/targets/elb_target/playbooks/roles/elb_target/defaults/main.yml
|
---
ec2_ami_image:
us-east-1: ami-8c1be5f6
us-east-2: ami-c5062ba0
tg_name: "ansible-test-{{ resource_prefix | regex_search('([0-9]+)$') }}-tg"
lb_name: "ansible-test-{{ resource_prefix | regex_search('([0-9]+)$') }}-lb"
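# Illustration (hypothetical value): for resource_prefix "ansible-test-83021764",
# regex_search('([0-9]+)$') yields "83021764", so tg_name becomes
# "ansible-test-83021764-tg" and lb_name "ansible-test-83021764-lb".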
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65265 |
elb_target_group and elb_network_lb should support UDP
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
elb_network_lb and elb_target_group should support UDP; currently only HTTP, HTTPS and TCP are supported by Ansible.
Based on the current pages, UDP and TCP_UDP are supported as protocols:
- https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-listener.html
- https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-target-group.html
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
- elb_target_group
- elb_network_lb
##### ADDITIONAL INFORMATION
It would be useful to use UDP as a protocol for an NLB; additionally, attached target groups would need to support UDP.
TCP_UDP is another protocol that would be useful to support (a single listener port that accepts both TCP and UDP traffic).
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Create logstash target group
elb_target_group:
name: logstash_bunyan_production_tg
protocol: udp
port: 5600
vpc_id: "{{ vpc_facts.vpcs[0].id }}"
state: present
- name: Create nlb
elb_network_lb:
name: logstash_nlb
subnets: "{{ private_subnets + public_subnets }}"
health_check_path: "/"
health_check_port: "9600"
health_check_protocol: "http"
protocol: "udp"
port: "5600"
listeners:
- Protocol: UDP
Port: 5600
DefaultActions:
- Type: forward
TargetGroupName: logstash_bunyan_production_tg
state: present
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/65265
|
https://github.com/ansible/ansible/pull/65828
|
32a8b620f399fdc9698bd31ea1e619b2eb72b666
|
7bb925489e7338975973cdcd9aacfe2870053b09
| 2019-11-25T21:21:48Z |
python
| 2019-12-19T22:06:16Z |
test/integration/targets/elb_target/playbooks/roles/elb_target/tasks/main.yml
|
---
- name: set up elb_target test prerequisites
block:
- name:
debug: msg="********** Setting up elb_target test dependencies **********"
# ============================================================
- name: set up aws connection info
set_fact:
aws_connection_info: &aws_connection_info
aws_access_key: "{{ aws_access_key }}"
aws_secret_key: "{{ aws_secret_key }}"
security_token: "{{ security_token }}"
region: "{{ aws_region }}"
no_log: yes
# ============================================================
- name: set up testing VPC
ec2_vpc_net:
name: "{{ resource_prefix }}-vpc"
state: present
cidr_block: 20.0.0.0/16
<<: *aws_connection_info
tags:
Name: "{{ resource_prefix }}-vpc"
Description: "Created by ansible-test"
register: vpc
- name: set up testing internet gateway
ec2_vpc_igw:
vpc_id: "{{ vpc.vpc.id }}"
state: present
<<: *aws_connection_info
register: igw
- name: set up testing subnet
ec2_vpc_subnet:
state: present
vpc_id: "{{ vpc.vpc.id }}"
cidr: 20.0.0.0/18
az: "{{ aws_region }}a"
resource_tags:
Name: "{{ resource_prefix }}-subnet"
<<: *aws_connection_info
register: subnet_1
- name: set up testing subnet
ec2_vpc_subnet:
state: present
vpc_id: "{{ vpc.vpc.id }}"
cidr: 20.0.64.0/18
az: "{{ aws_region }}b"
resource_tags:
Name: "{{ resource_prefix }}-subnet"
<<: *aws_connection_info
register: subnet_2
- name: create routing rules
ec2_vpc_route_table:
vpc_id: "{{ vpc.vpc.id }}"
tags:
created: "{{ resource_prefix }}-route"
routes:
- dest: 0.0.0.0/0
gateway_id: "{{ igw.gateway_id }}"
subnets:
- "{{ subnet_1.subnet.id }}"
- "{{ subnet_2.subnet.id }}"
<<: *aws_connection_info
register: route_table
- name: create testing security group
ec2_group:
name: "{{ resource_prefix }}-sg"
description: a security group for ansible tests
vpc_id: "{{ vpc.vpc.id }}"
rules:
- proto: tcp
from_port: 80
to_port: 80
cidr_ip: 0.0.0.0/0
- proto: tcp
from_port: 22
to_port: 22
cidr_ip: 0.0.0.0/0
<<: *aws_connection_info
register: sg
- name: set up testing target group (type=instance)
elb_target_group:
name: "{{ tg_name }}"
health_check_port: 80
protocol: http
port: 80
vpc_id: '{{ vpc.vpc.id }}'
state: present
target_type: instance
tags:
Description: "Created by {{ resource_prefix }}"
<<: *aws_connection_info
- name: set up testing target group for ALB (type=instance)
elb_target_group:
name: "{{ tg_name }}-used"
health_check_port: 80
protocol: http
port: 80
vpc_id: '{{ vpc.vpc.id }}'
state: present
target_type: instance
tags:
Description: "Created by {{ resource_prefix }}"
<<: *aws_connection_info
- name: set up ec2 instance to use as a target
ec2:
group_id: "{{ sg.group_id }}"
instance_type: t2.micro
image: "{{ ec2_ami_image[aws_region] }}"
vpc_subnet_id: "{{ subnet_2.subnet.id }}"
instance_tags:
Name: "{{ resource_prefix }}-inst"
exact_count: 1
count_tag:
Name: "{{ resource_prefix }}-inst"
assign_public_ip: true
volumes: []
wait: true
ebs_optimized: false
user_data: |
#cloud-config
package_upgrade: true
package_update: true
packages:
- httpd
runcmd:
- "service httpd start"
- echo "HELLO ANSIBLE" > /var/www/html/index.html
<<: *aws_connection_info
register: ec2
- name: create an application load balancer
elb_application_lb:
name: "{{ lb_name }}"
security_groups:
- "{{ sg.group_id }}"
subnets:
- "{{ subnet_1.subnet.id }}"
- "{{ subnet_2.subnet.id }}"
listeners:
- Protocol: HTTP
Port: 80
DefaultActions:
- Type: forward
TargetGroupName: "{{ tg_name }}-used"
state: present
<<: *aws_connection_info
# ============================================================
- name:
debug: msg="********** Running elb_target integration tests **********"
# ============================================================
- name: register an instance to unused target group
elb_target:
target_group_name: "{{ tg_name }}"
target_id: "{{ ec2.instance_ids[0] }}"
state: present
<<: *aws_connection_info
register: result
- name: target is registered
assert:
that:
- result.changed
- result.target_group_arn
- "'{{ result.target_health_descriptions.target.id }}' == '{{ ec2.instance_ids[0] }}'"
# ============================================================
- name: test idempotence
elb_target:
target_group_name: "{{ tg_name }}"
target_id: "{{ ec2.instance_ids[0] }}"
state: present
<<: *aws_connection_info
register: result
- name: target was already registered
assert:
that:
- not result.changed
# ============================================================
- name: remove an unused target
elb_target:
target_group_name: "{{ tg_name }}"
target_id: "{{ ec2.instance_ids[0] }}"
state: absent
deregister_unused: true
<<: *aws_connection_info
register: result
- name: target was deregistered
assert:
that:
- result.changed
- not result.target_health_descriptions
# ============================================================
- name: register an instance to used target group and wait until healthy
elb_target:
target_group_name: "{{ tg_name }}-used"
target_id: "{{ ec2.instance_ids[0] }}"
state: present
target_status: healthy
target_status_timeout: 200
<<: *aws_connection_info
register: result
- name: target is registered
assert:
that:
- result.changed
- result.target_group_arn
- "'{{ result.target_health_descriptions.target.id }}' == '{{ ec2.instance_ids[0] }}'"
- "{{ result.target_health_descriptions.target_health }} == {'state': 'healthy'}"
# ============================================================
- name: remove a target from used target group
elb_target:
target_group_name: "{{ tg_name }}-used"
target_id: "{{ ec2.instance_ids[0] }}"
state: absent
target_status: unused
target_status_timeout: 400
<<: *aws_connection_info
register: result
- name: target was deregistered
assert:
that:
- result.changed
# ============================================================
- name: test idempotence
elb_target:
target_group_name: "{{ tg_name }}-used"
target_id: "{{ ec2.instance_ids[0] }}"
state: absent
<<: *aws_connection_info
register: result
- name: target was already deregistered
assert:
that:
- not result.changed
# ============================================================
- name: register an instance to used target group and wait until healthy again to test deregistering differently
elb_target:
target_group_name: "{{ tg_name }}-used"
target_id: "{{ ec2.instance_ids[0] }}"
state: present
target_status: healthy
target_status_timeout: 200
<<: *aws_connection_info
register: result
- name: target is registered
assert:
that:
- result.changed
- result.target_group_arn
- "'{{ result.target_health_descriptions.target.id }}' == '{{ ec2.instance_ids[0] }}'"
- "{{ result.target_health_descriptions.target_health }} == {'state': 'healthy'}"
- name: start deregistration but don't wait
elb_target:
target_group_name: "{{ tg_name }}-used"
target_id: "{{ ec2.instance_ids[0] }}"
state: absent
<<: *aws_connection_info
register: result
- name: target is starting to deregister
assert:
that:
- result.changed
- result.target_health_descriptions.target_health.reason == "Target.DeregistrationInProgress"
- name: now wait for target to finish deregistering
elb_target:
target_group_name: "{{ tg_name }}-used"
target_id: "{{ ec2.instance_ids[0] }}"
state: absent
target_status: unused
target_status_timeout: 400
<<: *aws_connection_info
register: result
- name: target was deregistered already and now has finished
assert:
that:
- not result.changed
- not result.target_health_descriptions
# ============================================================
always:
- name:
debug: msg="********** Tearing down elb_target test dependencies **********"
- name: remove ec2 instance
ec2:
group_id: "{{ sg.group_id }}"
instance_type: t2.micro
image: "{{ ec2_ami_image[aws_region] }}"
vpc_subnet_id: "{{ subnet_2.subnet.id }}"
instance_tags:
Name: "{{ resource_prefix }}-inst"
exact_count: 0
count_tag:
Name: "{{ resource_prefix }}-inst"
assign_public_ip: true
volumes: []
wait: true
ebs_optimized: false
<<: *aws_connection_info
ignore_errors: true
- name: remove testing target groups
elb_target_group:
name: "{{ item }}"
health_check_port: 80
protocol: http
port: 80
vpc_id: '{{ vpc.vpc.id }}'
state: absent
target_type: instance
tags:
Description: "Created by {{ resource_prefix }}"
wait: true
wait_timeout: 200
<<: *aws_connection_info
register: removed
retries: 10
until: removed is not failed
with_items:
- "{{ tg_name }}"
- "{{ tg_name }}-used"
ignore_errors: true
- name: remove application load balancer
elb_application_lb:
name: "{{ lb_name }}"
security_groups:
- "{{ sg.group_id }}"
subnets:
- "{{ subnet_1.subnet.id }}"
- "{{ subnet_2.subnet.id }}"
listeners:
- Protocol: HTTP
Port: 80
DefaultActions:
- Type: forward
TargetGroupName: "{{ tg_name }}-used"
state: absent
wait: true
wait_timeout: 200
<<: *aws_connection_info
register: removed
retries: 10
until: removed is not failed
ignore_errors: true
- name: remove testing security group
ec2_group:
state: absent
name: "{{ resource_prefix }}-sg"
description: a security group for ansible tests
vpc_id: "{{ vpc.vpc.id }}"
rules:
- proto: tcp
from_port: 80
to_port: 80
cidr_ip: 0.0.0.0/0
- proto: tcp
from_port: 22
to_port: 22
cidr_ip: 0.0.0.0/0
<<: *aws_connection_info
register: removed
retries: 10
until: removed is not failed
ignore_errors: true
- name: remove routing rules
ec2_vpc_route_table:
state: absent
lookup: id
route_table_id: "{{ route_table.route_table.id }}"
<<: *aws_connection_info
register: removed
retries: 10
until: removed is not failed
ignore_errors: true
- name: remove testing subnet
ec2_vpc_subnet:
state: absent
vpc_id: "{{ vpc.vpc.id }}"
cidr: 20.0.0.0/18
az: "{{ aws_region }}a"
resource_tags:
Name: "{{ resource_prefix }}-subnet"
<<: *aws_connection_info
register: removed
retries: 10
until: removed is not failed
ignore_errors: true
- name: remove testing subnet
ec2_vpc_subnet:
state: absent
vpc_id: "{{ vpc.vpc.id }}"
cidr: 20.0.64.0/18
az: "{{ aws_region }}b"
resource_tags:
Name: "{{ resource_prefix }}-subnet"
<<: *aws_connection_info
register: removed
retries: 10
until: removed is not failed
ignore_errors: true
- name: remove testing internet gateway
ec2_vpc_igw:
vpc_id: "{{ vpc.vpc.id }}"
state: absent
<<: *aws_connection_info
register: removed
retries: 10
until: removed is not failed
ignore_errors: true
- name: remove testing VPC
ec2_vpc_net:
name: "{{ resource_prefix }}-vpc"
state: absent
cidr_block: 20.0.0.0/16
tags:
Name: "{{ resource_prefix }}-vpc"
Description: "Created by ansible-test"
<<: *aws_connection_info
register: removed
retries: 10
until: removed is not failed
# ============================================================
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64141
Script inventory plugin doesn't apply inventory cache config
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
I set cache settings in inventory and inventory plugin script ini section according to the docs, but cache isn't created. I tried with jsonfile cache plugin.
Docs:
[https://docs.ansible.com/ansible/latest/plugins/cache.html#enabling-inventory-cache-plugins](https://docs.ansible.com/ansible/latest/plugins/cache.html#enabling-inventory-cache-plugins)
[https://docs.ansible.com/ansible/latest/plugins/inventory/script.html](https://docs.ansible.com/ansible/latest/plugins/inventory/script.html)
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
<!--- !component =lib/ansible/plugins/inventory/script.py-->
inventory plugin script
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.6
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/vagrant/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/vagrant/.local/lib/python3.5/site-packages/ansible
executable location = /home/vagrant/.local/bin/ansible
python version = 3.5.2 (default, Oct 8 2019, 13:06:37) [GCC 5.4.0 20160609]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
CACHE_PLUGIN(/etc/ansible/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/etc/ansible/ansible.cfg) = /tmp
INVENTORY_CACHE_ENABLED(/etc/ansible/ansible.cfg) = True
INVENTORY_CACHE_PLUGIN(/etc/ansible/ansible.cfg) = jsonfile
INVENTORY_CACHE_PLUGIN_CONNECTION(/etc/ansible/ansible.cfg) = /tmp
INVENTORY_CACHE_TIMEOUT(/etc/ansible/ansible.cfg) = 600
INVENTORY_PLUGIN_SCRIPT_CACHE(/etc/ansible/ansible.cfg) = True
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Ubuntu 16.04
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
1. Set configuration file for inventory cache with jsonfile plugin
2. Run a playbook with an inventory script plugin source
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
JSON inventory file in /tmp folder
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
JSON inventory file isn't created
|
https://github.com/ansible/ansible/issues/64141
|
https://github.com/ansible/ansible/pull/64151
|
1cfab26fabc4f89a8f8103a8fe35111ef6ec9ff1
|
d50fac9905b8143420ba8486d536553693dddfe6
| 2019-10-31T07:41:16Z |
python
| 2019-12-20T16:44:07Z |
changelogs/fragments/64151-remove-unsed-inventory-script-option.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64141
Script inventory plugin doesn't apply inventory cache config
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
I set cache settings in inventory and inventory plugin script ini section according to the docs, but cache isn't created. I tried with jsonfile cache plugin.
Docs:
[https://docs.ansible.com/ansible/latest/plugins/cache.html#enabling-inventory-cache-plugins](https://docs.ansible.com/ansible/latest/plugins/cache.html#enabling-inventory-cache-plugins)
[https://docs.ansible.com/ansible/latest/plugins/inventory/script.html](https://docs.ansible.com/ansible/latest/plugins/inventory/script.html)
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
<!--- !component =lib/ansible/plugins/inventory/script.py-->
inventory plugin script
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.6
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/vagrant/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/vagrant/.local/lib/python3.5/site-packages/ansible
executable location = /home/vagrant/.local/bin/ansible
python version = 3.5.2 (default, Oct 8 2019, 13:06:37) [GCC 5.4.0 20160609]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
CACHE_PLUGIN(/etc/ansible/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/etc/ansible/ansible.cfg) = /tmp
INVENTORY_CACHE_ENABLED(/etc/ansible/ansible.cfg) = True
INVENTORY_CACHE_PLUGIN(/etc/ansible/ansible.cfg) = jsonfile
INVENTORY_CACHE_PLUGIN_CONNECTION(/etc/ansible/ansible.cfg) = /tmp
INVENTORY_CACHE_TIMEOUT(/etc/ansible/ansible.cfg) = 600
INVENTORY_PLUGIN_SCRIPT_CACHE(/etc/ansible/ansible.cfg) = True
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Ubuntu 16.04
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
1. Set configuration file for inventory cache with jsonfile plugin
2. Run a playbook with an inventory script plugin source
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
JSON inventory file in /tmp folder
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
JSON inventory file isn't created
|
https://github.com/ansible/ansible/issues/64141
|
https://github.com/ansible/ansible/pull/64151
|
1cfab26fabc4f89a8f8103a8fe35111ef6ec9ff1
|
d50fac9905b8143420ba8486d536553693dddfe6
| 2019-10-31T07:41:16Z |
python
| 2019-12-20T16:44:07Z |
lib/ansible/plugins/inventory/script.py
|
# Copyright (c) 2012-2014, Michael DeHaan <[email protected]>
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
inventory: script
version_added: "2.4"
short_description: Executes an inventory script that returns JSON
options:
cache:
description: Toggle the usage of the configured Cache plugin.
default: False
type: boolean
ini:
- section: inventory_plugin_script
key: cache
env:
- name: ANSIBLE_INVENTORY_PLUGIN_SCRIPT_CACHE
always_show_stderr:
description: Toggle display of stderr even when script was successful
version_added: "2.5.1"
default: True
type: boolean
ini:
- section: inventory_plugin_script
key: always_show_stderr
env:
- name: ANSIBLE_INVENTORY_PLUGIN_SCRIPT_STDERR
description:
- The source provided must be an executable that returns Ansible inventory JSON
- The source must accept C(--list) and C(--host <hostname>) as arguments.
C(--host) will only be used if no C(_meta) key is present.
This is a performance optimization as the script would be called per host otherwise.
notes:
- Whitelisted in configuration by default.
'''
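# A minimal script this plugin can consume might look like the sketch below
# (illustrative only, not part of the original module; any executable that
# emits this JSON shape works):
#
#   #!/usr/bin/env python
#   import json
#   import sys
#
#   INVENTORY = {
#       "web": {"hosts": ["web1.example.com"], "vars": {"http_port": 80}},
#       "_meta": {"hostvars": {"web1.example.com": {"ansible_user": "deploy"}}},
#   }
#
#   if "--list" in sys.argv:
#       print(json.dumps(INVENTORY))
#   elif "--host" in sys.argv:
#       # never called here, because _meta.hostvars is present
#       print(json.dumps({}))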
import os
import subprocess
from ansible.errors import AnsibleError, AnsibleParserError
from ansible.module_utils.basic import json_dict_bytes_to_unicode
from ansible.module_utils.six import iteritems
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.common._collections_compat import Mapping
from ansible.plugins.inventory import BaseInventoryPlugin, Cacheable
class InventoryModule(BaseInventoryPlugin, Cacheable):
''' Host inventory parser for ansible using external inventory scripts. '''
NAME = 'script'
def __init__(self):
super(InventoryModule, self).__init__()
self._hosts = set()
def verify_file(self, path):
''' Verify if file is usable by this plugin, base does minimal accessibility check '''
valid = super(InventoryModule, self).verify_file(path)
if valid:
# not only accessible, file must be executable and/or have shebang
shebang_present = False
try:
with open(path, 'rb') as inv_file:
initial_chars = inv_file.read(2)
if initial_chars.startswith(b'#!'):
shebang_present = True
except Exception:
pass
if not os.access(path, os.X_OK) and not shebang_present:
valid = False
return valid
def parse(self, inventory, loader, path, cache=None):
super(InventoryModule, self).parse(inventory, loader, path)
self.set_options()
if cache is None:
cache = self.get_option('cache')
# Support inventory scripts that are not prefixed with some
# path information but happen to be in the current working
# directory when '.' is not in PATH.
cmd = [path, "--list"]
try:
cache_key = self._get_cache_prefix(path)
if not cache or cache_key not in self._cache:
try:
sp = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
except OSError as e:
raise AnsibleParserError("problem running %s (%s)" % (' '.join(cmd), to_native(e)))
(stdout, stderr) = sp.communicate()
path = to_native(path)
err = to_native(stderr or "")
if err and not err.endswith('\n'):
err += '\n'
if sp.returncode != 0:
raise AnsibleError("Inventory script (%s) had an execution error: %s " % (path, err))
# make sure script output is unicode so that json loader will output unicode strings itself
try:
data = to_text(stdout, errors="strict")
except Exception as e:
raise AnsibleError("Inventory {0} contained characters that cannot be interpreted as UTF-8: {1}".format(path, to_native(e)))
try:
self._cache[cache_key] = self.loader.load(data)
except Exception as e:
raise AnsibleError("failed to parse executable inventory script results from {0}: {1}\n{2}".format(path, to_native(e), err))
# if no other errors happened and you want to force displaying stderr, do so now
if stderr and self.get_option('always_show_stderr'):
self.display.error(msg=to_text(err))
processed = self._cache[cache_key]
if not isinstance(processed, Mapping):
raise AnsibleError("failed to parse executable inventory script results from {0}: needs to be a json dict\n{1}".format(path, err))
group = None
data_from_meta = None
# A "_meta" subelement may contain a variable "hostvars" which contains a hash for each host
# if this "hostvars" exists at all then do not call --host for each # host.
# This is for efficiency and scripts should still return data
# if called with --host for backwards compat with 1.2 and earlier.
for (group, gdata) in processed.items():
if group == '_meta':
if 'hostvars' in gdata:
data_from_meta = gdata['hostvars']
else:
self._parse_group(group, gdata)
for host in self._hosts:
got = {}
if data_from_meta is None:
got = self.get_host_variables(path, host)
else:
try:
got = data_from_meta.get(host, {})
except AttributeError as e:
raise AnsibleError("Improperly formatted host information for %s: %s" % (host, to_native(e)), orig_exc=e)
self._populate_host_vars([host], got)
except Exception as e:
raise AnsibleParserError(to_native(e))
def _parse_group(self, group, data):
group = self.inventory.add_group(group)
if not isinstance(data, dict):
data = {'hosts': data}
# if data contains none of those subkeys, it is the simplified syntax: a single host (named after the group) with vars
elif not any(k in data for k in ('hosts', 'vars', 'children')):
data = {'hosts': [group], 'vars': data}
if 'hosts' in data:
if not isinstance(data['hosts'], list):
raise AnsibleError("You defined a group '%s' with bad data for the host list:\n %s" % (group, data))
for hostname in data['hosts']:
self._hosts.add(hostname)
self.inventory.add_host(hostname, group)
if 'vars' in data:
if not isinstance(data['vars'], dict):
raise AnsibleError("You defined a group '%s' with bad data for variables:\n %s" % (group, data))
for k, v in iteritems(data['vars']):
self.inventory.set_variable(group, k, v)
if group != '_meta' and isinstance(data, dict) and 'children' in data:
for child_name in data['children']:
child_name = self.inventory.add_group(child_name)
self.inventory.add_child(group, child_name)
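# For reference, _parse_group accepts these group payload shapes
# (illustrative):
#   {"hosts": ["h1"], "vars": {"a": 1}, "children": ["sub"]}  # full form
#   ["h1", "h2"]    # bare list, treated as {"hosts": ["h1", "h2"]}
#   {"a": 1}        # none of hosts/vars/children: a single host named
#                   # after the group, carrying these vars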
def get_host_variables(self, path, host):
""" Runs <script> --host <hostname>, to determine additional host variables """
cmd = [path, "--host", host]
try:
sp = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
except OSError as e:
raise AnsibleError("problem running %s (%s)" % (' '.join(cmd), e))
(out, err) = sp.communicate()
if out.strip() == '':
return {}
try:
return json_dict_bytes_to_unicode(self.loader.load(out, file_name=path))
except ValueError:
raise AnsibleError("could not parse post variable response: %s, %s" % (cmd, out))
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65711
user.py PR45507 broke update_password="on_create"
|
##### SUMMARY
The merge of #45507 broke `update_password="on_create"` on Alpine Linux with the shadow package installed.
This does not work anymore
```yml
- name: add users
user:
name: "{{ item.name }}"
comment: "{ item.comment }}"
shell: "/bin/zsh"
password: '*'
update_password: on_create
with_items:
....
```
After the PR, all users that have a password that is not '*' will get their password changed.
Reverting bf3e397ea7922e8bd8b8ffbc05d1aede0317d119 makes 'update_password: on_create' work as expected again.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
lib/ansible/modules/system/user.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.8.7
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Alpine Linux
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: add users
user:
name: "{{ item.name }}"
comment: "{ item.comment }}"
shell: "/bin/zsh"
password: '*'
update_password: on_create
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Users that already exist do not get their password changed.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
Password "hash" changed to '*'
```
|
https://github.com/ansible/ansible/issues/65711
|
https://github.com/ansible/ansible/pull/65977
|
d50fac9905b8143420ba8486d536553693dddfe6
|
18130e1419423b6ae892899d08509665b21611b2
| 2019-12-10T20:57:03Z |
python
| 2019-12-20T18:09:22Z |
changelogs/fragments/user-alpine-on-changed-fix.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65711
user.py PR45507 broke update_password="on_create"
|
##### SUMMARY
The merge of #45507 broke `update_password="on_create"` on Alpine Linux with the shadow package installed.
This does not work anymore
```yml
- name: add users
user:
name: "{{ item.name }}"
comment: "{ item.comment }}"
shell: "/bin/zsh"
password: '*'
update_password: on_create
with_items:
....
```
After the PR, all users that have a password that is not '*' will get their password changed.
Reverting bf3e397ea7922e8bd8b8ffbc05d1aede0317d119 makes 'update_password: on_create' work as expected again.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
lib/ansible/modules/system/user.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.8.7
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Alpine Linux
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: add users
user:
name: "{{ item.name }}"
comment: "{ item.comment }}"
shell: "/bin/zsh"
password: '*'
update_password: on_create
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Users that already exist do not get their password changed.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
Password "hash" changed to '*'
```
|
https://github.com/ansible/ansible/issues/65711
|
https://github.com/ansible/ansible/pull/65977
|
d50fac9905b8143420ba8486d536553693dddfe6
|
18130e1419423b6ae892899d08509665b21611b2
| 2019-12-10T20:57:03Z |
python
| 2019-12-20T18:09:22Z |
lib/ansible/modules/system/user.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Stephen Fromm <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['stableinterface'],
'supported_by': 'core'}
DOCUMENTATION = r'''
module: user
version_added: "0.2"
short_description: Manage user accounts
description:
- Manage user accounts and user attributes.
- For Windows targets, use the M(win_user) module instead.
options:
name:
description:
- Name of the user to create, remove or modify.
type: str
required: true
aliases: [ user ]
uid:
description:
- Optionally sets the I(UID) of the user.
type: int
comment:
description:
- Optionally sets the description (aka I(GECOS)) of user account.
type: str
hidden:
description:
- macOS only, optionally hide the user from the login window and system preferences.
- The default will be C(yes) if the I(system) option is used.
type: bool
version_added: "2.6"
non_unique:
description:
- Optionally, when used with the -u option, this option allows changing the user ID to a non-unique value.
type: bool
default: no
version_added: "1.1"
seuser:
description:
- Optionally sets the seuser type (user_u) on SELinux-enabled systems.
type: str
version_added: "2.1"
group:
description:
- Optionally sets the user's primary group (takes a group name).
type: str
groups:
description:
- List of groups user will be added to. When set to an empty string C(''),
the user is removed from all groups except the primary group.
- Before Ansible 2.3, the only input format allowed was a comma separated string.
- Mutually exclusive with C(local)
type: list
append:
description:
- If C(yes), add the user to the groups specified in C(groups).
- If C(no), user will only be added to the groups specified in C(groups),
removing them from all other groups.
- Mutually exclusive with C(local)
type: bool
default: no
shell:
description:
- Optionally set the user's shell.
- On macOS, before Ansible 2.5, the default shell for non-system users was C(/usr/bin/false).
Since Ansible 2.5, the default shell for non-system users on macOS is C(/bin/bash).
- On other operating systems, the default shell is determined by the underlying tool being
used. See Notes for details.
type: str
home:
description:
- Optionally set the user's home directory.
type: path
skeleton:
description:
- Optionally set a home skeleton directory.
- Requires C(create_home) option!
type: str
version_added: "2.0"
password:
description:
- Optionally set the user's password to this crypted value.
- On macOS systems, this value has to be cleartext. Beware of security issues.
- To create a disabled account on Linux systems, set this to C('!') or C('*').
- To create a disabled account on OpenBSD, set this to C('*************').
- See U(https://docs.ansible.com/ansible/faq.html#how-do-i-generate-encrypted-passwords-for-the-user-module)
for details on various ways to generate these password values.
type: str
state:
description:
- Whether the account should exist or not, taking action if the state is different from what is stated.
type: str
choices: [ absent, present ]
default: present
create_home:
description:
- Unless set to C(no), a home directory will be made for the user
when the account is created or if the home directory does not exist.
- Changed from C(createhome) to C(create_home) in Ansible 2.5.
type: bool
default: yes
aliases: [ createhome ]
move_home:
description:
- "If set to C(yes) when used with C(home: ), attempt to move the user's old home
directory to the specified directory if it isn't there already and the old home exists."
type: bool
default: no
system:
description:
- When creating an account C(state=present), setting this to C(yes) makes the user a system account.
- This setting cannot be changed on existing users.
type: bool
default: no
force:
description:
- This only affects C(state=absent), it forces removal of the user and associated directories on supported platforms.
- The behavior is the same as C(userdel --force), check the man page for C(userdel) on your system for details and support.
- When used with C(generate_ssh_key=yes) this forces an existing key to be overwritten.
type: bool
default: no
remove:
description:
- This only affects C(state=absent), it attempts to remove directories associated with the user.
- The behavior is the same as C(userdel --remove), check the man page for details and support.
type: bool
default: no
login_class:
description:
- Optionally sets the user's login class, a feature of most BSD OSs.
type: str
generate_ssh_key:
description:
- Whether to generate a SSH key for the user in question.
- This will B(not) overwrite an existing SSH key unless used with C(force=yes).
type: bool
default: no
version_added: "0.9"
ssh_key_bits:
description:
- Optionally specify number of bits in SSH key to create.
type: int
default: default set by ssh-keygen
version_added: "0.9"
ssh_key_type:
description:
- Optionally specify the type of SSH key to generate.
- Available SSH key types will depend on implementation
present on target host.
type: str
default: rsa
version_added: "0.9"
ssh_key_file:
description:
- Optionally specify the SSH key filename.
- If this is a relative filename then it will be relative to the user's home directory.
- This parameter defaults to I(.ssh/id_rsa).
type: path
version_added: "0.9"
ssh_key_comment:
description:
- Optionally define the comment for the SSH key.
type: str
default: ansible-generated on $HOSTNAME
version_added: "0.9"
ssh_key_passphrase:
description:
- Set a passphrase for the SSH key.
- If no passphrase is provided, the SSH key will default to having no passphrase.
type: str
version_added: "0.9"
update_password:
description:
- C(always) will update passwords if they differ.
- C(on_create) will only set the password for newly created users.
type: str
choices: [ always, on_create ]
default: always
version_added: "1.3"
expires:
description:
- An expiry time for the user in epoch, it will be ignored on platforms that do not support this.
- Currently supported on GNU/Linux, FreeBSD, and DragonFlyBSD.
- Since Ansible 2.6 you can remove the expiry time by specifying a negative value.
Currently supported on GNU/Linux and FreeBSD.
type: float
version_added: "1.9"
password_lock:
description:
- Lock the password (usermod -L, pw lock, usermod -C).
- However, the implementation differs across platforms; this option does not always mean the user cannot log in via other methods.
- This option does not disable the user, only lock the password. Do not change the password in the same task.
- Currently supported on Linux, FreeBSD, DragonFlyBSD, NetBSD, OpenBSD.
type: bool
version_added: "2.6"
local:
description:
- Forces the use of "local" command alternatives on platforms that implement it.
- This is useful in environments that use centralized authentication when you want to manipulate the local users
(i.e. it uses C(luseradd) instead of C(useradd)).
- This will check C(/etc/passwd) for an existing account before invoking commands. If the local account database
exists somewhere other than C(/etc/passwd), this setting will not work properly.
- This requires that the above commands as well as C(/etc/passwd) must exist on the target host, otherwise it will be a fatal error.
- Mutually exclusive with C(groups) and C(append)
type: bool
default: no
version_added: "2.4"
profile:
description:
- Sets the profile of the user.
- Does nothing when used with other platforms.
- Can set multiple profiles using comma separation.
- To delete all the profiles, use C(profile='').
- Currently supported on Illumos/Solaris.
type: str
version_added: "2.8"
authorization:
description:
- Sets the authorization of the user.
- Does nothing when used with other platforms.
- Can set multiple authorizations using comma separation.
- To delete all authorizations, use C(authorization='').
- Currently supported on Illumos/Solaris.
type: str
version_added: "2.8"
role:
description:
- Sets the role of the user.
- Does nothing when used with other platforms.
- Can set multiple roles using comma separation.
- To delete all roles, use C(role='').
- Currently supported on Illumos/Solaris.
type: str
version_added: "2.8"
notes:
- There are specific requirements per platform on user management utilities. However
they generally come pre-installed with the system and Ansible will require they
are present at runtime. If they are not, a descriptive error message will be shown.
- On SunOS platforms, the shadow file is backed up automatically since this module edits it directly.
On other platforms, the shadow file is backed up by the underlying tools used by this module.
- On macOS, this module uses C(dscl) to create, modify, and delete accounts. C(dseditgroup) is used to
modify group membership. Accounts are hidden from the login window by modifying
C(/Library/Preferences/com.apple.loginwindow.plist).
- On FreeBSD, this module uses C(pw useradd) and C(chpass) to create, C(pw usermod) and C(chpass) to modify,
C(pw userdel) remove, C(pw lock) to lock, and C(pw unlock) to unlock accounts.
- On all other platforms, this module uses C(useradd) to create, C(usermod) to modify, and
C(userdel) to remove accounts.
seealso:
- module: authorized_key
- module: group
- module: win_user
author:
- Stephen Fromm (@sfromm)
'''
EXAMPLES = r'''
- name: Add the user 'johnd' with a specific uid and a primary group of 'admin'
user:
name: johnd
comment: John Doe
uid: 1040
group: admin
- name: Add the user 'james' with a bash shell, appending the group 'admins' and 'developers' to the user's groups
user:
name: james
shell: /bin/bash
groups: admins,developers
append: yes
- name: Remove the user 'johnd'
user:
name: johnd
state: absent
remove: yes
- name: Create a 2048-bit SSH key for user jsmith in ~jsmith/.ssh/id_rsa
user:
name: jsmith
generate_ssh_key: yes
ssh_key_bits: 2048
ssh_key_file: .ssh/id_rsa
- name: Added a consultant whose account you want to expire
user:
name: james18
shell: /bin/zsh
groups: developers
expires: 1422403387
- name: Starting at Ansible 2.6, modify user, remove expiry time
user:
name: james18
expires: -1
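# Illustrative addition, not part of the original examples: set a hashed
# password only when the account is first created
- name: Create user 'deploy' with a SHA-512 hashed password set only on creation
  user:
    name: deploy
    password: "{{ 'secret' | password_hash('sha512') }}"
    update_password: on_create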
'''
RETURN = r'''
append:
description: Whether or not to append the user to groups
returned: When state is 'present' and the user exists
type: bool
sample: True
comment:
description: Comment section from passwd file, usually the user name
returned: When user exists
type: str
sample: Agent Smith
create_home:
description: Whether or not to create the home directory
returned: When user does not exist and not check mode
type: bool
sample: True
force:
description: Whether or not a user account was forcibly deleted
returned: When state is 'absent' and user exists
type: bool
sample: False
group:
description: Primary user group ID
returned: When user exists
type: int
sample: 1001
groups:
description: List of groups of which the user is a member
returned: When C(groups) is not empty and C(state) is 'present'
type: str
sample: 'chrony,apache'
home:
description: "Path to user's home directory"
returned: When C(state) is 'present'
type: str
sample: '/home/asmith'
move_home:
description: Whether or not to move an existing home directory
returned: When C(state) is 'present' and user exists
type: bool
sample: False
name:
description: User account name
returned: always
type: str
sample: asmith
password:
description: Masked value of the password
returned: When C(state) is 'present' and C(password) is not empty
type: str
sample: 'NOT_LOGGING_PASSWORD'
remove:
description: Whether or not to remove the user account
returned: When C(state) is 'absent' and user exists
type: bool
sample: True
shell:
description: User login shell
returned: When C(state) is 'present'
type: str
sample: '/bin/bash'
ssh_fingerprint:
description: Fingerprint of generated SSH key
returned: When C(generate_ssh_key) is C(True)
type: str
sample: '2048 SHA256:aYNHYcyVm87Igh0IMEDMbvW0QDlRQfE0aJugp684ko8 ansible-generated on host (RSA)'
ssh_key_file:
description: Path to generated SSH private key file
returned: When C(generate_ssh_key) is C(True)
type: str
sample: /home/asmith/.ssh/id_rsa
ssh_public_key:
description: Generated SSH public key file
returned: When C(generate_ssh_key) is C(True)
type: str
sample: >
'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC95opt4SPEC06tOYsJQJIuN23BbLMGmYo8ysVZQc4h2DZE9ugbjWWGS1/pweUGjVstgzMkBEeBCByaEf/RJKNecKRPeGd2Bw9DCj/bn5Z6rGfNENKBmo
618mUJBvdlEgea96QGjOwSB7/gmonduC7gsWDMNcOdSE3wJMTim4lddiBx4RgC9yXsJ6Tkz9BHD73MXPpT5ETnse+A3fw3IGVSjaueVnlUyUmOBf7fzmZbhlFVXf2Zi2rFTXqvbdGHKkzpw1U8eB8xFPP7y
d5u1u0e6Acju/8aZ/l17IDFiLke5IzlqIMRTEbDwLNeO84YQKWTm9fODHzhYe0yvxqLiK07 ansible-generated on host'
stderr:
description: Standard error from running commands
returned: When stderr is returned by a command that is run
type: str
sample: Group wheels does not exist
stdout:
description: Standard output from running commands
returned: When standard output is returned by the command that is run
type: str
sample:
system:
description: Whether or not the account is a system account
returned: When C(system) is passed to the module and the account does not exist
type: bool
sample: True
uid:
description: User ID of the user account
returned: When C(UID) is passed to the module
type: int
sample: 1044
'''
import errno
import grp
import calendar
import os
import re
import pty
import pwd
import select
import shutil
import socket
import subprocess
import time
from ansible.module_utils import distro
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.sys_info import get_platform_subclass
try:
import spwd
HAVE_SPWD = True
except ImportError:
HAVE_SPWD = False
_HASH_RE = re.compile(r'[^a-zA-Z0-9./=]')
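# _HASH_RE flags any character outside the crypt(3) hash alphabet (letters,
# digits, '.', '/' and '='); applied to the final field of a crypted value,
# it matches the '!' in 'ab!cd' but matches nothing in 'abcd'.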
class User(object):
"""
This is a generic User manipulation class that is subclassed
based on platform.
A subclass may wish to override the following action methods:-
- create_user()
- remove_user()
- modify_user()
- ssh_key_gen()
- ssh_key_fingerprint()
- user_exists()
All subclasses MUST define platform and distribution (which may be None).
"""
platform = 'Generic'
distribution = None
PASSWORDFILE = '/etc/passwd'
SHADOWFILE = '/etc/shadow'
SHADOWFILE_EXPIRE_INDEX = 7
LOGIN_DEFS = '/etc/login.defs'
DATE_FORMAT = '%Y-%m-%d'
def __new__(cls, *args, **kwargs):
new_cls = get_platform_subclass(User)
return super(cls, new_cls).__new__(new_cls)
def __init__(self, module):
self.module = module
self.state = module.params['state']
self.name = module.params['name']
self.uid = module.params['uid']
self.hidden = module.params['hidden']
self.non_unique = module.params['non_unique']
self.seuser = module.params['seuser']
self.group = module.params['group']
self.comment = module.params['comment']
self.shell = module.params['shell']
self.password = module.params['password']
self.force = module.params['force']
self.remove = module.params['remove']
self.create_home = module.params['create_home']
self.move_home = module.params['move_home']
self.skeleton = module.params['skeleton']
self.system = module.params['system']
self.login_class = module.params['login_class']
self.append = module.params['append']
self.sshkeygen = module.params['generate_ssh_key']
self.ssh_bits = module.params['ssh_key_bits']
self.ssh_type = module.params['ssh_key_type']
self.ssh_comment = module.params['ssh_key_comment']
self.ssh_passphrase = module.params['ssh_key_passphrase']
self.update_password = module.params['update_password']
self.home = module.params['home']
self.expires = None
self.password_lock = module.params['password_lock']
self.groups = None
self.local = module.params['local']
self.profile = module.params['profile']
self.authorization = module.params['authorization']
self.role = module.params['role']
if module.params['groups'] is not None:
self.groups = ','.join(module.params['groups'])
if module.params['expires'] is not None:
try:
self.expires = time.gmtime(module.params['expires'])
except Exception as e:
module.fail_json(msg="Invalid value for 'expires' %s: %s" % (self.expires, to_native(e)))
if module.params['ssh_key_file'] is not None:
self.ssh_file = module.params['ssh_key_file']
else:
self.ssh_file = os.path.join('.ssh', 'id_%s' % self.ssh_type)
def check_password_encrypted(self):
# Darwin needs cleartext password, so skip validation
if self.module.params['password'] and self.platform != 'Darwin':
maybe_invalid = False
# Allow setting certain passwords in order to disable the account
if self.module.params['password'] in set(['*', '!', '*************']):
maybe_invalid = False
else:
# : for delimiter, * for disable user, ! for lock user
# these characters are invalid in the password
if any(char in self.module.params['password'] for char in ':*!'):
maybe_invalid = True
if '$' not in self.module.params['password']:
maybe_invalid = True
else:
fields = self.module.params['password'].split("$")
if len(fields) >= 3:
# contains character outside the crypto constraint
if bool(_HASH_RE.search(fields[-1])):
maybe_invalid = True
# md5
if fields[1] == '1' and len(fields[-1]) != 22:
maybe_invalid = True
# sha256
if fields[1] == '5' and len(fields[-1]) != 43:
maybe_invalid = True
# sha512
if fields[1] == '6' and len(fields[-1]) != 86:
maybe_invalid = True
else:
maybe_invalid = True
if maybe_invalid:
self.module.warn("The input password appears not to have been hashed. "
"The 'password' argument must be encrypted for this module to work properly.")
def execute_command(self, cmd, use_unsafe_shell=False, data=None, obey_checkmode=True):
if self.module.check_mode and obey_checkmode:
self.module.debug('In check mode, would have run: "%s"' % cmd)
return (0, '', '')
else:
# cast all args to strings ansible-modules-core/issues/4397
cmd = [str(x) for x in cmd]
return self.module.run_command(cmd, use_unsafe_shell=use_unsafe_shell, data=data)
def backup_shadow(self):
if not self.module.check_mode and self.SHADOWFILE:
return self.module.backup_local(self.SHADOWFILE)
def remove_user_userdel(self):
if self.local:
command_name = 'luserdel'
else:
command_name = 'userdel'
cmd = [self.module.get_bin_path(command_name, True)]
if self.force:
cmd.append('-f')
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def create_user_useradd(self):
if self.local:
command_name = 'luseradd'
else:
command_name = 'useradd'
cmd = [self.module.get_bin_path(command_name, True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.seuser is not None:
cmd.append('-Z')
cmd.append(self.seuser)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
elif self.group_exists(self.name):
# use the -N option (no user group) if a group already
# exists with the same name as the user to prevent
# errors from useradd trying to create a group when
# USERGROUPS_ENAB is set in /etc/login.defs.
if os.path.exists('/etc/redhat-release'):
dist = distro.linux_distribution(full_distribution_name=False)
major_release = int(dist[1].split('.')[0])
if major_release <= 5:
cmd.append('-n')
else:
cmd.append('-N')
elif os.path.exists('/etc/SuSE-release'):
# -N did not exist in useradd before SLE 11 and did not
# automatically create a group
dist = distro.linux_distribution(full_distribution_name=False)
major_release = int(dist[1].split('.')[0])
if major_release >= 12:
cmd.append('-N')
else:
cmd.append('-N')
if self.groups is not None and not self.local and len(self.groups):
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
# If the specified path to the user home contains parent directories that
# do not exist, first create the home directory since useradd cannot
# create parent directories
parent = os.path.dirname(self.home)
if not os.path.isdir(parent):
self.create_homedir(self.home)
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.expires is not None:
cmd.append('-e')
if self.expires < time.gmtime(0):
cmd.append('')
else:
cmd.append(time.strftime(self.DATE_FORMAT, self.expires))
if self.password is not None:
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
if not self.local:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
else:
cmd.append('-M')
if self.system:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
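# Illustration (hypothetical values): name=deploy, uid=1040, group=admin
# with create_home left at its default builds roughly:
#   useradd -u 1040 -g admin -m deploy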
def _check_usermod_append(self):
# check if this version of usermod can append groups
if self.local:
command_name = 'lusermod'
else:
command_name = 'usermod'
usermod_path = self.module.get_bin_path(command_name, True)
# for some reason, usermod --help cannot be used by non root
# on RH/Fedora, due to lack of execute bit for others
if not os.access(usermod_path, os.X_OK):
return False
cmd = [usermod_path, '--help']
(rc, data1, data2) = self.execute_command(cmd, obey_checkmode=False)
helpout = data1 + data2
# check if --append exists
lines = to_native(helpout).split('\n')
for line in lines:
if line.strip().startswith('-a, --append'):
return True
return False
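    # For illustration, the parser above looks for a usermod/lusermod help line
    # of roughly this shape (a hypothetical excerpt, not captured output):
    #
    #   -a, --append                  append the user to the supplemental GROUPS
    #
    # Any line whose stripped form starts with '-a, --append' counts as proof
    # that group-append mode is supported.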
def modify_user_usermod(self):
if self.local:
command_name = 'lusermod'
else:
command_name = 'usermod'
cmd = [self.module.get_bin_path(command_name, True)]
info = self.user_info()
has_append = self._check_usermod_append()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
# get a list of all groups for the user, including the primary
current_groups = self.user_group_membership(exclude_primary=False)
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set(remove_existing=False)
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
if has_append:
cmd.append('-a')
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod and not self.local:
if self.append and not has_append:
cmd.append('-A')
cmd.append(','.join(group_diff))
else:
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
cmd.append('-d')
cmd.append(self.home)
if self.move_home:
cmd.append('-m')
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.expires is not None:
current_expires = int(self.user_password()[1])
if self.expires < time.gmtime(0):
if current_expires >= 0:
cmd.append('-e')
cmd.append('')
else:
# Convert days since Epoch to seconds since Epoch as struct_time
current_expire_date = time.gmtime(current_expires * 86400)
# Current expires is negative or we compare year, month, and day only
if current_expires < 0 or current_expire_date[:3] != self.expires[:3]:
cmd.append('-e')
cmd.append(time.strftime(self.DATE_FORMAT, self.expires))
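                # Worked example (hypothetical values): a shadow expiry field of
                # 18628 days -> 18628 * 86400 = 1609459200 seconds -> time.gmtime()
                # yields 2021-01-01, which is then compared against self.expires
                # by (year, month, day) only.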
# Lock if no password or unlocked, unlock only if locked
if self.password_lock and not info[1].startswith('!'):
cmd.append('-L')
elif self.password_lock is False and info[1].startswith('!'):
# usermod will refuse to unlock a user with no password, module shows 'changed' regardless
cmd.append('-U')
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd.append('-p')
cmd.append(self.password)
# skip if no changes to be made
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
def group_exists(self, group):
try:
# Try group as a gid first
grp.getgrgid(int(group))
return True
except (ValueError, KeyError):
try:
grp.getgrnam(group)
return True
except KeyError:
return False
def group_info(self, group):
if not self.group_exists(group):
return False
try:
# Try group as a gid first
return list(grp.getgrgid(int(group)))
except (ValueError, KeyError):
return list(grp.getgrnam(group))
def get_groups_set(self, remove_existing=True):
if self.groups is None:
return None
info = self.user_info()
groups = set(x.strip() for x in self.groups.split(',') if x)
for g in groups.copy():
if not self.group_exists(g):
self.module.fail_json(msg="Group %s does not exist" % (g))
if info and remove_existing and self.group_info(g)[2] == info[3]:
groups.remove(g)
return groups
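    # Note: get_groups_set() validates every requested group and, unless
    # remove_existing=False, drops the user's current primary group from the
    # result so it need not be passed again via -G.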
def user_group_membership(self, exclude_primary=True):
''' Return a list of groups the user belongs to '''
groups = []
info = self.get_pwd_info()
for group in grp.getgrall():
if self.name in group.gr_mem:
# Exclude the user's primary group by default
if not exclude_primary:
groups.append(group[0])
else:
if info[3] != group.gr_gid:
groups.append(group[0])
return groups
def user_exists(self):
# The pwd module does not distinguish between local and directory accounts.
        # Its output cannot be used to determine whether or not an account exists locally.
# It returns True if the account exists locally or in the directory, so instead
# look in the local PASSWORD file for an existing account.
if self.local:
if not os.path.exists(self.PASSWORDFILE):
self.module.fail_json(msg="'local: true' specified but unable to find local account file {0} to parse.".format(self.PASSWORDFILE))
exists = False
name_test = '{0}:'.format(self.name)
with open(self.PASSWORDFILE, 'rb') as f:
reversed_lines = f.readlines()[::-1]
for line in reversed_lines:
if line.startswith(to_bytes(name_test)):
exists = True
break
if not exists:
self.module.warn(
"'local: true' specified and user '{name}' was not found in {file}. "
"The local user account may already exist if the local account database exists "
"somewhere other than {file}.".format(file=self.PASSWORDFILE, name=self.name))
return exists
else:
try:
if pwd.getpwnam(self.name):
return True
except KeyError:
return False
def get_pwd_info(self):
if not self.user_exists():
return False
return list(pwd.getpwnam(self.name))
def user_info(self):
if not self.user_exists():
return False
info = self.get_pwd_info()
if len(info[1]) == 1 or len(info[1]) == 0:
info[1] = self.user_password()[0]
return info
def user_password(self):
passwd = ''
expires = ''
if HAVE_SPWD:
try:
passwd = spwd.getspnam(self.name)[1]
expires = spwd.getspnam(self.name)[7]
return passwd, expires
except KeyError:
return passwd, expires
except OSError as e:
# Python 3.6 raises PermissionError instead of KeyError
# Due to absence of PermissionError in python2.7 need to check
# errno
if e.errno in (errno.EACCES, errno.EPERM, errno.ENOENT):
return passwd, expires
raise
if not self.user_exists():
return passwd, expires
elif self.SHADOWFILE:
passwd, expires = self.parse_shadow_file()
return passwd, expires
def parse_shadow_file(self):
passwd = ''
expires = ''
if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK):
with open(self.SHADOWFILE, 'r') as f:
for line in f:
if line.startswith('%s:' % self.name):
passwd = line.split(':')[1]
expires = line.split(':')[self.SHADOWFILE_EXPIRE_INDEX] or -1
return passwd, expires
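    # Illustrative shadow entry (made-up hash) as parse_shadow_file() sees it:
    #   ansibulluser:$6$salt$hash...:18000:0:99999:7:::
    # Field 1 holds the password hash; which field holds the expiry is platform
    # dependent and selected via SHADOWFILE_EXPIRE_INDEX (e.g. FreeBSD uses
    # index 6 of master.passwd).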
def get_ssh_key_path(self):
info = self.user_info()
if os.path.isabs(self.ssh_file):
ssh_key_file = self.ssh_file
else:
if not os.path.exists(info[5]) and not self.module.check_mode:
raise Exception('User %s home directory does not exist' % self.name)
ssh_key_file = os.path.join(info[5], self.ssh_file)
return ssh_key_file
def ssh_key_gen(self):
info = self.user_info()
overwrite = None
try:
ssh_key_file = self.get_ssh_key_path()
except Exception as e:
return (1, '', to_native(e))
ssh_dir = os.path.dirname(ssh_key_file)
if not os.path.exists(ssh_dir):
if self.module.check_mode:
return (0, '', '')
try:
os.mkdir(ssh_dir, int('0700', 8))
os.chown(ssh_dir, info[2], info[3])
except OSError as e:
return (1, '', 'Failed to create %s: %s' % (ssh_dir, to_native(e)))
if os.path.exists(ssh_key_file):
if self.force:
# ssh-keygen doesn't support overwriting the key interactively, so send 'y' to confirm
overwrite = 'y'
else:
return (None, 'Key already exists, use "force: yes" to overwrite', '')
cmd = [self.module.get_bin_path('ssh-keygen', True)]
cmd.append('-t')
cmd.append(self.ssh_type)
if self.ssh_bits > 0:
cmd.append('-b')
cmd.append(self.ssh_bits)
cmd.append('-C')
cmd.append(self.ssh_comment)
cmd.append('-f')
cmd.append(ssh_key_file)
if self.ssh_passphrase is not None:
if self.module.check_mode:
self.module.debug('In check mode, would have run: "%s"' % cmd)
return (0, '', '')
master_in_fd, slave_in_fd = pty.openpty()
master_out_fd, slave_out_fd = pty.openpty()
master_err_fd, slave_err_fd = pty.openpty()
env = os.environ.copy()
env['LC_ALL'] = 'C'
try:
p = subprocess.Popen([to_bytes(c) for c in cmd],
stdin=slave_in_fd,
stdout=slave_out_fd,
stderr=slave_err_fd,
preexec_fn=os.setsid,
env=env)
out_buffer = b''
err_buffer = b''
while p.poll() is None:
r, w, e = select.select([master_out_fd, master_err_fd], [], [], 1)
first_prompt = b'Enter passphrase (empty for no passphrase):'
second_prompt = b'Enter same passphrase again'
prompt = first_prompt
for fd in r:
if fd == master_out_fd:
chunk = os.read(master_out_fd, 10240)
out_buffer += chunk
if prompt in out_buffer:
os.write(master_in_fd, to_bytes(self.ssh_passphrase, errors='strict') + b'\r')
prompt = second_prompt
else:
chunk = os.read(master_err_fd, 10240)
err_buffer += chunk
if prompt in err_buffer:
os.write(master_in_fd, to_bytes(self.ssh_passphrase, errors='strict') + b'\r')
prompt = second_prompt
if b'Overwrite (y/n)?' in out_buffer or b'Overwrite (y/n)?' in err_buffer:
# The key was created between us checking for existence and now
return (None, 'Key already exists', '')
rc = p.returncode
out = to_native(out_buffer)
err = to_native(err_buffer)
except OSError as e:
return (1, '', to_native(e))
else:
cmd.append('-N')
cmd.append('')
(rc, out, err) = self.execute_command(cmd, data=overwrite)
if rc == 0 and not self.module.check_mode:
# If the keys were successfully created, we should be able
# to tweak ownership.
os.chown(ssh_key_file, info[2], info[3])
os.chown('%s.pub' % ssh_key_file, info[2], info[3])
return (rc, out, err)
def ssh_key_fingerprint(self):
ssh_key_file = self.get_ssh_key_path()
if not os.path.exists(ssh_key_file):
return (1, 'SSH Key file %s does not exist' % ssh_key_file, '')
cmd = [self.module.get_bin_path('ssh-keygen', True)]
cmd.append('-l')
cmd.append('-f')
cmd.append(ssh_key_file)
return self.execute_command(cmd, obey_checkmode=False)
def get_ssh_public_key(self):
ssh_public_key_file = '%s.pub' % self.get_ssh_key_path()
try:
with open(ssh_public_key_file, 'r') as f:
ssh_public_key = f.read().strip()
except IOError:
return None
return ssh_public_key
def create_user(self):
# by default we use the create_user_useradd method
return self.create_user_useradd()
def remove_user(self):
# by default we use the remove_user_userdel method
return self.remove_user_userdel()
def modify_user(self):
# by default we use the modify_user_usermod method
return self.modify_user_usermod()
def create_homedir(self, path):
if not os.path.exists(path):
if self.skeleton is not None:
skeleton = self.skeleton
else:
skeleton = '/etc/skel'
if os.path.exists(skeleton):
try:
shutil.copytree(skeleton, path, symlinks=True)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
else:
try:
os.makedirs(path)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
# get umask from /etc/login.defs and set correct home mode
if os.path.exists(self.LOGIN_DEFS):
with open(self.LOGIN_DEFS, 'r') as f:
for line in f:
m = re.match(r'^UMASK\s+(\d+)$', line)
if m:
umask = int(m.group(1), 8)
mode = 0o777 & ~umask
try:
os.chmod(path, mode)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
def chown_homedir(self, uid, gid, path):
try:
os.chown(path, uid, gid)
for root, dirs, files in os.walk(path):
for d in dirs:
os.chown(os.path.join(root, d), uid, gid)
for f in files:
os.chown(os.path.join(root, f), uid, gid)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
# ===========================================
class FreeBsdUser(User):
"""
This is a FreeBSD User manipulation class - it uses the pw command
to manipulate the user database, followed by the chpass command
to change the password.
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'FreeBSD'
distribution = None
SHADOWFILE = '/etc/master.passwd'
SHADOWFILE_EXPIRE_INDEX = 6
DATE_FORMAT = '%d-%b-%Y'
def remove_user(self):
cmd = [
self.module.get_bin_path('pw', True),
'userdel',
'-n',
self.name
]
if self.remove:
cmd.append('-r')
return self.execute_command(cmd)
def create_user(self):
cmd = [
self.module.get_bin_path('pw', True),
'useradd',
'-n',
self.name,
]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.expires is not None:
cmd.append('-e')
if self.expires < time.gmtime(0):
cmd.append('0')
else:
cmd.append(str(calendar.timegm(self.expires)))
        # system cannot be handled currently - should we error if it's requested?
# create the user
(rc, out, err) = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# we have to set the password in a second command
if self.password is not None:
cmd = [
self.module.get_bin_path('chpass', True),
'-p',
self.password,
self.name
]
return self.execute_command(cmd)
return (rc, out, err)
def modify_user(self):
cmd = [
self.module.get_bin_path('pw', True),
'usermod',
'-n',
self.name
]
cmd_len = len(cmd)
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
if (info[5] != self.home and self.move_home) or (not os.path.exists(self.home) and self.create_home):
cmd.append('-m')
if info[5] != self.home:
cmd.append('-d')
cmd.append(self.home)
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
# find current login class
user_login_class = None
if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK):
with open(self.SHADOWFILE, 'r') as f:
for line in f:
if line.startswith('%s:' % self.name):
user_login_class = line.split(':')[4]
# act only if login_class change
if self.login_class != user_login_class:
cmd.append('-L')
cmd.append(self.login_class)
if self.groups is not None:
current_groups = self.user_group_membership()
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
groups_need_mod = False
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
new_groups = groups
if self.append:
new_groups = groups | set(current_groups)
cmd.append(','.join(new_groups))
if self.expires is not None:
current_expires = int(self.user_password()[1])
# If expiration is negative or zero and the current expiration is greater than zero, disable expiration.
            # In FreeBSD's pw(8), setting expiration to zero disables expiration. It does not expire the account.
if self.expires <= time.gmtime(0):
if current_expires > 0:
cmd.append('-e')
cmd.append('0')
else:
# Convert days since Epoch to seconds since Epoch as struct_time
current_expire_date = time.gmtime(current_expires)
# Current expires is negative or we compare year, month, and day only
if current_expires <= 0 or current_expire_date[:3] != self.expires[:3]:
cmd.append('-e')
cmd.append(str(calendar.timegm(self.expires)))
# modify the user if cmd will do anything
if cmd_len != len(cmd):
(rc, out, err) = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
else:
(rc, out, err) = (None, '', '')
# we have to set the password in a second command
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd = [
self.module.get_bin_path('chpass', True),
'-p',
self.password,
self.name
]
return self.execute_command(cmd)
# we have to lock/unlock the password in a distinct command
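        # (FreeBSD's ``pw lock`` marks the account by prefixing the stored hash
        # with the literal string '*LOCKED*', which is what the startswith()
        # checks below key off.)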
if self.password_lock and not info[1].startswith('*LOCKED*'):
cmd = [
self.module.get_bin_path('pw', True),
'lock',
self.name
]
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
return self.execute_command(cmd)
elif self.password_lock is False and info[1].startswith('*LOCKED*'):
cmd = [
self.module.get_bin_path('pw', True),
'unlock',
self.name
]
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
return self.execute_command(cmd)
return (rc, out, err)
class DragonFlyBsdUser(FreeBsdUser):
"""
This is a DragonFlyBSD User manipulation class - it inherits the
FreeBsdUser class behaviors, such as using the pw command to
manipulate the user database, followed by the chpass command
to change the password.
"""
platform = 'DragonFly'
class OpenBSDUser(User):
"""
This is a OpenBSD User manipulation class.
Main differences are that OpenBSD:-
    - has no concept of a "system" account.
    - has no force delete for users
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'OpenBSD'
distribution = None
SHADOWFILE = '/etc/master.passwd'
def create_user(self):
cmd = [self.module.get_bin_path('useradd', True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.password is not None and self.password != '*':
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
cmd.append(self.name)
return self.execute_command(cmd)
def remove_user_userdel(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def modify_user(self):
cmd = [self.module.get_bin_path('usermod', True)]
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups_option = '-S'
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_option = '-G'
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append(groups_option)
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
# find current login class
user_login_class = None
userinfo_cmd = [self.module.get_bin_path('userinfo', True), self.name]
(rc, out, err) = self.execute_command(userinfo_cmd, obey_checkmode=False)
for line in out.splitlines():
tokens = line.split()
if tokens[0] == 'class' and len(tokens) == 2:
user_login_class = tokens[1]
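            # Illustrative ``userinfo ansibulluser`` line consumed above
            # (hypothetical values):
            #   class      staff
            # Only a two-token 'class' line sets user_login_class.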
# act only if login_class change
if self.login_class != user_login_class:
cmd.append('-L')
cmd.append(self.login_class)
if self.password_lock and not info[1].startswith('*'):
cmd.append('-Z')
elif self.password_lock is False and info[1].startswith('*'):
cmd.append('-U')
if self.update_password == 'always' and self.password is not None \
and self.password != '*' and info[1] != self.password:
cmd.append('-p')
cmd.append(self.password)
# skip if no changes to be made
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
class NetBSDUser(User):
"""
This is a NetBSD User manipulation class.
Main differences are that NetBSD:-
    - has no concept of a "system" account.
    - has no force delete for users
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'NetBSD'
distribution = None
SHADOWFILE = '/etc/master.passwd'
def create_user(self):
cmd = [self.module.get_bin_path('useradd', True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
if len(groups) > 16:
self.module.fail_json(msg="Too many groups (%d) NetBSD allows for 16 max." % len(groups))
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.password is not None:
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
cmd.append(self.name)
return self.execute_command(cmd)
def remove_user_userdel(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def modify_user(self):
cmd = [self.module.get_bin_path('usermod', True)]
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups = set(current_groups).union(groups)
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
if len(groups) > 16:
self.module.fail_json(msg="Too many groups (%d) NetBSD allows for 16 max." % len(groups))
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd.append('-p')
cmd.append(self.password)
if self.password_lock and not info[1].startswith('*LOCKED*'):
cmd.append('-C yes')
elif self.password_lock is False and info[1].startswith('*LOCKED*'):
cmd.append('-C no')
# skip if no changes to be made
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
class SunOS(User):
"""
This is a SunOS User manipulation class - The main difference between
this class and the generic user class is that Solaris-type distros
don't support the concept of a "system" account and we need to
edit the /etc/shadow file manually to set a password. (Ugh)
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
- user_info()
"""
platform = 'SunOS'
distribution = None
SHADOWFILE = '/etc/shadow'
USER_ATTR = '/etc/user_attr'
def get_password_defaults(self):
# Read password aging defaults
try:
minweeks = ''
maxweeks = ''
warnweeks = ''
with open("/etc/default/passwd", 'r') as f:
for line in f:
line = line.strip()
if (line.startswith('#') or line == ''):
continue
m = re.match(r'^([^#]*)#(.*)$', line)
if m: # The line contains a hash / comment
line = m.group(1)
key, value = line.split('=')
if key == "MINWEEKS":
minweeks = value.rstrip('\n')
elif key == "MAXWEEKS":
maxweeks = value.rstrip('\n')
elif key == "WARNWEEKS":
warnweeks = value.rstrip('\n')
except Exception as err:
self.module.fail_json(msg="failed to read /etc/default/passwd: %s" % to_native(err))
return (minweeks, maxweeks, warnweeks)
def remove_user(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def create_user(self):
cmd = [self.module.get_bin_path('useradd', True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.profile is not None:
cmd.append('-P')
cmd.append(self.profile)
if self.authorization is not None:
cmd.append('-A')
cmd.append(self.authorization)
if self.role is not None:
cmd.append('-R')
cmd.append(self.role)
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
if not self.module.check_mode:
# we have to set the password by editing the /etc/shadow file
if self.password is not None:
self.backup_shadow()
minweeks, maxweeks, warnweeks = self.get_password_defaults()
try:
lines = []
with open(self.SHADOWFILE, 'rb') as f:
for line in f:
line = to_native(line, errors='surrogate_or_strict')
fields = line.strip().split(':')
if not fields[0] == self.name:
lines.append(line)
continue
fields[1] = self.password
fields[2] = str(int(time.time() // 86400))
if minweeks:
try:
fields[3] = str(int(minweeks) * 7)
except ValueError:
# mirror solaris, which allows for any value in this field, and ignores anything that is not an int.
pass
if maxweeks:
try:
fields[4] = str(int(maxweeks) * 7)
except ValueError:
# mirror solaris, which allows for any value in this field, and ignores anything that is not an int.
pass
if warnweeks:
try:
fields[5] = str(int(warnweeks) * 7)
except ValueError:
# mirror solaris, which allows for any value in this field, and ignores anything that is not an int.
pass
line = ':'.join(fields)
lines.append('%s\n' % line)
with open(self.SHADOWFILE, 'w+') as f:
f.writelines(lines)
except Exception as err:
self.module.fail_json(msg="failed to update users password: %s" % to_native(err))
return (rc, out, err)
def modify_user_usermod(self):
cmd = [self.module.get_bin_path('usermod', True)]
cmd_len = len(cmd)
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
groups_need_mod = False
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
new_groups = groups
if self.append:
new_groups.update(current_groups)
cmd.append(','.join(new_groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.profile is not None and info[7] != self.profile:
cmd.append('-P')
cmd.append(self.profile)
if self.authorization is not None and info[8] != self.authorization:
cmd.append('-A')
cmd.append(self.authorization)
if self.role is not None and info[9] != self.role:
cmd.append('-R')
cmd.append(self.role)
# modify the user if cmd will do anything
if cmd_len != len(cmd):
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
else:
(rc, out, err) = (None, '', '')
# we have to set the password by editing the /etc/shadow file
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
self.backup_shadow()
(rc, out, err) = (0, '', '')
if not self.module.check_mode:
minweeks, maxweeks, warnweeks = self.get_password_defaults()
try:
lines = []
with open(self.SHADOWFILE, 'rb') as f:
for line in f:
line = to_native(line, errors='surrogate_or_strict')
fields = line.strip().split(':')
if not fields[0] == self.name:
lines.append(line)
continue
fields[1] = self.password
fields[2] = str(int(time.time() // 86400))
if minweeks:
fields[3] = str(int(minweeks) * 7)
if maxweeks:
fields[4] = str(int(maxweeks) * 7)
if warnweeks:
fields[5] = str(int(warnweeks) * 7)
line = ':'.join(fields)
lines.append('%s\n' % line)
with open(self.SHADOWFILE, 'w+') as f:
f.writelines(lines)
rc = 0
except Exception as err:
self.module.fail_json(msg="failed to update users password: %s" % to_native(err))
return (rc, out, err)
def user_info(self):
info = super(SunOS, self).user_info()
if info:
info += self._user_attr_info()
return info
def _user_attr_info(self):
info = [''] * 3
with open(self.USER_ATTR, 'r') as file_handler:
for line in file_handler:
lines = line.strip().split('::::')
if lines[0] == self.name:
tmp = dict(x.split('=') for x in lines[1].split(';'))
info[0] = tmp.get('profiles', '')
info[1] = tmp.get('auths', '')
info[2] = tmp.get('roles', '')
return info
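    # Illustrative /etc/user_attr line (hypothetical values) as parsed above:
    #   ansibulluser::::profiles=System Administrator;auths=solaris.admin.usermgr.read;roles=root
    # Everything after the '::::' separator is a ';'-separated list of
    # key=value pairs from which profiles/auths/roles are extracted.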
class DarwinUser(User):
"""
This is a Darwin macOS User manipulation class.
Main differences are that Darwin:-
- Handles accounts in a database managed by dscl(1)
- Has no useradd/groupadd
- Does not create home directories
- User password must be cleartext
- UID must be given
    - System users must be under 500
This overrides the following methods from the generic class:-
- user_exists()
- create_user()
- remove_user()
- modify_user()
"""
platform = 'Darwin'
distribution = None
SHADOWFILE = None
dscl_directory = '.'
fields = [
('comment', 'RealName'),
('home', 'NFSHomeDirectory'),
('shell', 'UserShell'),
('uid', 'UniqueID'),
('group', 'PrimaryGroupID'),
('hidden', 'IsHidden'),
]
def __init__(self, module):
super(DarwinUser, self).__init__(module)
        # make the user hidden if the option is set, or defer to the system option
if self.hidden is None:
if self.system:
self.hidden = 1
elif self.hidden:
self.hidden = 1
else:
self.hidden = 0
# add hidden to processing if set
if self.hidden is not None:
self.fields.append(('hidden', 'IsHidden'))
def _get_dscl(self):
return [self.module.get_bin_path('dscl', True), self.dscl_directory]
def _list_user_groups(self):
cmd = self._get_dscl()
cmd += ['-search', '/Groups', 'GroupMembership', self.name]
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
groups = []
for line in out.splitlines():
if line.startswith(' ') or line.startswith(')'):
continue
groups.append(line.split()[0])
return groups
def _get_user_property(self, property):
        '''Return user PROPERTY as given by dscl(1) read, or None if not found.'''
cmd = self._get_dscl()
cmd += ['-read', '/Users/%s' % self.name, property]
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
if rc != 0:
return None
# from dscl(1)
# if property contains embedded spaces, the list will instead be
# displayed one entry per line, starting on the line after the key.
lines = out.splitlines()
# sys.stderr.write('*** |%s| %s -> %s\n' % (property, out, lines))
if len(lines) == 1:
return lines[0].split(': ')[1]
else:
if len(lines) > 2:
return '\n'.join([lines[1].strip()] + lines[2:])
else:
if len(lines) == 2:
return lines[1].strip()
else:
return None
def _get_next_uid(self, system=None):
'''
        Return the next available uid. If system=True, the
        uid should be below 500, if possible.
'''
cmd = self._get_dscl()
cmd += ['-list', '/Users', 'UniqueID']
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
if rc != 0:
self.module.fail_json(
msg="Unable to get the next available uid",
rc=rc,
out=out,
err=err
)
max_uid = 0
max_system_uid = 0
for line in out.splitlines():
current_uid = int(line.split(' ')[-1])
if max_uid < current_uid:
max_uid = current_uid
if max_system_uid < current_uid and current_uid < 500:
max_system_uid = current_uid
if system and (0 < max_system_uid < 499):
return max_system_uid + 1
return max_uid + 1
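    # Illustrative ``dscl . -list /Users UniqueID`` output (names and ids are
    # examples only):
    #   _amavisd              83
    #   root                   0
    #   someuser             501
    # The loop above takes the last space-separated token of each line as the uid.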
def _change_user_password(self):
'''Change password for SELF.NAME against SELF.PASSWORD.
Please note that password must be cleartext.
'''
        # some documentation on how passwords are stored on OSX:
# http://blog.lostpassword.com/2012/07/cracking-mac-os-x-lion-accounts-passwords/
# http://null-byte.wonderhowto.com/how-to/hack-mac-os-x-lion-passwords-0130036/
# http://pastebin.com/RYqxi7Ca
# on OSX 10.8+ hash is SALTED-SHA512-PBKDF2
# https://pythonhosted.org/passlib/lib/passlib.hash.pbkdf2_digest.html
# https://gist.github.com/nueh/8252572
cmd = self._get_dscl()
if self.password:
cmd += ['-passwd', '/Users/%s' % self.name, self.password]
else:
cmd += ['-create', '/Users/%s' % self.name, 'Password', '*']
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Error when changing password', err=err, out=out, rc=rc)
return (rc, out, err)
def _make_group_numerical(self):
        '''Convert SELF.GROUP to its numerical gid, as a string suitable for dscl.'''
if self.group is None:
self.group = 'nogroup'
try:
self.group = grp.getgrnam(self.group).gr_gid
except KeyError:
self.module.fail_json(msg='Group "%s" not found. Try to create it first using "group" module.' % self.group)
# We need to pass a string to dscl
self.group = str(self.group)
def __modify_group(self, group, action):
        '''Add SELF.NAME to or remove it from GROUP depending on ACTION.
        ACTION can be 'add' or 'remove'; anything else is treated as 'remove'. '''
if action == 'add':
option = '-a'
else:
option = '-d'
cmd = ['dseditgroup', '-o', 'edit', option, self.name, '-t', 'user', group]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot %s user "%s" to group "%s".'
% (action, self.name, group), err=err, out=out, rc=rc)
return (rc, out, err)
def _modify_group(self):
        '''Synchronize SELF.NAME's group membership with SELF.GROUPS,
        adding missing groups and, unless SELF.APPEND is set, removing extras. '''
rc = 0
out = ''
err = ''
changed = False
current = set(self._list_user_groups())
if self.groups is not None:
target = set(self.groups.split(','))
else:
target = set([])
if self.append is False:
for remove in current - target:
(_rc, _err, _out) = self.__modify_group(remove, 'delete')
                rc += _rc
out += _out
err += _err
changed = True
for add in target - current:
(_rc, _err, _out) = self.__modify_group(add, 'add')
rc += _rc
out += _out
err += _err
changed = True
return (rc, err, out, changed)
def _update_system_user(self):
        '''Hide or show the user on the login window according to SELF.SYSTEM.
Returns 0 if a change has been made, None otherwise.'''
plist_file = '/Library/Preferences/com.apple.loginwindow.plist'
# http://support.apple.com/kb/HT5017?viewlocale=en_US
cmd = ['defaults', 'read', plist_file, 'HiddenUsersList']
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
# returned value is
# (
# "_userA",
# "_UserB",
# userc
# )
hidden_users = []
for x in out.splitlines()[1:-1]:
try:
x = x.split('"')[1]
except IndexError:
x = x.strip()
hidden_users.append(x)
if self.system:
if self.name not in hidden_users:
cmd = ['defaults', 'write', plist_file, 'HiddenUsersList', '-array-add', self.name]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
                    self.module.fail_json(msg='Cannot add user "%s" to hidden user list.' % self.name, err=err, out=out, rc=rc)
return 0
else:
if self.name in hidden_users:
del (hidden_users[hidden_users.index(self.name)])
cmd = ['defaults', 'write', plist_file, 'HiddenUsersList', '-array'] + hidden_users
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot remove user "%s" from hidden user list.' % self.name, err=err, out=out, rc=rc)
return 0
def user_exists(self):
        '''Check if SELF.NAME is a known user on the system.'''
cmd = self._get_dscl()
cmd += ['-list', '/Users/%s' % self.name]
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
return rc == 0
def remove_user(self):
'''Delete SELF.NAME. If SELF.FORCE is true, remove its home directory.'''
info = self.user_info()
cmd = self._get_dscl()
cmd += ['-delete', '/Users/%s' % self.name]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot delete user "%s".' % self.name, err=err, out=out, rc=rc)
if self.force:
if os.path.exists(info[5]):
shutil.rmtree(info[5])
out += "Removed %s" % info[5]
return (rc, out, err)
def create_user(self, command_name='dscl'):
cmd = self._get_dscl()
cmd += ['-create', '/Users/%s' % self.name]
(rc, err, out) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot create user "%s".' % self.name, err=err, out=out, rc=rc)
self._make_group_numerical()
if self.uid is None:
self.uid = str(self._get_next_uid(self.system))
# Homedir is not created by default
if self.create_home:
if self.home is None:
self.home = '/Users/%s' % self.name
if not self.module.check_mode:
if not os.path.exists(self.home):
os.makedirs(self.home)
self.chown_homedir(int(self.uid), int(self.group), self.home)
# dscl sets shell to /usr/bin/false when UserShell is not specified
# so set the shell to /bin/bash when the user is not a system user
if not self.system and self.shell is None:
self.shell = '/bin/bash'
for field in self.fields:
if field[0] in self.__dict__ and self.__dict__[field[0]]:
cmd = self._get_dscl()
cmd += ['-create', '/Users/%s' % self.name, field[1], self.__dict__[field[0]]]
(rc, _err, _out) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot add property "%s" to user "%s".' % (field[0], self.name), err=err, out=out, rc=rc)
out += _out
err += _err
if rc != 0:
return (rc, _err, _out)
(rc, _err, _out) = self._change_user_password()
out += _out
err += _err
self._update_system_user()
# here we don't care about change status since it is a creation,
# thus changed is always true.
if self.groups:
(rc, _out, _err, changed) = self._modify_group()
out += _out
err += _err
return (rc, err, out)
def modify_user(self):
changed = None
out = ''
err = ''
if self.group:
self._make_group_numerical()
for field in self.fields:
if field[0] in self.__dict__ and self.__dict__[field[0]]:
current = self._get_user_property(field[1])
if current is None or current != to_text(self.__dict__[field[0]]):
cmd = self._get_dscl()
cmd += ['-create', '/Users/%s' % self.name, field[1], self.__dict__[field[0]]]
(rc, _err, _out) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(
msg='Cannot update property "%s" for user "%s".'
% (field[0], self.name), err=err, out=out, rc=rc)
changed = rc
out += _out
err += _err
if self.update_password == 'always' and self.password is not None:
(rc, _err, _out) = self._change_user_password()
out += _out
err += _err
changed = rc
if self.groups:
(rc, _out, _err, _changed) = self._modify_group()
out += _out
err += _err
if _changed is True:
changed = rc
rc = self._update_system_user()
if rc == 0:
changed = rc
return (changed, out, err)
class AIX(User):
"""
This is a AIX User manipulation class.
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
- parse_shadow_file()
"""
platform = 'AIX'
distribution = None
SHADOWFILE = '/etc/security/passwd'
def remove_user(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def create_user_useradd(self, command_name='useradd'):
cmd = [self.module.get_bin_path(command_name, True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
# set password with chpasswd
if self.password is not None:
cmd = []
cmd.append(self.module.get_bin_path('chpasswd', True))
cmd.append('-e')
cmd.append('-c')
self.execute_command(cmd, data="%s:%s" % (self.name, self.password))
return (rc, out, err)
def modify_user_usermod(self):
cmd = [self.module.get_bin_path('usermod', True)]
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
# skip if no changes to be made
if len(cmd) == 1:
(rc, out, err) = (None, '', '')
else:
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
# set password with chpasswd
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd = []
cmd.append(self.module.get_bin_path('chpasswd', True))
cmd.append('-e')
cmd.append('-c')
(rc2, out2, err2) = self.execute_command(cmd, data="%s:%s" % (self.name, self.password))
else:
(rc2, out2, err2) = (None, '', '')
if rc is not None:
return (rc, out + out2, err + err2)
else:
return (rc2, out + out2, err + err2)
def parse_shadow_file(self):
"""Example AIX shadowfile data:
nobody:
password = *
operator1:
password = {ssha512}06$xxxxxxxxxxxx....
lastupdate = 1549558094
test1:
password = *
lastupdate = 1553695126
"""
b_name = to_bytes(self.name)
b_passwd = b''
b_expires = b''
if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK):
with open(self.SHADOWFILE, 'rb') as bf:
b_lines = bf.readlines()
b_passwd_line = b''
b_expires_line = b''
try:
for index, b_line in enumerate(b_lines):
# Get password and lastupdate lines which come after the username
if b_line.startswith(b'%s:' % b_name):
b_passwd_line = b_lines[index + 1]
b_expires_line = b_lines[index + 2]
break
# Sanity check the lines because sometimes both are not present
if b' = ' in b_passwd_line:
b_passwd = b_passwd_line.split(b' = ', 1)[-1].strip()
if b' = ' in b_expires_line:
b_expires = b_expires_line.split(b' = ', 1)[-1].strip()
except IndexError:
self.module.fail_json(msg='Failed to parse shadow file %s' % self.SHADOWFILE)
passwd = to_native(b_passwd)
expires = to_native(b_expires) or -1
return passwd, expires
class HPUX(User):
"""
This is a HP-UX User manipulation class.
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'HP-UX'
distribution = None
SHADOWFILE = '/etc/shadow'
def create_user(self):
cmd = ['/usr/sam/lbin/useradd.sam']
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.password is not None:
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
cmd.append('-m')
else:
cmd.append('-M')
if self.system:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def remove_user(self):
cmd = ['/usr/sam/lbin/userdel.sam']
if self.force:
cmd.append('-F')
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def modify_user(self):
cmd = ['/usr/sam/lbin/usermod.sam']
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set(remove_existing=False)
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
new_groups = groups
if self.append:
new_groups = groups | set(current_groups)
cmd.append(','.join(new_groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
cmd.append('-d')
cmd.append(self.home)
if self.move_home:
cmd.append('-m')
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd.append('-F')
cmd.append('-p')
cmd.append(self.password)
# skip if no changes to be made
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
class BusyBox(User):
"""
This is the BusyBox class for use on systems that have adduser, deluser,
and delgroup commands. It overrides the following methods:
- create_user()
- remove_user()
- modify_user()
"""
def create_user(self):
cmd = [self.module.get_bin_path('adduser', True)]
cmd.append('-D')
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg='Group {0} does not exist'.format(self.group))
cmd.append('-G')
cmd.append(self.group)
if self.comment is not None:
cmd.append('-g')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-h')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if not self.create_home:
cmd.append('-H')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.system:
cmd.append('-S')
cmd.append(self.name)
rc, out, err = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
if self.password is not None:
cmd = [self.module.get_bin_path('chpasswd', True)]
cmd.append('--encrypted')
data = '{name}:{password}'.format(name=self.name, password=self.password)
rc, out, err = self.execute_command(cmd, data=data)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# Add to additional groups
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
add_cmd_bin = self.module.get_bin_path('adduser', True)
for group in groups:
cmd = [add_cmd_bin, self.name, group]
rc, out, err = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
return rc, out, err
def remove_user(self):
cmd = [
self.module.get_bin_path('deluser', True),
self.name
]
if self.remove:
cmd.append('--remove-home')
return self.execute_command(cmd)
def modify_user(self):
current_groups = self.user_group_membership()
groups = []
rc = None
out = ''
err = ''
info = self.user_info()
add_cmd_bin = self.module.get_bin_path('adduser', True)
remove_cmd_bin = self.module.get_bin_path('delgroup', True)
# Manage group membership
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
for g in groups:
if g in group_diff:
add_cmd = [add_cmd_bin, self.name, g]
rc, out, err = self.execute_command(add_cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
for g in group_diff:
if g not in groups and not self.append:
remove_cmd = [remove_cmd_bin, self.name, g]
rc, out, err = self.execute_command(remove_cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# Manage password
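        # Note: this branch compares the stored hash directly and updates the
        # password whenever it differs, without consulting self.update_password,
        # so 'update_password: on_create' is not honored on this code path.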
if self.password is not None:
if info[1] != self.password:
cmd = [self.module.get_bin_path('chpasswd', True)]
cmd.append('--encrypted')
data = '{name}:{password}'.format(name=self.name, password=self.password)
rc, out, err = self.execute_command(cmd, data=data)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
return rc, out, err
class Alpine(BusyBox):
"""
This is the Alpine User manipulation class. It inherits the BusyBox class
behaviors such as using adduser and deluser commands.
"""
platform = 'Linux'
distribution = 'Alpine'
def main():
ssh_defaults = dict(
bits=0,
type='rsa',
passphrase=None,
comment='ansible-generated on %s' % socket.gethostname()
)
module = AnsibleModule(
argument_spec=dict(
state=dict(type='str', default='present', choices=['absent', 'present']),
name=dict(type='str', required=True, aliases=['user']),
uid=dict(type='int'),
non_unique=dict(type='bool', default=False),
group=dict(type='str'),
groups=dict(type='list'),
comment=dict(type='str'),
home=dict(type='path'),
shell=dict(type='str'),
password=dict(type='str', no_log=True),
login_class=dict(type='str'),
# following options are specific to macOS
hidden=dict(type='bool'),
# following options are specific to selinux
seuser=dict(type='str'),
# following options are specific to userdel
force=dict(type='bool', default=False),
remove=dict(type='bool', default=False),
# following options are specific to useradd
create_home=dict(type='bool', default=True, aliases=['createhome']),
skeleton=dict(type='str'),
system=dict(type='bool', default=False),
# following options are specific to usermod
move_home=dict(type='bool', default=False),
append=dict(type='bool', default=False),
# following are specific to ssh key generation
generate_ssh_key=dict(type='bool'),
ssh_key_bits=dict(type='int', default=ssh_defaults['bits']),
ssh_key_type=dict(type='str', default=ssh_defaults['type']),
ssh_key_file=dict(type='path'),
ssh_key_comment=dict(type='str', default=ssh_defaults['comment']),
ssh_key_passphrase=dict(type='str', no_log=True),
update_password=dict(type='str', default='always', choices=['always', 'on_create']),
expires=dict(type='float'),
password_lock=dict(type='bool'),
local=dict(type='bool'),
profile=dict(type='str'),
authorization=dict(type='str'),
role=dict(type='str'),
),
supports_check_mode=True,
mutually_exclusive=[
('local', 'groups'),
('local', 'append')
]
)
user = User(module)
user.check_password_encrypted()
module.debug('User instantiated - platform %s' % user.platform)
if user.distribution:
module.debug('User instantiated - distribution %s' % user.distribution)
rc = None
out = ''
err = ''
result = {}
result['name'] = user.name
result['state'] = user.state
if user.state == 'absent':
if user.user_exists():
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = user.remove_user()
if rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
result['force'] = user.force
result['remove'] = user.remove
elif user.state == 'present':
if not user.user_exists():
if module.check_mode:
module.exit_json(changed=True)
# Check to see if the provided home path contains parent directories
# that do not exist.
path_needs_parents = False
if user.home:
parent = os.path.dirname(user.home)
if not os.path.isdir(parent):
path_needs_parents = True
(rc, out, err) = user.create_user()
# If the home path had parent directories that needed to be created,
# make sure file permissions are correct in the created home directory.
if path_needs_parents:
info = user.user_info()
if info is not False:
user.chown_homedir(info[2], info[3], user.home)
if module.check_mode:
result['system'] = user.name
else:
result['system'] = user.system
result['create_home'] = user.create_home
else:
# modify user (note: this function is check mode aware)
(rc, out, err) = user.modify_user()
result['append'] = user.append
result['move_home'] = user.move_home
if rc is not None and rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
if user.password is not None:
result['password'] = 'NOT_LOGGING_PASSWORD'
if rc is None:
result['changed'] = False
else:
result['changed'] = True
if out:
result['stdout'] = out
if err:
result['stderr'] = err
if user.user_exists() and user.state == 'present':
info = user.user_info()
if info is False:
result['msg'] = "failed to look up user name: %s" % user.name
result['failed'] = True
result['uid'] = info[2]
result['group'] = info[3]
result['comment'] = info[4]
result['home'] = info[5]
result['shell'] = info[6]
if user.groups is not None:
result['groups'] = user.groups
# handle missing homedirs
info = user.user_info()
if user.home is None:
user.home = info[5]
if not os.path.exists(user.home) and user.create_home:
if not module.check_mode:
user.create_homedir(user.home)
user.chown_homedir(info[2], info[3], user.home)
result['changed'] = True
# deal with ssh key
if user.sshkeygen:
# generate ssh key (note: this function is check mode aware)
(rc, out, err) = user.ssh_key_gen()
if rc is not None and rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
if rc == 0:
result['changed'] = True
(rc, out, err) = user.ssh_key_fingerprint()
if rc == 0:
result['ssh_fingerprint'] = out.strip()
else:
result['ssh_fingerprint'] = err.strip()
result['ssh_key_file'] = user.get_ssh_key_path()
result['ssh_public_key'] = user.get_ssh_public_key()
module.exit_json(**result)
# import module snippets
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,711 |
user.py PR45507 broke update_password="on_create"
|
##### SUMMARY
The merge of #45507 broke `update_password="on_create"` on Alpine Linux with the shadow package installed.
This does not work anymore
```yml
- name: add users
user:
name: "{{ item.name }}"
comment: "{{ item.comment }}"
shell: "/bin/zsh"
password: '*'
update_password: on_create
with_items:
....
```
After the PR, all users that have a password that is not '*' will get their password changed.
Reverting bf3e397ea7922e8bd8b8ffbc05d1aede0317d119 makes `update_password: on_create` work as expected again.
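For reference, a minimal sketch of the gating logic `update_password` is expected to enforce (a hypothetical helper for illustration, not the module's actual internals):
```python
# Hedged sketch: with update_password='on_create', an existing account's hash
# must never be rewritten; only 'always' may change it, and only when the
# stored hash differs from the requested one.
def should_set_password(user_exists, update_password, current_hash, new_hash):
    if not user_exists:
        return True  # the password is set while creating the account
    if update_password == 'always':
        return current_hash != new_hash
    return False  # 'on_create' leaves existing accounts untouched

# An existing user keeps a non-'*' hash when on_create is requested:
assert not should_set_password(True, 'on_create', '$6$salt$hash', '*')
```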
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
lib/ansible/modules/system/user.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.8.7
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Alpine Linux
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: add users
user:
name: "{{ item.name }}"
comment: "{{ item.comment }}"
shell: "/bin/zsh"
password: '*'
update_password: on_create
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Users that are not being created do not get their password changed.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
Password "hash" changed to '*'
```
|
https://github.com/ansible/ansible/issues/65711
|
https://github.com/ansible/ansible/pull/65977
|
d50fac9905b8143420ba8486d536553693dddfe6
|
18130e1419423b6ae892899d08509665b21611b2
| 2019-12-10T20:57:03Z |
python
| 2019-12-20T18:09:22Z |
test/integration/targets/user/tasks/main.yml
|
# Test code for the user module.
# (c) 2017, James Tanner <[email protected]>
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
## user add
- name: remove the test user
user:
name: ansibulluser
state: absent
- name: try to create a user
user:
name: ansibulluser
state: present
register: user_test0_0
- name: create the user again
user:
name: ansibulluser
state: present
register: user_test0_1
- debug:
var: user_test0_1
verbosity: 2
- name: make a list of users
script: userlist.sh {{ ansible_facts.distribution }}
register: user_names
- debug:
var: user_names
verbosity: 2
- name: validate results for testcase 0
assert:
that:
- user_test0_0 is changed
- user_test0_1 is not changed
- '"ansibulluser" in user_names.stdout_lines'
# test adding user with uid
# https://github.com/ansible/ansible/issues/62969
- name: remove the test user
user:
name: ansibulluser
state: absent
- name: try to create a user with uid
user:
name: ansibulluser
state: present
uid: 572
register: user_test01_0
- name: create the user again
user:
name: ansibulluser
state: present
uid: 572
register: user_test01_1
- name: validate results for testcase 0
assert:
that:
- user_test01_0 is changed
- user_test01_1 is not changed
# test user add with password
- name: add an encrypted password for user
user:
name: ansibulluser
password: "$6$rounds=656000$TT4O7jz2M57npccl$33LF6FcUMSW11qrESXL1HX0BS.bsiT6aenFLLiVpsQh6hDtI9pJh5iY7x8J7ePkN4fP8hmElidHXaeD51pbGS."
state: present
update_password: always
register: test_user_encrypt0
- name: there should not be warnings
assert:
that: "'warnings' not in test_user_encrypt0"
- block:
- name: add a plaintext password for user
user:
name: ansibulluser
password: "plaintextpassword"
state: present
update_password: always
register: test_user_encrypt1
- name: there should be a warning complaining that the password is plaintext
assert:
that: "'warnings' in test_user_encrypt1"
- name: add an invalid hashed password
user:
name: ansibulluser
password: "$6$rounds=656000$tgK3gYTyRLUmhyv2$lAFrYUQwn7E6VsjPOwQwoSx30lmpiU9r/E0Al7tzKrR9mkodcMEZGe9OXD0H/clOn6qdsUnaL4zefy5fG+++++"
state: present
update_password: always
register: test_user_encrypt2
- name: there should be a warning complaining about the character set of the password
assert:
that: "'warnings' in test_user_encrypt2"
- name: change password to '!'
user:
name: ansibulluser
password: '!'
register: test_user_encrypt3
- name: change password to '*'
user:
name: ansibulluser
password: '*'
register: test_user_encrypt4
- name: change password to '*************'
user:
name: ansibulluser
password: '*************'
register: test_user_encrypt5
- name: there should be no warnings when setting the password to '!', '*' or '*************'
assert:
that:
- "'warnings' not in test_user_encrypt3"
- "'warnings' not in test_user_encrypt4"
- "'warnings' not in test_user_encrypt5"
when: ansible_facts.system != 'Darwin'
# https://github.com/ansible/ansible/issues/42484
# Skipping macOS for now since there is a bug when changing home directory
- block:
- name: create user specifying home
user:
name: ansibulluser
state: present
home: "{{ user_home_prefix[ansible_facts.system] }}/ansibulluser"
register: user_test3_0
- name: create user again specifying home
user:
name: ansibulluser
state: present
home: "{{ user_home_prefix[ansible_facts.system] }}/ansibulluser"
register: user_test3_1
- name: change user home
user:
name: ansibulluser
state: present
home: "{{ user_home_prefix[ansible_facts.system] }}/ansibulluser-mod"
register: user_test3_2
- name: change user home back
user:
name: ansibulluser
state: present
home: "{{ user_home_prefix[ansible_facts.system] }}/ansibulluser"
register: user_test3_3
- name: validate results for testcase 3
assert:
that:
- user_test3_0 is not changed
- user_test3_1 is not changed
- user_test3_2 is changed
- user_test3_3 is changed
when: ansible_facts.system != 'Darwin'
# https://github.com/ansible/ansible/issues/41393
# Create a new user account with a path that has parent directories that do not exist
- name: Create user with home path that has parents that do not exist
user:
name: ansibulluser2
state: present
home: "{{ user_home_prefix[ansible_facts.system] }}/in2deep/ansibulluser2"
register: create_home_with_no_parent_1
- name: Create user with home path that has parents that do not exist again
user:
name: ansibulluser2
state: present
home: "{{ user_home_prefix[ansible_facts.system] }}/in2deep/ansibulluser2"
register: create_home_with_no_parent_2
- name: Check the created home directory
stat:
path: "{{ user_home_prefix[ansible_facts.system] }}/in2deep/ansibulluser2"
register: home_with_no_parent_3
- name: Ensure user with non-existing parent paths was created successfully
assert:
that:
- create_home_with_no_parent_1 is changed
- create_home_with_no_parent_1.home == user_home_prefix[ansible_facts.system] ~ '/in2deep/ansibulluser2'
- create_home_with_no_parent_2 is not changed
- home_with_no_parent_3.stat.uid == create_home_with_no_parent_1.uid
- home_with_no_parent_3.stat.gr_name == default_user_group[ansible_facts.distribution] | default('ansibulluser2')
- name: Cleanup test account
user:
name: ansibulluser2
home: "{{ user_home_prefix[ansible_facts.system] }}/in2deep/ansibulluser2"
state: absent
remove: yes
- name: Remove testing dir
file:
path: "{{ user_home_prefix[ansible_facts.system] }}/in2deep/"
state: absent
# https://github.com/ansible/ansible/issues/60307
# Make sure we can create a user when the home directory is missing
- name: Create user with home path that does not exist
user:
name: ansibulluser3
state: present
home: "{{ user_home_prefix[ansible_facts.system] }}/nosuchdir"
createhome: no
- name: Cleanup test account
user:
name: ansibulluser3
state: absent
remove: yes
## user check
- name: run existing user check tests
user:
name: "{{ user_names.stdout_lines | random }}"
state: present
create_home: no
loop: "{{ range(1, 5+1) | list }}"
register: user_test1
- debug:
var: user_test1
verbosity: 2
- name: validate results for testcase 1
assert:
that:
- user_test1.results is defined
- user_test1.results | length == 5
- name: validate changed results for testcase 1
assert:
that:
- "user_test1.results[0] is not changed"
- "user_test1.results[1] is not changed"
- "user_test1.results[2] is not changed"
- "user_test1.results[3] is not changed"
- "user_test1.results[4] is not changed"
- "user_test1.results[0]['state'] == 'present'"
- "user_test1.results[1]['state'] == 'present'"
- "user_test1.results[2]['state'] == 'present'"
- "user_test1.results[3]['state'] == 'present'"
- "user_test1.results[4]['state'] == 'present'"
## user remove
- name: try to delete the user
user:
name: ansibulluser
state: absent
force: true
register: user_test2
- name: make a new list of users
script: userlist.sh {{ ansible_facts.distribution }}
register: user_names2
- debug:
var: user_names2
verbosity: 2
- name: validate results for testcase 2
assert:
that:
- '"ansibulluser" not in user_names2.stdout_lines'
## create user without home and test fallback home dir create
- block:
- name: create the user
user:
name: ansibulluser
- name: delete the user and home dir
user:
name: ansibulluser
state: absent
force: true
remove: true
- name: create the user without home
user:
name: ansibulluser
create_home: no
- name: create the user home dir
user:
name: ansibulluser
register: user_create_home_fallback
- name: stat home dir
stat:
path: '{{ user_create_home_fallback.home }}'
register: user_create_home_fallback_dir
- name: read UMASK from /etc/login.defs and return mode
shell: |
import re
import os
umask = None
try:
    for line in open('/etc/login.defs').readlines():
        m = re.match(r'^UMASK\s+(\d+)$', line)
        if m:
            umask = int(m.group(1), 8)
except (IOError, OSError):
    pass
if umask is None:
    # fall back to the process umask when the file is missing or UMASK is unset
    umask = os.umask(0)
mode = oct(0o777 & ~umask)
print(str(mode).replace('o', ''))
args:
executable: "{{ ansible_python_interpreter }}"
register: user_login_defs_umask
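# Worked example of the arithmetic above (illustrative): with UMASK 022 in
# /etc/login.defs, 0o777 & ~0o022 == 0o755, so the fallback home directory
# checked below is expected to be created with mode 0755.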
- name: validate that user home dir is created
assert:
that:
- user_create_home_fallback is changed
- user_create_home_fallback_dir.stat.exists
- user_create_home_fallback_dir.stat.isdir
- user_create_home_fallback_dir.stat.pw_name == 'ansibulluser'
- user_create_home_fallback_dir.stat.mode == user_login_defs_umask.stdout
when: ansible_facts.system != 'Darwin'
- block:
- name: create non-system user on macOS to test the shell is set to /bin/bash
user:
name: macosuser
register: macosuser_output
- name: validate the shell is set to /bin/bash
assert:
that:
- 'macosuser_output.shell == "/bin/bash"'
- name: cleanup
user:
name: macosuser
state: absent
- name: create system user on macos to test the shell is set to /usr/bin/false
user:
name: macosuser
system: yes
register: macosuser_output
- name: validate the shell is set to /usr/bin/false
assert:
that:
- 'macosuser_output.shell == "/usr/bin/false"'
- name: cleanup
user:
name: macosuser
state: absent
- name: create non-system user on macos and set the shell to /bin/sh
user:
name: macosuser
shell: /bin/sh
register: macosuser_output
- name: validate the shell is set to /bin/sh
assert:
that:
- 'macosuser_output.shell == "/bin/sh"'
- name: cleanup
user:
name: macosuser
state: absent
when: ansible_facts.distribution == "MacOSX"
## user expires
# Date is March 3, 2050
- name: Set user expiration
user:
name: ansibulluser
state: present
expires: 2529881062
register: user_test_expires1
tags:
- timezone
- name: Set user expiration again to ensure no change is made
user:
name: ansibulluser
state: present
expires: 2529881062
register: user_test_expires2
tags:
- timezone
- name: Ensure that account with expiration was created and did not change on subsequent run
assert:
that:
- user_test_expires1 is changed
- user_test_expires2 is not changed
- name: Verify expiration date for Linux
block:
- name: LINUX | Get expiration date for ansibulluser
getent:
database: shadow
key: ansibulluser
- name: LINUX | Ensure proper expiration date was set
assert:
that:
- getent_shadow['ansibulluser'][6] == '29281'
when: ansible_facts.os_family in ['RedHat', 'Debian', 'Suse']
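# The shadow expiry field stores days since the epoch, so the value asserted
# above is 2529881062 seconds / 86400 seconds per day, truncated to 29281.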
- name: Verify expiration date for BSD
block:
- name: BSD | Get expiration date for ansibulluser
shell: 'grep ansibulluser /etc/master.passwd | cut -d: -f 7'
changed_when: no
register: bsd_account_expiration
- name: BSD | Ensure proper expiration date was set
assert:
that:
- bsd_account_expiration.stdout == '2529881062'
when: ansible_facts.os_family == 'FreeBSD'
- name: Change timezone
timezone:
name: America/Denver
register: original_timezone
tags:
- timezone
- name: Change system timezone to make sure expiration comparison works properly
block:
- name: Create user with expiration again to ensure no change is made in a new timezone
user:
name: ansibulluser
state: present
expires: 2529881062
register: user_test_different_tz
tags:
- timezone
- name: Ensure that no change was reported
assert:
that:
- user_test_different_tz is not changed
tags:
- timezone
always:
- name: Restore original timezone - {{ original_timezone.diff.before.name }}
timezone:
name: "{{ original_timezone.diff.before.name }}"
when: original_timezone.diff.before.name != "n/a"
tags:
- timezone
- name: Restore original timezone when n/a
file:
path: /etc/sysconfig/clock
state: absent
when:
- original_timezone.diff.before.name == "n/a"
- "'/etc/sysconfig/clock' in original_timezone.msg"
tags:
- timezone
- name: Unexpire user
user:
name: ansibulluser
state: present
expires: -1
register: user_test_expires3
- name: Verify unset expiration date for Linux
block:
- name: LINUX | Get expiration date for ansibulluser
getent:
database: shadow
key: ansibulluser
- name: LINUX | Ensure proper expiration date was set
assert:
msg: "expiry is supposed to be empty or -1, not {{ getent_shadow['ansibulluser'][6] }}"
that:
- not getent_shadow['ansibulluser'][6] or getent_shadow['ansibulluser'][6] | int < 0
when: ansible_facts.os_family in ['RedHat', 'Debian', 'Suse']
- name: Verify unset expiration date for Linux/BSD
block:
- name: Unexpire user again to check for change
user:
name: ansibulluser
state: present
expires: -1
register: user_test_expires4
- name: Ensure first expiration reported a change and second did not
assert:
msg: The second run of the expiration removal task reported a change when it should not
that:
- user_test_expires3 is changed
- user_test_expires4 is not changed
when: ansible_facts.os_family in ['RedHat', 'Debian', 'Suse', 'FreeBSD']
- name: Verify unset expiration date for BSD
block:
- name: BSD | Get expiration date for ansibulluser
shell: 'grep ansibulluser /etc/master.passwd | cut -d: -f 7'
changed_when: no
register: bsd_account_expiration
- name: BSD | Ensure proper expiration date was set
assert:
msg: "expiry is supposed to be '0', not {{ bsd_account_expiration.stdout }}"
that:
- bsd_account_expiration.stdout == '0'
when: ansible_facts.os_family == 'FreeBSD'
# Test setting no expiration when creating a new account
# https://github.com/ansible/ansible/issues/44155
- name: Remove ansibulluser
user:
name: ansibulluser
state: absent
- name: Create user account without expiration
user:
name: ansibulluser
state: present
expires: -1
register: user_test_create_no_expires_1
- name: Create user account without expiration again
user:
name: ansibulluser
state: present
expires: -1
register: user_test_create_no_expires_2
- name: Ensure changes were made appropriately
assert:
msg: Setting 'expires=-1' resulted in incorrect changes
that:
- user_test_create_no_expires_1 is changed
- user_test_create_no_expires_2 is not changed
- name: Verify unset expiration date for Linux
block:
- name: LINUX | Get expiration date for ansibulluser
getent:
database: shadow
key: ansibulluser
- name: LINUX | Ensure proper expiration date was set
assert:
msg: "expiry is supposed to be empty or -1, not {{ getent_shadow['ansibulluser'][6] }}"
that:
- not getent_shadow['ansibulluser'][6] or getent_shadow['ansibulluser'][6] | int < 0
when: ansible_facts.os_family in ['RedHat', 'Debian', 'Suse']
- name: Verify unset expiration date for BSD
block:
- name: BSD | Get expiration date for ansibulluser
shell: 'grep ansibulluser /etc/master.passwd | cut -d: -f 7'
changed_when: no
register: bsd_account_expiration
- name: BSD | Ensure proper expiration date was set
assert:
msg: "expiry is supposed to be '0', not {{ bsd_account_expiration.stdout }}"
that:
- bsd_account_expiration.stdout == '0'
when: ansible_facts.os_family == 'FreeBSD'
# Test setting epoch 0 expiration when creating a new account, then removing the expiry
# https://github.com/ansible/ansible/issues/47114
- name: Remove ansibulluser
user:
name: ansibulluser
state: absent
- name: Create user account with epoch 0 expiration
user:
name: ansibulluser
state: present
expires: 0
register: user_test_expires_create0_1
- name: Create user account with epoch 0 expiration again
user:
name: ansibulluser
state: present
expires: 0
register: user_test_expires_create0_2
- name: Change the user account to remove the expiry time
user:
name: ansibulluser
expires: -1
register: user_test_remove_expires_1
- name: Change the user account to remove the expiry time again
user:
name: ansibulluser
expires: -1
register: user_test_remove_expires_2
- name: Verify unset expiration date for Linux
block:
- name: LINUX | Ensure changes were made appropriately
assert:
msg: Creating an account with 'expires=0' then removing that expiration with 'expires=-1' resulted in incorrect changes
that:
- user_test_expires_create0_1 is changed
- user_test_expires_create0_2 is not changed
- user_test_remove_expires_1 is changed
- user_test_remove_expires_2 is not changed
- name: LINUX | Get expiration date for ansibulluser
getent:
database: shadow
key: ansibulluser
- name: LINUX | Ensure proper expiration date was set
assert:
msg: "expiry is supposed to be empty or -1, not {{ getent_shadow['ansibulluser'][6] }}"
that:
- not getent_shadow['ansibulluser'][6] or getent_shadow['ansibulluser'][6] | int < 0
when: ansible_facts.os_family in ['RedHat', 'Debian', 'Suse']
- name: Verify proper expiration behavior for BSD
block:
- name: BSD | Ensure changes were made appropriately
assert:
msg: Creating an account with 'expires=0' then removing that expiration with 'expires=-1' resulted in incorrect changes
that:
- user_test_expires_create0_1 is changed
- user_test_expires_create0_2 is not changed
- user_test_remove_expires_1 is not changed
- user_test_remove_expires_2 is not changed
when: ansible_facts.os_family == 'FreeBSD'
# Test expiration with a very large negative number. This should have the same
# result as setting -1.
- name: Set expiration date using very long negative number
user:
name: ansibulluser
state: present
expires: -2529881062
register: user_test_expires5
- name: Ensure no change was made
assert:
that:
- user_test_expires5 is not changed
- name: Verify unset expiration date for Linux
block:
- name: LINUX | Get expiration date for ansibulluser
getent:
database: shadow
key: ansibulluser
- name: LINUX | Ensure proper expiration date was set
assert:
msg: "expiry is supposed to be empty or -1, not {{ getent_shadow['ansibulluser'][6] }}"
that:
- not getent_shadow['ansibulluser'][6] or getent_shadow['ansibulluser'][6] | int < 0
when: ansible_facts.os_family in ['RedHat', 'Debian', 'Suse']
- name: Verify unset expiration date for BSD
block:
- name: BSD | Get expiration date for ansibulluser
shell: 'grep ansibulluser /etc/master.passwd | cut -d: -f 7'
changed_when: no
register: bsd_account_expiration
- name: BSD | Ensure proper expiration date was set
assert:
msg: "expiry is supposed to be '0', not {{ bsd_account_expiration.stdout }}"
that:
- bsd_account_expiration.stdout == '0'
when: ansible_facts.os_family == 'FreeBSD'
## shadow backup
- block:
- name: Create a user to test shadow file backup
user:
name: ansibulluser
state: present
register: result
- name: Find shadow backup files
find:
path: /etc
patterns: 'shadow\..*~$'
use_regex: yes
register: shadow_backups
- name: Assert that a backup file was created
assert:
that:
- result.bakup
- shadow_backups.files | map(attribute='path') | list | length > 0
when: ansible_facts.os_family == 'Solaris'
# Test creating ssh key with passphrase
- name: Remove ansibulluser
user:
name: ansibulluser
state: absent
- name: Create user with ssh key
user:
name: ansibulluser
state: present
generate_ssh_key: yes
ssh_key_file: "{{ output_dir }}/test_id_rsa"
ssh_key_passphrase: secret_passphrase
- name: Unlock ssh key
command: "ssh-keygen -y -f {{ output_dir }}/test_id_rsa -P secret_passphrase"
register: result
- name: Check that ssh key was unlocked successfully
assert:
that:
- result.rc == 0
- name: Clean ssh key
file:
path: "{{ output_dir }}/test_id_rsa"
state: absent
when: ansible_os_family == 'FreeBSD'
## password lock
- block:
- name: Set password for ansibulluser
user:
name: ansibulluser
password: "$6$rounds=656000$TT4O7jz2M57npccl$33LF6FcUMSW11qrESXL1HX0BS.bsiT6aenFLLiVpsQh6hDtI9pJh5iY7x8J7ePkN4fP8hmElidHXaeD51pbGS."
- name: Lock account
user:
name: ansibulluser
password_lock: yes
register: password_lock_1
- name: Lock account again
user:
name: ansibulluser
password_lock: yes
register: password_lock_2
- name: Unlock account
user:
name: ansibulluser
password_lock: no
register: password_lock_3
- name: Unlock account again
user:
name: ansibulluser
password_lock: no
register: password_lock_4
- name: Ensure task reported changes appropriately
assert:
msg: The password_lock tasks did not make changes appropriately
that:
- password_lock_1 is changed
- password_lock_2 is not changed
- password_lock_3 is changed
- password_lock_4 is not changed
- name: Lock account
user:
name: ansibulluser
password_lock: yes
- name: Verify account lock for BSD
block:
- name: BSD | Get account status
shell: "{{ status_command[ansible_facts['system']] }}"
register: account_status_locked
- name: Unlock account
user:
name: ansibulluser
password_lock: no
- name: BSD | Get account status
shell: "{{ status_command[ansible_facts['system']] }}"
register: account_status_unlocked
- name: FreeBSD | Ensure account is locked
assert:
that:
- "'LOCKED' in account_status_locked.stdout"
- "'LOCKED' not in account_status_unlocked.stdout"
when: ansible_facts['system'] == 'FreeBSD'
when: ansible_facts['system'] in ['FreeBSD', 'OpenBSD']
- name: Verify account lock for Linux
block:
- name: LINUX | Get account status
getent:
database: shadow
key: ansibulluser
- name: LINUX | Ensure account is locked
assert:
that:
- getent_shadow['ansibulluser'][0].startswith('!')
- name: Unlock account
user:
name: ansibulluser
password_lock: no
- name: LINUX | Get account status
getent:
database: shadow
key: ansibulluser
- name: LINUX | Ensure account is unlocked
assert:
that:
- not getent_shadow['ansibulluser'][0].startswith('!')
when: ansible_facts['system'] == 'Linux'
always:
- name: Unlock account
user:
name: ansibulluser
password_lock: no
when: ansible_facts['system'] in ['FreeBSD', 'OpenBSD', 'Linux']
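# Background for the lock assertions above: on Linux, locking an account
# prepends '!' to the hash field in /etc/shadow (e.g. '$6$salt$hash' becomes
# '!$6$salt$hash'), which is exactly what the getent checks test with
# startswith('!').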
## Check local mode
# Even if we don't have a system that is bound to a directory, it's useful
# to run with local: true to exercise the code path that reads through the local
# user database file.
# https://github.com/ansible/ansible/issues/50947
- name: Create /etc/gshadow
file:
path: /etc/gshadow
state: touch
when: ansible_facts.os_family == 'Suse'
tags:
- user_test_local_mode
- name: Create /etc/libuser.conf
file:
path: /etc/libuser.conf
state: touch
when:
- ansible_facts.distribution == 'Ubuntu'
- ansible_facts.distribution_major_version is version_compare('16', '==')
tags:
- user_test_local_mode
- name: Ensure luseradd is present
action: "{{ ansible_facts.pkg_mgr }}"
args:
name: libuser
state: present
when: ansible_facts.system in ['Linux']
tags:
- user_test_local_mode
- name: Create local account that already exists to check for warning
user:
name: root
local: yes
register: local_existing
tags:
- user_test_local_mode
- name: Create local_ansibulluser
user:
name: local_ansibulluser
state: present
local: yes
register: local_user_test_1
tags:
- user_test_local_mode
- name: Create local_ansibulluser again
user:
name: local_ansibulluser
state: present
local: yes
register: local_user_test_2
tags:
- user_test_local_mode
- name: Remove local_ansibulluser
user:
name: local_ansibulluser
state: absent
remove: yes
local: yes
register: local_user_test_remove_1
tags:
- user_test_local_mode
- name: Remove local_ansibulluser again
user:
name: local_ansibulluser
state: absent
remove: yes
local: yes
register: local_user_test_remove_2
tags:
- user_test_local_mode
- name: Create test group
group:
name: testgroup
tags:
- user_test_local_mode
- name: Create local_ansibulluser with groups
user:
name: local_ansibulluser
state: present
local: yes
groups: testgroup
register: local_user_test_3
ignore_errors: yes
tags:
- user_test_local_mode
- name: Append groups for local_ansibulluser
user:
name: local_ansibulluser
state: present
local: yes
append: yes
register: local_user_test_4
ignore_errors: yes
tags:
- user_test_local_mode
- name: Ensure local user accounts were created and removed properly
assert:
that:
- local_user_test_1 is changed
- local_user_test_2 is not changed
- local_user_test_3 is failed
- "local_user_test_3['msg'] is search('parameters are mutually exclusive: groups|local')"
- local_user_test_4 is failed
- "local_user_test_4['msg'] is search('parameters are mutually exclusive: groups|append')"
- local_user_test_remove_1 is changed
- local_user_test_remove_2 is not changed
tags:
- user_test_local_mode
- name: Ensure warnings were displayed properly
assert:
that:
- local_user_test_1['warnings'] | length > 0
- local_user_test_1['warnings'] | first is search('The local user account may already exist')
- local_existing['warnings'] is not defined
when: ansible_facts.system in ['Linux']
tags:
- user_test_local_mode
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,907 |
ansible-test code coverage vs Collections: ERROR: 'CoverageData' object has no attribute 'read_file'
|
##### SUMMARY
```
ansible-test units --docker -v --color --python 3.6 --coverage
ansible-test coverage xml -v --requirements --group-by command --group-by version ${stub:+"$stub"}
```
fails with
```
ERROR: 'CoverageData' object has no attribute 'read_file'
```
I can see that `ansible-test coverage` is finding the files, such as: `ansible_collections/gundalow_collection/grafana/tests/output/coverage/integration=grafana_datasource=local-3.7=python-3.7=coverage.jobarker.remote.csb.18264.626864`
This may be an issue in my setup, or in ansible-test.
Steps to reproduce https://github.com/ansible-collections/grafana/pull/20
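The error is consistent with the coverage 5.0 API change that removed `CoverageData.read_file()`. A hedged, version-compatible sketch of reading a data file (using only the documented coverage API; the function name is illustrative):
```python
# Illustrative only: coverage 5.x replaced read_file() with a basename passed
# to the CoverageData constructor plus an explicit read().
import coverage

def read_coverage_file(path):
    if coverage.version_info >= (5, 0):
        data = coverage.CoverageData(basename=path)  # 5.x API
        data.read()
    else:
        data = coverage.CoverageData()  # 4.x API
        data.read_file(path)
    return data
```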
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
```paste below
2.9
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
##### EXPECTED RESULTS
##### ACTUAL RESULTS
```paste below
ERROR: 'CoverageData' object has no attribute 'read_file'
```
|
https://github.com/ansible/ansible/issues/65907
|
https://github.com/ansible/ansible/pull/65999
|
2b6d94c4c85a4fffeb592f058dcdd37bb05893cc
|
9ea5b539b60cb7035f08ac17688976a8e6dfb126
| 2019-12-17T11:25:07Z |
python
| 2019-12-20T19:55:54Z |
changelogs/fragments/ansible-test-coverage-constraint.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,907 |
ansible-test code coverage vs Collections: ERROR: 'CoverageData' object has no attribute 'read_file'
|
##### SUMMARY
```
ansible-test units --docker -v --color --python 3.6 --coverage
ansible-test coverage xml -v --requirements --group-by command --group-by version ${stub:+"$stub"}
```
fails with
```
ERROR: 'CoverageData' object has no attribute 'read_file'
```
I can see that `ansible-test coverage` is finding the files, such as: `ansible_collections/gundalow_collection/grafana/tests/output/coverage/integration=grafana_datasource=local-3.7=python-3.7=coverage.jobarker.remote.csb.18264.626864`
This may be an issue in my setup, or in ansible-test.
Steps to reproduce https://github.com/ansible-collections/grafana/pull/20
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
```paste below
2.9
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
##### EXPECTED RESULTS
##### ACTUAL RESULTS
```paste below
ERROR: 'CoverageData' object has no attribute 'read_file'
```
|
https://github.com/ansible/ansible/issues/65907
|
https://github.com/ansible/ansible/pull/65999
|
2b6d94c4c85a4fffeb592f058dcdd37bb05893cc
|
9ea5b539b60cb7035f08ac17688976a8e6dfb126
| 2019-12-17T11:25:07Z |
python
| 2019-12-20T19:55:54Z |
test/lib/ansible_test/_data/requirements/constraints.txt
|
coverage >= 4.2, != 4.3.2 ; python_version <= '3.7' # features in 4.2+ required, avoid known bug in 4.3.2 on python 2.6
coverage >= 4.5.4 ; python_version > '3.7' # coverage had a bug in < 4.5.4 that would cause unit tests to hang in Python 3.8
cryptography < 2.2 ; python_version < '2.7' # cryptography 2.2 drops support for python 2.6
deepdiff < 4.0.0 ; python_version < '3' # deepdiff 4.0.0 and later require python 3
urllib3 < 1.24 ; python_version < '2.7' # urllib3 1.24 and later require python 2.7 or later
pywinrm >= 0.3.0 # message encryption support
sphinx < 1.6 ; python_version < '2.7' # sphinx 1.6 and later require python 2.7 or later
sphinx < 1.8 ; python_version >= '2.7' # sphinx 1.8 and later are currently incompatible with rstcheck 3.3
pygments >= 2.4.0 # Pygments 2.4.0 includes bugfixes for YAML and YAML+Jinja lexers
wheel < 0.30.0 ; python_version < '2.7' # wheel 0.30.0 and later require python 2.7 or later
yamllint != 1.8.0, < 1.14.0 ; python_version < '2.7' # yamllint 1.8.0 and 1.14.0+ require python 2.7+
pycrypto >= 2.6 # Need features found in 2.6 and greater
ncclient >= 0.5.2 # Need features added in 0.5.2 and greater
idna < 2.6, >= 2.5 # linode requires idna < 2.9, >= 2.5, requests requires idna < 2.6, but cryptography will cause the latest version to be installed instead
paramiko < 2.4.0 ; python_version < '2.7' # paramiko 2.4.0 drops support for python 2.6
paramiko < 2.5.0 ; python_version >= '2.7' # paramiko 2.5.0 requires cryptography 2.5.0+
pytest < 3.3.0 ; python_version < '2.7' # pytest 3.3.0 drops support for python 2.6
pytest < 5.0.0 ; python_version == '2.7' # pytest 5.0.0 and later will no longer support python 2.7
pytest-forked < 1.0.2 ; python_version < '2.7' # pytest-forked 1.0.2 and later require python 2.7 or later
pytest-forked >= 1.0.2 ; python_version >= '2.7' # pytest-forked before 1.0.2 does not work with pytest 4.2.0+ (which requires python 2.7+)
ntlm-auth >= 1.3.0 # message encryption support using cryptography
requests < 2.20.0 ; python_version < '2.7' # requests 2.20.0 drops support for python 2.6
requests-ntlm >= 1.1.0 # message encryption support
requests-credssp >= 0.1.0 # message encryption support
voluptuous >= 0.11.0 # Schema recursion via Self
openshift >= 0.6.2, < 0.9.0 # merge_type support
virtualenv < 16.0.0 ; python_version < '2.7' # virtualenv 16.0.0 and later require python 2.7 or later
pathspec < 0.6.0 ; python_version < '2.7' # pathspec 0.6.0 and later require python 2.7 or later
pyopenssl < 18.0.0 ; python_version < '2.7' # pyOpenSSL 18.0.0 and later require python 2.7 or later
pyfmg == 0.6.1 # newer versions do not pass current unit tests
pyyaml < 5.1 ; python_version < '2.7' # pyyaml 5.1 and later require python 2.7 or later
pycparser < 2.19 ; python_version < '2.7' # pycparser 2.19 and later require python 2.7 or later
mock >= 2.0.0 # needed for features backported from Python 3.6 unittest.mock (assert_called, assert_called_once...)
pytest-mock >= 1.4.0 # needed for mock_use_standalone_module pytest option
xmltodict < 0.12.0 ; python_version < '2.7' # xmltodict 0.12.0 and later require python 2.7 or later
lxml < 4.3.0 ; python_version < '2.7' # lxml 4.3.0 and later require python 2.7 or later
pyvmomi < 6.0.0 ; python_version < '2.7' # pyvmomi 6.0.0 and later require python 2.7 or later
pyone == 1.1.9 # newer versions do not pass current integration tests
botocore >= 1.10.0 # adds support for the following AWS services: secretsmanager, fms, and acm-pca
# freeze pylint and its requirements for consistent test results
astroid == 2.2.5
isort == 4.3.15
lazy-object-proxy == 1.3.1
mccabe == 0.6.1
pylint == 2.3.1
typed-ast == 1.4.0 # 1.4.0 is required to compile on Python 3.8
wrapt == 1.11.1
semantic_version == 2.6.0 # newer versions are not supported on Python 2.6
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,770 |
nmcli not working on Ubuntu: `No package matching 'libnm-glib-dev' is available`
|
##### SUMMARY
The [documentation](https://docs.ansible.com/ansible/latest/modules/nmcli_module.html) lists some requirements to be installed:
- network-manager
- python3-dbus
- libnm-glib-dev
However, there is no package `libnm-glib-dev` available in Ubuntu, and it is not clear how to install the required libraries.
```bash
fatal: [node001]: FAILED! => {"changed": false, "msg": "No package matching 'libnm-glib-dev' is available"}
```
Removing installation of `libnm-glib-dev` will lead to a subsequent error
```bash
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ValueError: Namespace NMClient not available
fatal: [node001]: FAILED! => {"changed": false, "msg": "Failed to import the required Python library (NetworkManager glib API) on node001's Python /usr/bin/python3. Please read module documentation and install in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter"}
```
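For context, `Namespace NMClient not available` matches distributions that ship the modern libnm introspection data (`NM`) instead of the deprecated libnm-glib (`NMClient`) that the module imports. A hedged sketch of probing both namespaces (illustrative, not the module's actual code):
```python
import gi

try:
    gi.require_version('NM', '1.0')  # modern libnm
    from gi.repository import NM
    HAVE_NM = True
except (ImportError, ValueError):
    try:
        gi.require_version('NMClient', '1.0')  # deprecated libnm-glib
        from gi.repository import NMClient
        HAVE_NM = True
    except (ImportError, ValueError):
        HAVE_NM = False
```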
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
component nmcli
##### ANSIBLE VERSION
```
ansible 2.9.0
config file = /home/aedu/ansible.cfg
configured module search path = ['/home/aedu/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.4 (default, Oct 4 2019, 06:57:26) [GCC 9.2.0]
```
##### CONFIGURATION
```
DEFAULT_HOST_LIST(/home/aedu/ansible.cfg) = ['/home/aedu/inventory.yml']
DEFAULT_REMOTE_USER(/home/aedu/ansible.cfg) = ansible
DEFAULT_ROLES_PATH(/home/aedu/ansible.cfg) = ['/home/aedu/roles']
DEFAULT_TIMEOUT(/home/aedu/ansible.cfg) = 10
DEFAULT_VAULT_PASSWORD_FILE(/home/aedu/ansible.cfg) = /home/aedu/.ssh/wyssmann
RETRY_FILES_SAVE_PATH(/home/aedu/ansible.cfg) = /home/aedu/.ansible-retry
```
##### OS / ENVIRONMENT
```bash
# uname -a
Linux node001 5.3.0-19-generic #20-Ubuntu SMP Fri Oct 18 09:04:39 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
# cat /etc/os-release
NAME="Ubuntu"
VERSION="19.10 (Eoan Ermine)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 19.10"
VERSION_ID="19.10"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=eoan
UBUNTU_CODENAME=eoan
```
##### STEPS TO REPRODUCE
Since Python 3 is in use, the package names are slightly different, so I've updated them:
```yaml
- name: install needed network manager libs
package:
name:
- network-manager
- python3-dbus
- libnm-glib-dev
state: present
- name: Configure {{ iface.name }}
nmcli:
conn_name: '{{ iface.name }}'
ifname: '{{ iface.name }}'
type: ethernet
ip4: '{{ ipv4.address }}'
gw4: '{{ ipv4.gateway }}'
ip6: '{{ ipv6.address }}'
gw6: '{{ ipv6.gateway }}'
state: present
```
##### EXPECTED RESULTS
nmcli works fine
##### ACTUAL RESULTS
```bash
fatal: [node001]: FAILED! => {"changed": false, "msg": "No package matching 'libnm-glib-dev' is available"}
```
|
https://github.com/ansible/ansible/issues/65770
|
https://github.com/ansible/ansible/pull/65726
|
7ee3103a86f0179faead92d2481ff175d9a69747
|
663171e21820f1696b93c3f182626b0fd006b61b
| 2019-12-12T16:43:52Z |
python
| 2019-12-21T05:58:58Z |
lib/ansible/modules/net_tools/nmcli.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2015, Chris Long <[email protected]> <[email protected]>
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = r'''
---
module: nmcli
author:
- Chris Long (@alcamie101)
short_description: Manage Networking
requirements:
- dbus
- NetworkManager-glib
- nmcli
version_added: "2.0"
description:
- Manage network devices. Create, modify and manage various connection and device types, e.g. ethernet, teams, bonds, vlans, etc.
- 'On CentOS and Fedora like systems, the requirements can be met by installing the following packages: NetworkManager-glib,
libnm-qt-devel.x86_64, nm-connection-editor.x86_64, libsemanage-python, policycoreutils-python.'
- 'On Ubuntu and Debian like systems, the requirements can be met by installing the following packages: network-manager,
python-dbus (or python3-dbus, depending on the Python version in use), libnm-glib-dev.'
- 'On openSUSE, the requirements can be met by installing the following packages: NetworkManager, python2-dbus-python (or
python3-dbus-python), typelib-1_0-NMClient-1_0 and typelib-1_0-NetworkManager-1_0.'
options:
state:
description:
- Whether the device should exist or not, taking action if the state is different from what is stated.
type: str
required: true
choices: [ absent, present ]
autoconnect:
description:
- Whether the connection should start on boot.
- Whether the connection profile can be automatically activated
type: bool
default: yes
conn_name:
description:
- The name used to call the connection. Pattern is <type>[-<ifname>][-<num>].
type: str
required: true
ifname:
description:
- The interface to bind the connection to.
- The connection will only be applicable to this interface name.
- A special value of C('*') can be used for interface-independent connections.
- The ifname argument is mandatory for all connection types except bond, team, bridge and vlan.
- This parameter defaults to C(conn_name) when left unset.
type: str
type:
description:
- This is the type of device or network connection that you wish to create or modify.
- Type C(generic) is added in Ansible 2.5.
type: str
choices: [ bond, bond-slave, bridge, bridge-slave, ethernet, generic, ipip, sit, team, team-slave, vlan, vxlan ]
mode:
description:
- The mode to use when creating a bond, team or bridge connection.
type: str
choices: [ 802.3ad, active-backup, balance-alb, balance-rr, balance-tlb, balance-xor, broadcast ]
default: balance-rr
master:
description:
- Master <master (ifname, or connection UUID or conn_name)> of the bridge, team or bond master connection profile.
type: str
ip4:
description:
- The IPv4 address to this interface.
- Use the format C(192.0.2.24/24).
type: str
gw4:
description:
- The IPv4 gateway for this interface.
- Use the format C(192.0.2.1).
type: str
dns4:
description:
- A list of up to 3 dns servers.
- IPv4 format e.g. to add two IPv4 DNS server addresses, use C(192.0.2.53 198.51.100.53).
type: list
dns4_search:
description:
- A list of DNS search domains.
type: list
version_added: '2.5'
ip6:
description:
- The IPv6 address to this interface.
- Use the format C(abbe::cafe).
type: str
gw6:
description:
- The IPv6 gateway for this interface.
- Use the format C(2001:db8::1).
type: str
dns6:
description:
- A list of up to 3 dns servers.
- IPv6 format e.g. to add two IPv6 DNS server addresses, use C(2001:4860:4860::8888 2001:4860:4860::8844).
type: list
dns6_search:
description:
- A list of DNS search domains.
type: list
version_added: '2.5'
mtu:
description:
- The connection MTU, e.g. 9000. This can't be applied when creating the interface and is done once the interface has been created.
- Can be used when modifying Team, VLAN, Ethernet (Future plans to implement wifi, pppoe, infiniband)
- This parameter defaults to C(1500) when unset.
type: int
dhcp_client_id:
description:
- DHCP Client Identifier sent to the DHCP server.
type: str
version_added: "2.5"
primary:
description:
- This is only used with bond and is the primary interface name (for "active-backup" mode); this is usually the 'ifname'.
type: str
miimon:
description:
- This is only used with bond - miimon.
- This parameter defaults to C(100) when unset.
type: int
downdelay:
description:
- This is only used with bond - downdelay.
type: int
updelay:
description:
- This is only used with bond - updelay.
type: int
arp_interval:
description:
- This is only used with bond - ARP interval.
type: int
arp_ip_target:
description:
- This is only used with bond - ARP IP target.
type: str
stp:
description:
- This is only used with bridge and controls whether Spanning Tree Protocol (STP) is enabled for this bridge.
type: bool
default: yes
priority:
description:
- This is only used with 'bridge' - sets STP priority.
type: int
default: 128
forwarddelay:
description:
- This is only used with bridge - [forward-delay <2-30>] STP forwarding delay, in seconds.
type: int
default: 15
hellotime:
description:
- This is only used with bridge - [hello-time <1-10>] STP hello time, in seconds.
type: int
default: 2
maxage:
description:
- This is only used with bridge - [max-age <6-42>] STP maximum message age, in seconds.
type: int
default: 20
ageingtime:
description:
- This is only used with bridge - [ageing-time <0-1000000>] the Ethernet MAC address aging time, in seconds.
type: int
default: 300
mac:
description:
- This is only used with bridge - MAC address of the bridge.
- Note this requires a recent kernel feature, originally introduced in the 3.15 upstream kernel.
slavepriority:
description:
- This is only used with 'bridge-slave' - [<0-63>] - STP priority of this slave.
type: int
default: 32
path_cost:
description:
- This is only used with 'bridge-slave' - [<1-65535>] - STP port cost for destinations via this slave.
type: int
default: 100
hairpin:
description:
- This is only used with 'bridge-slave' - 'hairpin mode' for the slave, which allows frames to be sent back out through the slave the
frame was received on.
type: bool
default: yes
vlanid:
description:
- This is only used with VLAN - VLAN ID in range <0-4095>.
type: int
vlandev:
description:
- This is only used with VLAN - parent device this VLAN is on, can use ifname.
type: str
flags:
description:
- This is only used with VLAN - flags.
type: str
ingress:
description:
- This is only used with VLAN - VLAN ingress priority mapping.
type: str
egress:
description:
- This is only used with VLAN - VLAN egress priority mapping.
type: str
vxlan_id:
description:
- This is only used with VXLAN - VXLAN ID.
type: int
version_added: "2.8"
vxlan_remote:
description:
- This is only used with VXLAN - VXLAN destination IP address.
type: str
version_added: "2.8"
vxlan_local:
description:
- This is only used with VXLAN - VXLAN local IP address.
type: str
version_added: "2.8"
ip_tunnel_dev:
description:
- This is used with IPIP/SIT - parent device this IPIP/SIT tunnel, can use ifname.
type: str
version_added: "2.8"
ip_tunnel_remote:
description:
- This is used with IPIP/SIT - IPIP/SIT destination IP address.
type: str
version_added: "2.8"
ip_tunnel_local:
description:
- This is used with IPIP/SIT - IPIP/SIT local IP address.
type: str
version_added: "2.8"
'''
EXAMPLES = r'''
# These examples are using the following inventory:
#
# ## Directory layout:
#
# |_/inventory/cloud-hosts
# | /group_vars/openstack-stage.yml
# | /host_vars/controller-01.openstack.host.com
# | /host_vars/controller-02.openstack.host.com
# |_/playbook/library/nmcli.py
# | /playbook-add.yml
# | /playbook-del.yml
# ```
#
# ## inventory examples
# ### groups_vars
# ```yml
# ---
# #devops_os_define_network
# storage_gw: "192.0.2.254"
# external_gw: "198.51.100.254"
# tenant_gw: "203.0.113.254"
#
# #Team vars
# nmcli_team:
# - conn_name: tenant
# ip4: '{{ tenant_ip }}'
# gw4: '{{ tenant_gw }}'
# - conn_name: external
# ip4: '{{ external_ip }}'
# gw4: '{{ external_gw }}'
# - conn_name: storage
# ip4: '{{ storage_ip }}'
# gw4: '{{ storage_gw }}'
# nmcli_team_slave:
# - conn_name: em1
# ifname: em1
# master: tenant
# - conn_name: em2
# ifname: em2
# master: tenant
# - conn_name: p2p1
# ifname: p2p1
# master: storage
# - conn_name: p2p2
# ifname: p2p2
# master: external
#
# #bond vars
# nmcli_bond:
# - conn_name: tenant
# ip4: '{{ tenant_ip }}'
# gw4: ''
# mode: balance-rr
# - conn_name: external
# ip4: '{{ external_ip }}'
# gw4: ''
# mode: balance-rr
# - conn_name: storage
# ip4: '{{ storage_ip }}'
# gw4: '{{ storage_gw }}'
# mode: balance-rr
# nmcli_bond_slave:
# - conn_name: em1
# ifname: em1
# master: tenant
# - conn_name: em2
# ifname: em2
# master: tenant
# - conn_name: p2p1
# ifname: p2p1
# master: storage
# - conn_name: p2p2
# ifname: p2p2
# master: external
#
# #ethernet vars
# nmcli_ethernet:
# - conn_name: em1
# ifname: em1
# ip4: '{{ tenant_ip }}'
# gw4: '{{ tenant_gw }}'
# - conn_name: em2
# ifname: em2
# ip4: '{{ tenant_ip1 }}'
# gw4: '{{ tenant_gw }}'
# - conn_name: p2p1
# ifname: p2p1
# ip4: '{{ storage_ip }}'
# gw4: '{{ storage_gw }}'
# - conn_name: p2p2
# ifname: p2p2
# ip4: '{{ external_ip }}'
# gw4: '{{ external_gw }}'
# ```
#
# ### host_vars
# ```yml
# ---
# storage_ip: "192.0.2.91/23"
# external_ip: "198.51.100.23/21"
# tenant_ip: "203.0.113.77/23"
# ```
## playbook-add.yml example
---
- hosts: openstack-stage
remote_user: root
tasks:
- name: install needed network manager libs
package:
name:
- NetworkManager-glib
- nm-connection-editor
- libsemanage-python
- policycoreutils-python
state: present
##### Working with all cloud nodes - Teaming
- name: Try nmcli add team - conn_name only & ip4 gw4
nmcli:
type: team
conn_name: '{{ item.conn_name }}'
ip4: '{{ item.ip4 }}'
gw4: '{{ item.gw4 }}'
state: present
with_items:
- '{{ nmcli_team }}'
- name: Try nmcli add teams-slave
nmcli:
type: team-slave
conn_name: '{{ item.conn_name }}'
ifname: '{{ item.ifname }}'
master: '{{ item.master }}'
state: present
with_items:
- '{{ nmcli_team_slave }}'
###### Working with all cloud nodes - Bonding
- name: Try nmcli add bond - conn_name only & ip4 gw4 mode
nmcli:
type: bond
conn_name: '{{ item.conn_name }}'
ip4: '{{ item.ip4 }}'
gw4: '{{ item.gw4 }}'
mode: '{{ item.mode }}'
state: present
with_items:
- '{{ nmcli_bond }}'
- name: Try nmcli add bond-slave
nmcli:
type: bond-slave
conn_name: '{{ item.conn_name }}'
ifname: '{{ item.ifname }}'
master: '{{ item.master }}'
state: present
with_items:
- '{{ nmcli_bond_slave }}'
##### Working with all cloud nodes - Ethernet
- name: Try nmcli add Ethernet - conn_name only & ip4 gw4
nmcli:
type: ethernet
conn_name: '{{ item.conn_name }}'
ip4: '{{ item.ip4 }}'
gw4: '{{ item.gw4 }}'
state: present
with_items:
- '{{ nmcli_ethernet }}'
## playbook-del.yml example
- hosts: openstack-stage
remote_user: root
tasks:
- name: Try nmcli del team - multiple
nmcli:
conn_name: '{{ item.conn_name }}'
state: absent
with_items:
- conn_name: em1
- conn_name: em2
- conn_name: p1p1
- conn_name: p1p2
- conn_name: p2p1
- conn_name: p2p2
- conn_name: tenant
- conn_name: storage
- conn_name: external
- conn_name: team-em1
- conn_name: team-em2
- conn_name: team-p1p1
- conn_name: team-p1p2
- conn_name: team-p2p1
- conn_name: team-p2p2
- name: Add an Ethernet connection with static IP configuration
nmcli:
conn_name: my-eth1
ifname: eth1
type: ethernet
ip4: 192.0.2.100/24
gw4: 192.0.2.1
state: present
- name: Add an Team connection with static IP configuration
nmcli:
conn_name: my-team1
ifname: my-team1
type: team
ip4: 192.0.2.100/24
gw4: 192.0.2.1
state: present
autoconnect: yes
- name: Optionally, at the same time specify IPv6 addresses for the device
nmcli:
conn_name: my-eth1
ifname: eth1
type: ethernet
ip4: 192.0.2.100/24
gw4: 192.0.2.1
ip6: 2001:db8::cafe
gw6: 2001:db8::1
state: present
- name: Add two IPv4 DNS server addresses
nmcli:
conn_name: my-eth1
type: ethernet
dns4:
- 192.0.2.53
- 198.51.100.53
state: present
- name: Make a profile usable for all compatible Ethernet interfaces
nmcli:
type: ethernet
conn_name: my-eth1
ifname: '*'
state: present
- name: Change the property of a setting e.g. MTU
nmcli:
conn_name: my-eth1
mtu: 9000
type: ethernet
state: present
- name: Add VxLan
nmcli:
type: vxlan
conn_name: vxlan_test1
vxlan_id: 16
vxlan_local: 192.168.1.2
vxlan_remote: 192.168.1.5
- name: Add ipip
nmcli:
type: ipip
conn_name: ipip_test1
ip_tunnel_dev: eth0
ip_tunnel_local: 192.168.1.2
ip_tunnel_remote: 192.168.1.5
- name: Add sit
nmcli:
type: sit
conn_name: sit_test1
ip_tunnel_dev: eth0
ip_tunnel_local: 192.168.1.2
ip_tunnel_remote: 192.168.1.5
# nmcli exits with status 0 if it succeeds and exits with a status greater
# than zero when there is a failure. The following list of status codes may be
# returned:
#
# - 0 Success - indicates the operation succeeded
# - 1 Unknown or unspecified error
# - 2 Invalid user input, wrong nmcli invocation
# - 3 Timeout expired (see --wait option)
# - 4 Connection activation failed
# - 5 Connection deactivation failed
# - 6 Disconnecting device failed
# - 7 Connection deletion failed
# - 8 NetworkManager is not running
# - 9 nmcli and NetworkManager versions mismatch
# - 10 Connection, device, or access point does not exist.
'''
RETURN = r"""#
"""
import traceback
DBUS_IMP_ERR = None
try:
import dbus
HAVE_DBUS = True
except ImportError:
DBUS_IMP_ERR = traceback.format_exc()
HAVE_DBUS = False
NM_CLIENT_IMP_ERR = None
try:
import gi
gi.require_version('NMClient', '1.0')
gi.require_version('NetworkManager', '1.0')
from gi.repository import NetworkManager, NMClient
HAVE_NM_CLIENT = True
except (ImportError, ValueError):
NM_CLIENT_IMP_ERR = traceback.format_exc()
HAVE_NM_CLIENT = False
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
class Nmcli(object):
"""
This is the generic nmcli manipulation class that is subclassed based on platform.
A subclass may wish to override the following action methods:-
- create_connection()
- delete_connection()
- modify_connection()
- show_connection()
- up_connection()
- down_connection()
All subclasses MUST define platform and distribution (which may be None).
"""
platform = 'Generic'
distribution = None
if HAVE_DBUS:
bus = dbus.SystemBus()
# The following is going to be used in dbus code
DEVTYPES = {
1: "Ethernet",
2: "Wi-Fi",
5: "Bluetooth",
6: "OLPC",
7: "WiMAX",
8: "Modem",
9: "InfiniBand",
10: "Bond",
11: "VLAN",
12: "ADSL",
13: "Bridge",
14: "Generic",
15: "Team",
16: "VxLan",
17: "ipip",
18: "sit",
}
STATES = {
0: "Unknown",
10: "Unmanaged",
20: "Unavailable",
30: "Disconnected",
40: "Prepare",
50: "Config",
60: "Need Auth",
70: "IP Config",
80: "IP Check",
90: "Secondaries",
100: "Activated",
110: "Deactivating",
120: "Failed"
}
def __init__(self, module):
self.module = module
self.state = module.params['state']
self.autoconnect = module.params['autoconnect']
self.conn_name = module.params['conn_name']
self.master = module.params['master']
self.ifname = module.params['ifname']
self.type = module.params['type']
self.ip4 = module.params['ip4']
self.gw4 = module.params['gw4']
self.dns4 = ' '.join(module.params['dns4']) if module.params.get('dns4') else None
self.dns4_search = ' '.join(module.params['dns4_search']) if module.params.get('dns4_search') else None
self.ip6 = module.params['ip6']
self.gw6 = module.params['gw6']
self.dns6 = ' '.join(module.params['dns6']) if module.params.get('dns6') else None
self.dns6_search = ' '.join(module.params['dns6_search']) if module.params.get('dns6_search') else None
self.mtu = module.params['mtu']
self.stp = module.params['stp']
self.priority = module.params['priority']
self.mode = module.params['mode']
self.miimon = module.params['miimon']
self.primary = module.params['primary']
self.downdelay = module.params['downdelay']
self.updelay = module.params['updelay']
self.arp_interval = module.params['arp_interval']
self.arp_ip_target = module.params['arp_ip_target']
self.slavepriority = module.params['slavepriority']
self.forwarddelay = module.params['forwarddelay']
self.hellotime = module.params['hellotime']
self.maxage = module.params['maxage']
self.ageingtime = module.params['ageingtime']
self.hairpin = module.params['hairpin']
self.path_cost = module.params['path_cost']
self.mac = module.params['mac']
self.vlanid = module.params['vlanid']
self.vlandev = module.params['vlandev']
self.flags = module.params['flags']
self.ingress = module.params['ingress']
self.egress = module.params['egress']
self.vxlan_id = module.params['vxlan_id']
self.vxlan_local = module.params['vxlan_local']
self.vxlan_remote = module.params['vxlan_remote']
self.ip_tunnel_dev = module.params['ip_tunnel_dev']
self.ip_tunnel_local = module.params['ip_tunnel_local']
self.ip_tunnel_remote = module.params['ip_tunnel_remote']
self.nmcli_bin = self.module.get_bin_path('nmcli', True)
self.dhcp_client_id = module.params['dhcp_client_id']
def execute_command(self, cmd, use_unsafe_shell=False, data=None):
return self.module.run_command(cmd, use_unsafe_shell=use_unsafe_shell, data=data)
def merge_secrets(self, proxy, config, setting_name):
try:
# returns a dict of dicts mapping name::setting, where setting is a dict
# mapping key::value. Each member of the 'setting' dict is a secret
secrets = proxy.GetSecrets(setting_name)
# Copy the secrets into our connection config
for setting in secrets:
for key in secrets[setting]:
config[setting_name][key] = secrets[setting][key]
except Exception:
pass
def dict_to_string(self, d):
# Try to trivially translate a dictionary's elements into nice string
# formatting.
dstr = ""
for key in d:
val = d[key]
str_val = ""
add_string = True
if isinstance(val, dbus.Array):
for elt in val:
if isinstance(elt, dbus.Byte):
str_val += "%s " % int(elt)
elif isinstance(elt, dbus.String):
str_val += "%s" % elt
elif isinstance(val, dbus.Dictionary):
dstr += self.dict_to_string(val)
add_string = False
else:
str_val = val
if add_string:
dstr += "%s: %s\n" % (key, str_val)
return dstr
def connection_to_string(self, config):
# dump a connection configuration to use in list_connection_info
setting_list = []
for setting_name in config:
setting_list.append(self.dict_to_string(config[setting_name]))
return setting_list
@staticmethod
def bool_to_string(boolean):
if boolean:
return "yes"
else:
return "no"
def list_connection_info(self):
# Ask the settings service for the list of connections it provides
bus = dbus.SystemBus()
service_name = "org.freedesktop.NetworkManager"
settings = None
try:
proxy = bus.get_object(service_name, "/org/freedesktop/NetworkManager/Settings")
settings = dbus.Interface(proxy, "org.freedesktop.NetworkManager.Settings")
except dbus.exceptions.DBusException as e:
self.module.fail_json(msg="Unable to read Network Manager settings from DBus system bus: %s" % to_native(e),
details="Please check if NetworkManager is installed and"
"service network-manager is started.")
connection_paths = settings.ListConnections()
connection_list = []
# List each connection's name, UUID, and type
for path in connection_paths:
con_proxy = bus.get_object(service_name, path)
settings_connection = dbus.Interface(con_proxy, "org.freedesktop.NetworkManager.Settings.Connection")
config = settings_connection.GetSettings()
# Now get secrets too; we grab the secrets for each type of connection
# (since there isn't a "get all secrets" call because most of the time
# you only need 'wifi' secrets or '802.1x' secrets, not everything) and
# merge that into the configuration data - To use at a later stage
self.merge_secrets(settings_connection, config, '802-11-wireless')
self.merge_secrets(settings_connection, config, '802-11-wireless-security')
self.merge_secrets(settings_connection, config, '802-1x')
self.merge_secrets(settings_connection, config, 'gsm')
self.merge_secrets(settings_connection, config, 'cdma')
self.merge_secrets(settings_connection, config, 'ppp')
# Get the details of the 'connection' setting
s_con = config['connection']
connection_list.append(s_con['id'])
connection_list.append(s_con['uuid'])
connection_list.append(s_con['type'])
connection_list.append(self.connection_to_string(config))
return connection_list
def connection_exists(self):
# we are going to use name and type in this instance to find if that connection exists and is of type x
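        # Note: list_connection_info() returns a flat list interleaving each
        # connection's id, uuid, type and rendered settings, so conn_name is
        # compared against every one of those items in turn.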
connections = self.list_connection_info()
for con_item in connections:
if self.conn_name == con_item:
return True
def down_connection(self):
cmd = [self.nmcli_bin, 'con', 'down', self.conn_name]
return self.execute_command(cmd)
def up_connection(self):
cmd = [self.nmcli_bin, 'con', 'up', self.conn_name]
return self.execute_command(cmd)
def create_connection_team(self):
cmd = [self.nmcli_bin, 'con', 'add', 'type', 'team', 'con-name']
# format for creating team interface
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
cmd.append('ifname')
if self.ifname is not None:
cmd.append(self.ifname)
elif self.conn_name is not None:
cmd.append(self.conn_name)
options = {
'ip4': self.ip4,
'gw4': self.gw4,
'ip6': self.ip6,
'gw6': self.gw6,
'autoconnect': self.bool_to_string(self.autoconnect),
'ipv4.dns-search': self.dns4_search,
'ipv6.dns-search': self.dns6_search,
'ipv4.dhcp-client-id': self.dhcp_client_id,
}
for key, value in options.items():
if value is not None:
cmd.extend([key, value])
return cmd
def modify_connection_team(self):
cmd = [self.nmcli_bin, 'con', 'mod', self.conn_name]
options = {
'ipv4.address': self.ip4,
'ipv4.gateway': self.gw4,
'ipv4.dns': self.dns4,
'ipv6.address': self.ip6,
'ipv6.gateway': self.gw6,
'ipv6.dns': self.dns6,
'autoconnect': self.bool_to_string(self.autoconnect),
'ipv4.dns-search': self.dns4_search,
'ipv6.dns-search': self.dns6_search,
'ipv4.dhcp-client-id': self.dhcp_client_id,
}
for key, value in options.items():
if value is not None:
cmd.extend([key, value])
return cmd
def create_connection_team_slave(self):
cmd = [self.nmcli_bin, 'connection', 'add', 'type', self.type, 'con-name']
# format for creating team-slave interface
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
cmd.append('ifname')
if self.ifname is not None:
cmd.append(self.ifname)
elif self.conn_name is not None:
cmd.append(self.conn_name)
cmd.append('master')
        if self.master is not None:
            cmd.append(self.master)
return cmd
def modify_connection_team_slave(self):
cmd = [self.nmcli_bin, 'con', 'mod', self.conn_name, 'connection.master', self.master]
# format for modifying team-slave interface
if self.mtu is not None:
cmd.append('802-3-ethernet.mtu')
cmd.append(self.mtu)
return cmd
def create_connection_bond(self):
cmd = [self.nmcli_bin, 'con', 'add', 'type', 'bond', 'con-name']
# format for creating bond interface
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
cmd.append('ifname')
if self.ifname is not None:
cmd.append(self.ifname)
elif self.conn_name is not None:
cmd.append(self.conn_name)
options = {
'mode': self.mode,
'ip4': self.ip4,
'gw4': self.gw4,
'ip6': self.ip6,
'gw6': self.gw6,
'autoconnect': self.bool_to_string(self.autoconnect),
'ipv4.dns-search': self.dns4_search,
'ipv6.dns-search': self.dns6_search,
'miimon': self.miimon,
'downdelay': self.downdelay,
'updelay': self.updelay,
'arp-interval': self.arp_interval,
'arp-ip-target': self.arp_ip_target,
'primary': self.primary,
'ipv4.dhcp-client-id': self.dhcp_client_id,
}
for key, value in options.items():
if value is not None:
cmd.extend([key, value])
return cmd
def modify_connection_bond(self):
cmd = [self.nmcli_bin, 'con', 'mod', self.conn_name]
# format for modifying bond interface
options = {
'ipv4.address': self.ip4,
'ipv4.gateway': self.gw4,
'ipv4.dns': self.dns4,
'ipv6.address': self.ip6,
'ipv6.gateway': self.gw6,
'ipv6.dns': self.dns6,
'autoconnect': self.bool_to_string(self.autoconnect),
'ipv4.dns-search': self.dns4_search,
'ipv6.dns-search': self.dns6_search,
'miimon': self.miimon,
'downdelay': self.downdelay,
'updelay': self.updelay,
'arp-interval': self.arp_interval,
'arp-ip-target': self.arp_ip_target,
'ipv4.dhcp-client-id': self.dhcp_client_id,
}
for key, value in options.items():
if value is not None:
cmd.extend([key, value])
return cmd
def create_connection_bond_slave(self):
cmd = [self.nmcli_bin, 'connection', 'add', 'type', 'bond-slave', 'con-name']
# format for creating bond-slave interface
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
cmd.append('ifname')
if self.ifname is not None:
cmd.append(self.ifname)
elif self.conn_name is not None:
cmd.append(self.conn_name)
cmd.append('master')
        if self.master is not None:
            cmd.append(self.master)
return cmd
def modify_connection_bond_slave(self):
cmd = [self.nmcli_bin, 'con', 'mod', self.conn_name, 'connection.master', self.master]
# format for modifying bond-slave interface
return cmd
def create_connection_ethernet(self, conn_type='ethernet'):
# format for creating ethernet interface
# To add an Ethernet connection with static IP configuration, issue a command as follows
# - nmcli: name=add conn_name=my-eth1 ifname=eth1 type=ethernet ip4=192.0.2.100/24 gw4=192.0.2.1 state=present
# nmcli con add con-name my-eth1 ifname eth1 type ethernet ip4 192.0.2.100/24 gw4 192.0.2.1
cmd = [self.nmcli_bin, 'con', 'add', 'type']
if conn_type == 'ethernet':
cmd.append('ethernet')
elif conn_type == 'generic':
cmd.append('generic')
cmd.append('con-name')
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
cmd.append('ifname')
if self.ifname is not None:
cmd.append(self.ifname)
elif self.conn_name is not None:
cmd.append(self.conn_name)
options = {
'ip4': self.ip4,
'gw4': self.gw4,
'ip6': self.ip6,
'gw6': self.gw6,
'autoconnect': self.bool_to_string(self.autoconnect),
'ipv4.dns-search': self.dns4_search,
'ipv6.dns-search': self.dns6_search,
'ipv4.dhcp-client-id': self.dhcp_client_id,
}
for key, value in options.items():
if value is not None:
cmd.extend([key, value])
return cmd
def modify_connection_ethernet(self, conn_type='ethernet'):
cmd = [self.nmcli_bin, 'con', 'mod', self.conn_name]
# format for modifying ethernet interface
# To modify an Ethernet connection with static IP configuration, issue a command as follows
# - nmcli: conn_name=my-eth1 ifname=eth1 type=ethernet ip4=192.0.2.100/24 gw4=192.0.2.1 state=present
# nmcli con mod con-name my-eth1 ifname eth1 type ethernet ipv4.address 192.0.2.100/24 ipv4.gateway 192.0.2.1
options = {
'ipv4.address': self.ip4,
'ipv4.gateway': self.gw4,
'ipv4.dns': self.dns4,
'ipv6.address': self.ip6,
'ipv6.gateway': self.gw6,
'ipv6.dns': self.dns6,
'autoconnect': self.bool_to_string(self.autoconnect),
'ipv4.dns-search': self.dns4_search,
'ipv6.dns-search': self.dns6_search,
'802-3-ethernet.mtu': self.mtu,
'ipv4.dhcp-client-id': self.dhcp_client_id,
}
for key, value in options.items():
if value is not None:
if key == '802-3-ethernet.mtu' and conn_type != 'ethernet':
continue
cmd.extend([key, value])
return cmd
def create_connection_bridge(self):
# format for creating bridge interface
        # To add a Bridge connection with static IP configuration, issue a command as follows
# - nmcli: name=add conn_name=my-eth1 ifname=eth1 type=bridge ip4=192.0.2.100/24 gw4=192.0.2.1 state=present
# nmcli con add con-name my-eth1 ifname eth1 type bridge ip4 192.0.2.100/24 gw4 192.0.2.1
cmd = [self.nmcli_bin, 'con', 'add', 'type', 'bridge', 'con-name']
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
cmd.append('ifname')
if self.ifname is not None:
cmd.append(self.ifname)
elif self.conn_name is not None:
cmd.append(self.conn_name)
options = {
'ip4': self.ip4,
'gw4': self.gw4,
'ip6': self.ip6,
'gw6': self.gw6,
'autoconnect': self.bool_to_string(self.autoconnect),
'bridge.ageing-time': self.ageingtime,
'bridge.forward-delay': self.forwarddelay,
'bridge.hello-time': self.hellotime,
'bridge.mac-address': self.mac,
'bridge.max-age': self.maxage,
'bridge.priority': self.priority,
'bridge.stp': self.bool_to_string(self.stp)
}
for key, value in options.items():
if value is not None:
cmd.extend([key, value])
return cmd
def modify_connection_bridge(self):
# format for modifying bridge interface
        # To modify a Bridge connection with static IP configuration, issue a command as follows
# - nmcli: name=mod conn_name=my-eth1 ifname=eth1 type=bridge ip4=192.0.2.100/24 gw4=192.0.2.1 state=present
# nmcli con mod my-eth1 ifname eth1 type bridge ip4 192.0.2.100/24 gw4 192.0.2.1
cmd = [self.nmcli_bin, 'con', 'mod', self.conn_name]
options = {
'ipv4.address': self.ip4,
'ipv4.gateway': self.gw4,
'ipv6.address': self.ip6,
'ipv6.gateway': self.gw6,
'autoconnect': self.bool_to_string(self.autoconnect),
'bridge.ageing-time': self.ageingtime,
'bridge.forward-delay': self.forwarddelay,
'bridge.hello-time': self.hellotime,
'bridge.mac-address': self.mac,
'bridge.max-age': self.maxage,
'bridge.priority': self.priority,
'bridge.stp': self.bool_to_string(self.stp)
}
for key, value in options.items():
if value is not None:
cmd.extend([key, value])
return cmd
def create_connection_bridge_slave(self):
# format for creating bond-slave interface
cmd = [self.nmcli_bin, 'con', 'add', 'type', 'bridge-slave', 'con-name']
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
cmd.append('ifname')
if self.ifname is not None:
cmd.append(self.ifname)
elif self.conn_name is not None:
cmd.append(self.conn_name)
options = {
'master': self.master,
'bridge-port.path-cost': self.path_cost,
'bridge-port.hairpin': self.bool_to_string(self.hairpin),
'bridge-port.priority': self.slavepriority,
}
for key, value in options.items():
if value is not None:
cmd.extend([key, value])
return cmd
def modify_connection_bridge_slave(self):
# format for modifying bond-slave interface
cmd = [self.nmcli_bin, 'con', 'mod', self.conn_name]
options = {
'master': self.master,
'bridge-port.path-cost': self.path_cost,
'bridge-port.hairpin': self.bool_to_string(self.hairpin),
'bridge-port.priority': self.slavepriority,
}
for key, value in options.items():
if value is not None:
cmd.extend([key, value])
return cmd
def create_connection_vlan(self):
cmd = [self.nmcli_bin]
cmd.append('con')
cmd.append('add')
cmd.append('type')
cmd.append('vlan')
cmd.append('con-name')
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
else:
cmd.append('vlan%s' % self.vlanid)
cmd.append('ifname')
if self.ifname is not None:
cmd.append(self.ifname)
elif self.conn_name is not None:
cmd.append(self.conn_name)
else:
cmd.append('vlan%s' % self.vlanid)
params = {'dev': self.vlandev,
'id': str(self.vlanid),
'ip4': self.ip4 or '',
'gw4': self.gw4 or '',
'ip6': self.ip6 or '',
'gw6': self.gw6 or '',
'autoconnect': self.bool_to_string(self.autoconnect)
}
for k, v in params.items():
cmd.extend([k, v])
return cmd
def modify_connection_vlan(self):
cmd = [self.nmcli_bin]
cmd.append('con')
cmd.append('mod')
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
else:
cmd.append('vlan%s' % self.vlanid)
params = {'vlan.parent': self.vlandev,
'vlan.id': str(self.vlanid),
'ipv4.address': self.ip4 or '',
'ipv4.gateway': self.gw4 or '',
'ipv4.dns': self.dns4 or '',
'ipv6.address': self.ip6 or '',
'ipv6.gateway': self.gw6 or '',
'ipv6.dns': self.dns6 or '',
'autoconnect': self.bool_to_string(self.autoconnect)
}
for k, v in params.items():
cmd.extend([k, v])
return cmd
def create_connection_vxlan(self):
cmd = [self.nmcli_bin, 'con', 'add', 'type', 'vxlan', 'con-name']
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
else:
            cmd.append('vxlan%s' % self.vxlan_id)
cmd.append('ifname')
if self.ifname is not None:
cmd.append(self.ifname)
elif self.conn_name is not None:
cmd.append(self.conn_name)
else:
            cmd.append('vxlan%s' % self.vxlan_id)
params = {'vxlan.id': self.vxlan_id,
'vxlan.local': self.vxlan_local,
'vxlan.remote': self.vxlan_remote,
'autoconnect': self.bool_to_string(self.autoconnect)
}
for k, v in params.items():
cmd.extend([k, v])
return cmd
def modify_connection_vxlan(self):
cmd = [self.nmcli_bin, 'con', 'mod']
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
else:
            cmd.append('vxlan%s' % self.vxlan_id)
params = {'vxlan.id': self.vxlan_id,
'vxlan.local': self.vxlan_local,
'vxlan.remote': self.vxlan_remote,
'autoconnect': self.bool_to_string(self.autoconnect)
}
for k, v in params.items():
cmd.extend([k, v])
return cmd
def create_connection_ipip(self):
cmd = [self.nmcli_bin, 'con', 'add', 'type', 'ip-tunnel', 'mode', 'ipip', 'con-name']
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
elif self.ip_tunnel_dev is not None:
cmd.append('ipip%s' % self.ip_tunnel_dev)
cmd.append('ifname')
if self.ifname is not None:
cmd.append(self.ifname)
elif self.conn_name is not None:
cmd.append(self.conn_name)
else:
            cmd.append('ipip%s' % self.ip_tunnel_dev)
if self.ip_tunnel_dev is not None:
cmd.append('dev')
cmd.append(self.ip_tunnel_dev)
params = {'ip-tunnel.local': self.ip_tunnel_local,
'ip-tunnel.remote': self.ip_tunnel_remote,
'autoconnect': self.bool_to_string(self.autoconnect)
}
for k, v in params.items():
cmd.extend([k, v])
return cmd
def modify_connection_ipip(self):
cmd = [self.nmcli_bin, 'con', 'mod']
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
elif self.ip_tunnel_dev is not None:
cmd.append('ipip%s' % self.ip_tunnel_dev)
params = {'ip-tunnel.local': self.ip_tunnel_local,
'ip-tunnel.remote': self.ip_tunnel_remote,
'autoconnect': self.bool_to_string(self.autoconnect)
}
for k, v in params.items():
cmd.extend([k, v])
return cmd
def create_connection_sit(self):
cmd = [self.nmcli_bin, 'con', 'add', 'type', 'ip-tunnel', 'mode', 'sit', 'con-name']
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
elif self.ip_tunnel_dev is not None:
cmd.append('sit%s' % self.ip_tunnel_dev)
cmd.append('ifname')
if self.ifname is not None:
cmd.append(self.ifname)
elif self.conn_name is not None:
cmd.append(self.conn_name)
else:
            cmd.append('sit%s' % self.ip_tunnel_dev)
if self.ip_tunnel_dev is not None:
cmd.append('dev')
cmd.append(self.ip_tunnel_dev)
params = {'ip-tunnel.local': self.ip_tunnel_local,
'ip-tunnel.remote': self.ip_tunnel_remote,
'autoconnect': self.bool_to_string(self.autoconnect)
}
for k, v in params.items():
cmd.extend([k, v])
return cmd
def modify_connection_sit(self):
cmd = [self.nmcli_bin, 'con', 'mod']
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
elif self.ip_tunnel_dev is not None:
cmd.append('sit%s' % self.ip_tunnel_dev)
params = {'ip-tunnel.local': self.ip_tunnel_local,
'ip-tunnel.remote': self.ip_tunnel_remote,
'autoconnect': self.bool_to_string(self.autoconnect)
}
for k, v in params.items():
cmd.extend([k, v])
return cmd
def create_connection(self):
cmd = []
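        # Properties such as DNS servers or the MTU can only be applied with
        # 'nmcli con mod', so for team/bond/ethernet connections the profile is
        # first added, then modified, then brought up as a second step.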
if self.type == 'team':
if (self.dns4 is not None) or (self.dns6 is not None):
cmd = self.create_connection_team()
self.execute_command(cmd)
cmd = self.modify_connection_team()
self.execute_command(cmd)
return self.up_connection()
elif (self.dns4 is None) or (self.dns6 is None):
cmd = self.create_connection_team()
elif self.type == 'team-slave':
if self.mtu is not None:
cmd = self.create_connection_team_slave()
self.execute_command(cmd)
cmd = self.modify_connection_team_slave()
return self.execute_command(cmd)
else:
cmd = self.create_connection_team_slave()
elif self.type == 'bond':
if (self.mtu is not None) or (self.dns4 is not None) or (self.dns6 is not None):
cmd = self.create_connection_bond()
self.execute_command(cmd)
cmd = self.modify_connection_bond()
self.execute_command(cmd)
return self.up_connection()
else:
cmd = self.create_connection_bond()
elif self.type == 'bond-slave':
cmd = self.create_connection_bond_slave()
elif self.type == 'ethernet':
if (self.mtu is not None) or (self.dns4 is not None) or (self.dns6 is not None):
cmd = self.create_connection_ethernet()
self.execute_command(cmd)
cmd = self.modify_connection_ethernet()
self.execute_command(cmd)
return self.up_connection()
else:
cmd = self.create_connection_ethernet()
elif self.type == 'bridge':
cmd = self.create_connection_bridge()
elif self.type == 'bridge-slave':
cmd = self.create_connection_bridge_slave()
elif self.type == 'vlan':
cmd = self.create_connection_vlan()
elif self.type == 'vxlan':
cmd = self.create_connection_vxlan()
elif self.type == 'ipip':
cmd = self.create_connection_ipip()
elif self.type == 'sit':
cmd = self.create_connection_sit()
elif self.type == 'generic':
cmd = self.create_connection_ethernet(conn_type='generic')
if cmd:
return self.execute_command(cmd)
else:
self.module.fail_json(msg="Type of device or network connection is required "
"while performing 'create' operation. Please specify 'type' as an argument.")
def remove_connection(self):
# self.down_connection()
cmd = [self.nmcli_bin, 'con', 'del', self.conn_name]
return self.execute_command(cmd)
def modify_connection(self):
cmd = []
if self.type == 'team':
cmd = self.modify_connection_team()
elif self.type == 'team-slave':
cmd = self.modify_connection_team_slave()
elif self.type == 'bond':
cmd = self.modify_connection_bond()
elif self.type == 'bond-slave':
cmd = self.modify_connection_bond_slave()
elif self.type == 'ethernet':
cmd = self.modify_connection_ethernet()
elif self.type == 'bridge':
cmd = self.modify_connection_bridge()
elif self.type == 'bridge-slave':
cmd = self.modify_connection_bridge_slave()
elif self.type == 'vlan':
cmd = self.modify_connection_vlan()
elif self.type == 'vxlan':
cmd = self.modify_connection_vxlan()
elif self.type == 'ipip':
cmd = self.modify_connection_ipip()
elif self.type == 'sit':
cmd = self.modify_connection_sit()
elif self.type == 'generic':
cmd = self.modify_connection_ethernet(conn_type='generic')
if cmd:
return self.execute_command(cmd)
else:
self.module.fail_json(msg="Type of device or network connection is required "
"while performing 'modify' operation. Please specify 'type' as an argument.")
def main():
# Parsing argument file
module = AnsibleModule(
argument_spec=dict(
autoconnect=dict(type='bool', default=True),
state=dict(type='str', required=True, choices=['absent', 'present']),
conn_name=dict(type='str', required=True),
master=dict(type='str'),
ifname=dict(type='str'),
type=dict(type='str',
choices=['bond', 'bond-slave', 'bridge', 'bridge-slave', 'ethernet', 'generic', 'ipip', 'sit', 'team', 'team-slave', 'vlan', 'vxlan']),
ip4=dict(type='str'),
gw4=dict(type='str'),
dns4=dict(type='list'),
dns4_search=dict(type='list'),
dhcp_client_id=dict(type='str'),
ip6=dict(type='str'),
gw6=dict(type='str'),
dns6=dict(type='list'),
dns6_search=dict(type='list'),
# Bond Specific vars
mode=dict(type='str', default='balance-rr',
choices=['802.3ad', 'active-backup', 'balance-alb', 'balance-rr', 'balance-tlb', 'balance-xor', 'broadcast']),
miimon=dict(type='int'),
downdelay=dict(type='int'),
updelay=dict(type='int'),
arp_interval=dict(type='int'),
arp_ip_target=dict(type='str'),
primary=dict(type='str'),
# general usage
mtu=dict(type='int'),
mac=dict(type='str'),
# bridge specific vars
stp=dict(type='bool', default=True),
priority=dict(type='int', default=128),
slavepriority=dict(type='int', default=32),
forwarddelay=dict(type='int', default=15),
hellotime=dict(type='int', default=2),
maxage=dict(type='int', default=20),
ageingtime=dict(type='int', default=300),
hairpin=dict(type='bool', default=True),
path_cost=dict(type='int', default=100),
# vlan specific vars
vlanid=dict(type='int'),
vlandev=dict(type='str'),
flags=dict(type='str'),
ingress=dict(type='str'),
egress=dict(type='str'),
# vxlan specific vars
vxlan_id=dict(type='int'),
vxlan_local=dict(type='str'),
vxlan_remote=dict(type='str'),
# ip-tunnel specific vars
ip_tunnel_dev=dict(type='str'),
ip_tunnel_local=dict(type='str'),
ip_tunnel_remote=dict(type='str'),
),
supports_check_mode=True,
)
if not HAVE_DBUS:
module.fail_json(msg=missing_required_lib('dbus'), exception=DBUS_IMP_ERR)
if not HAVE_NM_CLIENT:
module.fail_json(msg=missing_required_lib('NetworkManager glib API'), exception=NM_CLIENT_IMP_ERR)
nmcli = Nmcli(module)
(rc, out, err) = (None, '', '')
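    # rc stays None when no nmcli command is executed; this is translated into
    # changed=False at the end of main().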
result = {'conn_name': nmcli.conn_name, 'state': nmcli.state}
# check for issues
if nmcli.conn_name is None:
nmcli.module.fail_json(msg="Please specify a name for the connection")
# team-slave checks
if nmcli.type == 'team-slave' and nmcli.master is None:
nmcli.module.fail_json(msg="Please specify a name for the master")
if nmcli.type == 'team-slave' and nmcli.ifname is None:
nmcli.module.fail_json(msg="Please specify an interface name for the connection")
if nmcli.state == 'absent':
if nmcli.connection_exists():
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = nmcli.down_connection()
(rc, out, err) = nmcli.remove_connection()
if rc != 0:
module.fail_json(name=('No Connection named %s exists' % nmcli.conn_name), msg=err, rc=rc)
elif nmcli.state == 'present':
if nmcli.connection_exists():
# modify connection (note: this function is check mode aware)
# result['Connection']=('Connection %s of Type %s is not being added' % (nmcli.conn_name, nmcli.type))
result['Exists'] = 'Connections do exist so we are modifying them'
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = nmcli.modify_connection()
if not nmcli.connection_exists():
result['Connection'] = ('Connection %s of Type %s is being added' % (nmcli.conn_name, nmcli.type))
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = nmcli.create_connection()
if rc is not None and rc != 0:
module.fail_json(name=nmcli.conn_name, msg=err, rc=rc)
if rc is None:
result['changed'] = False
else:
result['changed'] = True
if out:
result['stdout'] = out
if err:
result['stderr'] = err
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,646 |
nmcli not working on Centos8: `No package NetworkManager-glib available`
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
The [documentation](https://docs.ansible.com/ansible/latest/modules/nmcli_module.html) lists some requirements to be installed:
- NetworkManager-glib
- libnm-qt-devel.x86_64
- nm-connection-editor.x86_64
- libsemanage-python
- policycoreutils-python
However, there is no package `NetworkManager-glib` available in CentOS 8, and it is not clear how to install the required libraries.
```bash
fatal: [node002]: FAILED! => {"changed": false, "failures": ["No package NetworkManager-glib available."], "msg": "Failed to install some of the specified packages", "rc": 1, "results": []}
```
Only `NetworkManager-libnm.x86_64` (described as "Libraries for adding NetworkManager support to applications (new API)") seems to be available, but installing it only leads to the following failure:
```bash
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ValueError: Namespace NMClient not available
fatal: [node001]: FAILED! => {"changed": false, "msg": "Failed to import the required Python library (NetworkManager glib API) on node001's Python /usr/bin/python3. Please read module documentation and install in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter"}
```
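For reference, here is a minimal probe (a hypothetical snippet; run it with the same Python interpreter Ansible uses on the target) showing which GObject introspection namespaces are available. The module imports the legacy `NMClient`/`NetworkManager` namespaces, while `NetworkManager-libnm` appears to ship only the newer `NM` namespace:
```python
# Probe the GObject introspection namespaces. That NetworkManager-libnm ships
# the newer 'NM' namespace is an assumption; 'NMClient' and 'NetworkManager'
# are the namespaces nmcli.py actually imports. Requires PyGObject.
import gi

for namespace in ('NMClient', 'NetworkManager', 'NM'):
    try:
        gi.require_version(namespace, '1.0')
        print('%s 1.0: available' % namespace)
    except ValueError as exc:
        print('%s 1.0: missing (%s)' % (namespace, exc))
```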
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
component `nmcli`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.9.0
config file = /home/aedu/ansible.cfg
configured module search path = ['/home/aedu/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.4 (default, Oct 4 2019, 06:57:26) [GCC 9.2.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
DEFAULT_HOST_LIST(/home/aedu/ansible.cfg) = ['/home/aedu/inventory.yml']
DEFAULT_REMOTE_USER(/home/aedu/ansible.cfg) = ansible
DEFAULT_ROLES_PATH(/home/aedu/ansible.cfg) = ['/home/aedu/roles']
DEFAULT_TIMEOUT(/home/aedu/ansible.cfg) = 10
DEFAULT_VAULT_PASSWORD_FILE(/home/aedu/ansible.cfg) = /home/aedu/.ssh/wyssmann
RETRY_FILES_SAVE_PATH(/home/aedu/ansible.cfg) = /home/aedu/.ansible-retry
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network
device firmware, etc. -->
```bash
# uname -a
Linux node002 4.18.0-80.11.2.el8_0.x86_64 #1 SMP Tue Sep 24 11:32:19 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
# cat /etc/os-release
NAME="CentOS Linux"
VERSION="8 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="CentOS Linux 8 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-8"
CENTOS_MANTISBT_PROJECT_VERSION="8"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="8"
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Since `python3` is used, the package names are slightly different, so I've updated them:
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: install needed network manager libs
package:
name:
- NetworkManager-libnm
- nm-connection-editor
- python3-libsemanage
- python3-policycoreutils
state: present
- name: Configure {{ iface.name }}
nmcli:
conn_name: '{{ iface.name }}'
ifname: '{{ iface.name }}'
type: ethernet
ip4: '{{ ipv4.address }}'
gw4: '{{ ipv4.gateway }}'
ip6: '{{ ipv6.address }}'
gw6: '{{ ipv6.gateway }}'
state: present
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
nmcli works fine
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
fatal: [node002]: FAILED! => {"changed": false, "failures": ["No package NetworkManager-glib available."], "msg": "Failed to install some of the specified packages", "rc": 1, "results": []}
```
|
https://github.com/ansible/ansible/issues/64646
|
https://github.com/ansible/ansible/pull/65726
|
7ee3103a86f0179faead92d2481ff175d9a69747
|
663171e21820f1696b93c3f182626b0fd006b61b
| 2019-11-10T12:08:15Z |
python
| 2019-12-21T05:58:58Z |
lib/ansible/modules/net_tools/nmcli.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2015, Chris Long <[email protected]> <[email protected]>
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = r'''
---
module: nmcli
author:
- Chris Long (@alcamie101)
short_description: Manage Networking
requirements:
- dbus
- NetworkManager-glib
- nmcli
version_added: "2.0"
description:
    - Manage network devices. Create, modify and manage various connection and device types, e.g. ethernet, team, bond, vlan, etc.
- 'On CentOS and Fedora like systems, the requirements can be met by installing the following packages: NetworkManager-glib,
libnm-qt-devel.x86_64, nm-connection-editor.x86_64, libsemanage-python, policycoreutils-python.'
- 'On Ubuntu and Debian like systems, the requirements can be met by installing the following packages: network-manager,
python-dbus (or python3-dbus, depending on the Python version in use), libnm-glib-dev.'
- 'On openSUSE, the requirements can be met by installing the following packages: NetworkManager, python2-dbus-python (or
python3-dbus-python), typelib-1_0-NMClient-1_0 and typelib-1_0-NetworkManager-1_0.'
options:
state:
description:
- Whether the device should exist or not, taking action if the state is different from what is stated.
type: str
required: true
choices: [ absent, present ]
autoconnect:
description:
- Whether the connection should start on boot.
- Whether the connection profile can be automatically activated
type: bool
default: yes
conn_name:
description:
- The name used to call the connection. Pattern is <type>[-<ifname>][-<num>].
type: str
required: true
ifname:
description:
- The interface to bind the connection to.
- The connection will only be applicable to this interface name.
- A special value of C('*') can be used for interface-independent connections.
- The ifname argument is mandatory for all connection types except bond, team, bridge and vlan.
- This parameter defaults to C(conn_name) when left unset.
type: str
type:
description:
- This is the type of device or network connection that you wish to create or modify.
- Type C(generic) is added in Ansible 2.5.
type: str
choices: [ bond, bond-slave, bridge, bridge-slave, ethernet, generic, ipip, sit, team, team-slave, vlan, vxlan ]
mode:
description:
- This is the type of device or network connection that you wish to create for a bond, team or bridge.
type: str
choices: [ 802.3ad, active-backup, balance-alb, balance-rr, balance-tlb, balance-xor, broadcast ]
default: balance-rr
master:
description:
            - Master interface (ifname, connection UUID or conn_name) of the bridge, team or bond master connection profile.
type: str
ip4:
description:
- The IPv4 address to this interface.
- Use the format C(192.0.2.24/24).
type: str
gw4:
description:
- The IPv4 gateway for this interface.
- Use the format C(192.0.2.1).
type: str
dns4:
description:
- A list of up to 3 dns servers.
- IPv4 format e.g. to add two IPv4 DNS server addresses, use C(192.0.2.53 198.51.100.53).
type: list
dns4_search:
description:
- A list of DNS search domains.
type: list
version_added: '2.5'
ip6:
description:
- The IPv6 address to this interface.
- Use the format C(abbe::cafe).
type: str
gw6:
description:
- The IPv6 gateway for this interface.
- Use the format C(2001:db8::1).
type: str
dns6:
description:
- A list of up to 3 dns servers.
- IPv6 format e.g. to add two IPv6 DNS server addresses, use C(2001:4860:4860::8888 2001:4860:4860::8844).
type: list
dns6_search:
description:
- A list of DNS search domains.
type: list
version_added: '2.5'
mtu:
description:
            - The connection MTU, e.g. 9000. This cannot be applied when creating the interface; it is set once the interface has been created.
- Can be used when modifying Team, VLAN, Ethernet (Future plans to implement wifi, pppoe, infiniband)
- This parameter defaults to C(1500) when unset.
type: int
dhcp_client_id:
description:
- DHCP Client Identifier sent to the DHCP server.
type: str
version_added: "2.5"
primary:
description:
            - This is only used with bond and is the primary interface name (for "active-backup" mode); this is usually the 'ifname'.
type: str
miimon:
description:
- This is only used with bond - miimon.
- This parameter defaults to C(100) when unset.
type: int
downdelay:
description:
- This is only used with bond - downdelay.
type: int
updelay:
description:
- This is only used with bond - updelay.
type: int
arp_interval:
description:
- This is only used with bond - ARP interval.
type: int
arp_ip_target:
description:
- This is only used with bond - ARP IP target.
type: str
stp:
description:
- This is only used with bridge and controls whether Spanning Tree Protocol (STP) is enabled for this bridge.
type: bool
default: yes
priority:
description:
- This is only used with 'bridge' - sets STP priority.
type: int
default: 128
forwarddelay:
description:
- This is only used with bridge - [forward-delay <2-30>] STP forwarding delay, in seconds.
type: int
default: 15
hellotime:
description:
- This is only used with bridge - [hello-time <1-10>] STP hello time, in seconds.
type: int
default: 2
maxage:
description:
- This is only used with bridge - [max-age <6-42>] STP maximum message age, in seconds.
type: int
default: 20
ageingtime:
description:
- This is only used with bridge - [ageing-time <0-1000000>] the Ethernet MAC address aging time, in seconds.
type: int
default: 300
mac:
description:
- This is only used with bridge - MAC address of the bridge.
- Note this requires a recent kernel feature, originally introduced in 3.15 upstream kernel.
slavepriority:
description:
- This is only used with 'bridge-slave' - [<0-63>] - STP priority of this slave.
type: int
default: 32
path_cost:
description:
- This is only used with 'bridge-slave' - [<1-65535>] - STP port cost for destinations via this slave.
type: int
default: 100
hairpin:
description:
- This is only used with 'bridge-slave' - 'hairpin mode' for the slave, which allows frames to be sent back out through the slave the
frame was received on.
type: bool
default: yes
vlanid:
description:
- This is only used with VLAN - VLAN ID in range <0-4095>.
type: int
vlandev:
description:
- This is only used with VLAN - parent device this VLAN is on, can use ifname.
type: str
flags:
description:
- This is only used with VLAN - flags.
type: str
ingress:
description:
- This is only used with VLAN - VLAN ingress priority mapping.
type: str
egress:
description:
- This is only used with VLAN - VLAN egress priority mapping.
type: str
vxlan_id:
description:
- This is only used with VXLAN - VXLAN ID.
type: int
version_added: "2.8"
vxlan_remote:
description:
- This is only used with VXLAN - VXLAN destination IP address.
type: str
version_added: "2.8"
vxlan_local:
description:
- This is only used with VXLAN - VXLAN local IP address.
type: str
version_added: "2.8"
ip_tunnel_dev:
description:
- This is used with IPIP/SIT - parent device this IPIP/SIT tunnel, can use ifname.
type: str
version_added: "2.8"
ip_tunnel_remote:
description:
- This is used with IPIP/SIT - IPIP/SIT destination IP address.
type: str
version_added: "2.8"
ip_tunnel_local:
description:
- This is used with IPIP/SIT - IPIP/SIT local IP address.
type: str
version_added: "2.8"
'''
EXAMPLES = r'''
# These examples are using the following inventory:
#
# ## Directory layout:
#
# |_/inventory/cloud-hosts
# | /group_vars/openstack-stage.yml
# | /host_vars/controller-01.openstack.host.com
# | /host_vars/controller-02.openstack.host.com
# |_/playbook/library/nmcli.py
# | /playbook-add.yml
# | /playbook-del.yml
# ```
#
# ## inventory examples
# ### groups_vars
# ```yml
# ---
# #devops_os_define_network
# storage_gw: "192.0.2.254"
# external_gw: "198.51.100.254"
# tenant_gw: "203.0.113.254"
#
# #Team vars
# nmcli_team:
# - conn_name: tenant
# ip4: '{{ tenant_ip }}'
# gw4: '{{ tenant_gw }}'
# - conn_name: external
# ip4: '{{ external_ip }}'
# gw4: '{{ external_gw }}'
# - conn_name: storage
# ip4: '{{ storage_ip }}'
# gw4: '{{ storage_gw }}'
# nmcli_team_slave:
# - conn_name: em1
# ifname: em1
# master: tenant
# - conn_name: em2
# ifname: em2
# master: tenant
# - conn_name: p2p1
# ifname: p2p1
# master: storage
# - conn_name: p2p2
# ifname: p2p2
# master: external
#
# #bond vars
# nmcli_bond:
# - conn_name: tenant
# ip4: '{{ tenant_ip }}'
# gw4: ''
# mode: balance-rr
# - conn_name: external
# ip4: '{{ external_ip }}'
# gw4: ''
# mode: balance-rr
# - conn_name: storage
# ip4: '{{ storage_ip }}'
# gw4: '{{ storage_gw }}'
# mode: balance-rr
# nmcli_bond_slave:
# - conn_name: em1
# ifname: em1
# master: tenant
# - conn_name: em2
# ifname: em2
# master: tenant
# - conn_name: p2p1
# ifname: p2p1
# master: storage
# - conn_name: p2p2
# ifname: p2p2
# master: external
#
# #ethernet vars
# nmcli_ethernet:
# - conn_name: em1
# ifname: em1
# ip4: '{{ tenant_ip }}'
# gw4: '{{ tenant_gw }}'
# - conn_name: em2
# ifname: em2
# ip4: '{{ tenant_ip1 }}'
# gw4: '{{ tenant_gw }}'
# - conn_name: p2p1
# ifname: p2p1
# ip4: '{{ storage_ip }}'
# gw4: '{{ storage_gw }}'
# - conn_name: p2p2
# ifname: p2p2
# ip4: '{{ external_ip }}'
# gw4: '{{ external_gw }}'
# ```
#
# ### host_vars
# ```yml
# ---
# storage_ip: "192.0.2.91/23"
# external_ip: "198.51.100.23/21"
# tenant_ip: "203.0.113.77/23"
# ```
## playbook-add.yml example
---
- hosts: openstack-stage
remote_user: root
tasks:
- name: install needed network manager libs
package:
name:
- NetworkManager-glib
- nm-connection-editor
- libsemanage-python
- policycoreutils-python
state: present
##### Working with all cloud nodes - Teaming
- name: Try nmcli add team - conn_name only & ip4 gw4
nmcli:
type: team
conn_name: '{{ item.conn_name }}'
ip4: '{{ item.ip4 }}'
gw4: '{{ item.gw4 }}'
state: present
with_items:
- '{{ nmcli_team }}'
- name: Try nmcli add teams-slave
nmcli:
type: team-slave
conn_name: '{{ item.conn_name }}'
ifname: '{{ item.ifname }}'
master: '{{ item.master }}'
state: present
with_items:
- '{{ nmcli_team_slave }}'
###### Working with all cloud nodes - Bonding
- name: Try nmcli add bond - conn_name only & ip4 gw4 mode
nmcli:
type: bond
conn_name: '{{ item.conn_name }}'
ip4: '{{ item.ip4 }}'
gw4: '{{ item.gw4 }}'
mode: '{{ item.mode }}'
state: present
with_items:
- '{{ nmcli_bond }}'
- name: Try nmcli add bond-slave
nmcli:
type: bond-slave
conn_name: '{{ item.conn_name }}'
ifname: '{{ item.ifname }}'
master: '{{ item.master }}'
state: present
with_items:
- '{{ nmcli_bond_slave }}'
##### Working with all cloud nodes - Ethernet
- name: Try nmcli add Ethernet - conn_name only & ip4 gw4
nmcli:
type: ethernet
conn_name: '{{ item.conn_name }}'
ip4: '{{ item.ip4 }}'
gw4: '{{ item.gw4 }}'
state: present
with_items:
- '{{ nmcli_ethernet }}'
## playbook-del.yml example
- hosts: openstack-stage
remote_user: root
tasks:
- name: Try nmcli del team - multiple
nmcli:
conn_name: '{{ item.conn_name }}'
state: absent
with_items:
- conn_name: em1
- conn_name: em2
- conn_name: p1p1
- conn_name: p1p2
- conn_name: p2p1
- conn_name: p2p2
- conn_name: tenant
- conn_name: storage
- conn_name: external
- conn_name: team-em1
- conn_name: team-em2
- conn_name: team-p1p1
- conn_name: team-p1p2
- conn_name: team-p2p1
- conn_name: team-p2p2
- name: Add an Ethernet connection with static IP configuration
nmcli:
conn_name: my-eth1
ifname: eth1
type: ethernet
ip4: 192.0.2.100/24
gw4: 192.0.2.1
state: present
- name: Add a Team connection with static IP configuration
nmcli:
conn_name: my-team1
ifname: my-team1
type: team
ip4: 192.0.2.100/24
gw4: 192.0.2.1
state: present
autoconnect: yes
- name: Optionally, at the same time specify IPv6 addresses for the device
nmcli:
conn_name: my-eth1
ifname: eth1
type: ethernet
ip4: 192.0.2.100/24
gw4: 192.0.2.1
ip6: 2001:db8::cafe
gw6: 2001:db8::1
state: present
- name: Add two IPv4 DNS server addresses
nmcli:
conn_name: my-eth1
type: ethernet
dns4:
- 192.0.2.53
- 198.51.100.53
state: present
- name: Make a profile usable for all compatible Ethernet interfaces
nmcli:
    type: ethernet
    conn_name: my-eth1
ifname: '*'
state: present
- name: Change the property of a setting e.g. MTU
nmcli:
conn_name: my-eth1
mtu: 9000
type: ethernet
state: present
- name: Add VxLan
nmcli:
type: vxlan
conn_name: vxlan_test1
vxlan_id: 16
vxlan_local: 192.168.1.2
vxlan_remote: 192.168.1.5
- name: Add ipip
nmcli:
type: ipip
conn_name: ipip_test1
ip_tunnel_dev: eth0
ip_tunnel_local: 192.168.1.2
ip_tunnel_remote: 192.168.1.5
- name: Add sit
nmcli:
type: sit
conn_name: sit_test1
ip_tunnel_dev: eth0
ip_tunnel_local: 192.168.1.2
ip_tunnel_remote: 192.168.1.5
# nmcli exits with status 0 if it succeeds and exits with a status greater
# than zero when there is a failure. The following list of status codes may be
# returned:
#
# - 0 Success - indicates the operation succeeded
# - 1 Unknown or unspecified error
# - 2 Invalid user input, wrong nmcli invocation
# - 3 Timeout expired (see --wait option)
# - 4 Connection activation failed
# - 5 Connection deactivation failed
# - 6 Disconnecting device failed
# - 7 Connection deletion failed
# - 8 NetworkManager is not running
# - 9 nmcli and NetworkManager versions mismatch
# - 10 Connection, device, or access point does not exist.
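#
# For example, a failed activation surfaces as rc=4; main() passes any non-zero
# rc straight through to fail_json so the task output includes it.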
'''
RETURN = r"""#
"""
import traceback
DBUS_IMP_ERR = None
try:
import dbus
HAVE_DBUS = True
except ImportError:
DBUS_IMP_ERR = traceback.format_exc()
HAVE_DBUS = False
NM_CLIENT_IMP_ERR = None
try:
import gi
gi.require_version('NMClient', '1.0')
gi.require_version('NetworkManager', '1.0')
from gi.repository import NetworkManager, NMClient
HAVE_NM_CLIENT = True
except (ImportError, ValueError):
NM_CLIENT_IMP_ERR = traceback.format_exc()
HAVE_NM_CLIENT = False
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
class Nmcli(object):
"""
This is the generic nmcli manipulation class that is subclassed based on platform.
A subclass may wish to override the following action methods:-
- create_connection()
- delete_connection()
- modify_connection()
- show_connection()
- up_connection()
- down_connection()
All subclasses MUST define platform and distribution (which may be None).
"""
platform = 'Generic'
distribution = None
if HAVE_DBUS:
bus = dbus.SystemBus()
# The following is going to be used in dbus code
DEVTYPES = {
1: "Ethernet",
2: "Wi-Fi",
5: "Bluetooth",
6: "OLPC",
7: "WiMAX",
8: "Modem",
9: "InfiniBand",
10: "Bond",
11: "VLAN",
12: "ADSL",
13: "Bridge",
14: "Generic",
15: "Team",
16: "VxLan",
17: "ipip",
18: "sit",
}
STATES = {
0: "Unknown",
10: "Unmanaged",
20: "Unavailable",
30: "Disconnected",
40: "Prepare",
50: "Config",
60: "Need Auth",
70: "IP Config",
80: "IP Check",
90: "Secondaries",
100: "Activated",
110: "Deactivating",
120: "Failed"
}
def __init__(self, module):
self.module = module
self.state = module.params['state']
self.autoconnect = module.params['autoconnect']
self.conn_name = module.params['conn_name']
self.master = module.params['master']
self.ifname = module.params['ifname']
self.type = module.params['type']
self.ip4 = module.params['ip4']
self.gw4 = module.params['gw4']
self.dns4 = ' '.join(module.params['dns4']) if module.params.get('dns4') else None
self.dns4_search = ' '.join(module.params['dns4_search']) if module.params.get('dns4_search') else None
self.ip6 = module.params['ip6']
self.gw6 = module.params['gw6']
self.dns6 = ' '.join(module.params['dns6']) if module.params.get('dns6') else None
self.dns6_search = ' '.join(module.params['dns6_search']) if module.params.get('dns6_search') else None
self.mtu = module.params['mtu']
self.stp = module.params['stp']
self.priority = module.params['priority']
self.mode = module.params['mode']
self.miimon = module.params['miimon']
self.primary = module.params['primary']
self.downdelay = module.params['downdelay']
self.updelay = module.params['updelay']
self.arp_interval = module.params['arp_interval']
self.arp_ip_target = module.params['arp_ip_target']
self.slavepriority = module.params['slavepriority']
self.forwarddelay = module.params['forwarddelay']
self.hellotime = module.params['hellotime']
self.maxage = module.params['maxage']
self.ageingtime = module.params['ageingtime']
self.hairpin = module.params['hairpin']
self.path_cost = module.params['path_cost']
self.mac = module.params['mac']
self.vlanid = module.params['vlanid']
self.vlandev = module.params['vlandev']
self.flags = module.params['flags']
self.ingress = module.params['ingress']
self.egress = module.params['egress']
self.vxlan_id = module.params['vxlan_id']
self.vxlan_local = module.params['vxlan_local']
self.vxlan_remote = module.params['vxlan_remote']
self.ip_tunnel_dev = module.params['ip_tunnel_dev']
self.ip_tunnel_local = module.params['ip_tunnel_local']
self.ip_tunnel_remote = module.params['ip_tunnel_remote']
self.nmcli_bin = self.module.get_bin_path('nmcli', True)
self.dhcp_client_id = module.params['dhcp_client_id']
def execute_command(self, cmd, use_unsafe_shell=False, data=None):
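        # Thin wrapper around AnsibleModule.run_command(); returns the
        # (rc, stdout, stderr) tuple of the executed command unchanged.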
return self.module.run_command(cmd, use_unsafe_shell=use_unsafe_shell, data=data)
def merge_secrets(self, proxy, config, setting_name):
try:
# returns a dict of dicts mapping name::setting, where setting is a dict
# mapping key::value. Each member of the 'setting' dict is a secret
secrets = proxy.GetSecrets(setting_name)
# Copy the secrets into our connection config
for setting in secrets:
for key in secrets[setting]:
config[setting_name][key] = secrets[setting][key]
except Exception:
pass
def dict_to_string(self, d):
# Try to trivially translate a dictionary's elements into nice string
# formatting.
dstr = ""
for key in d:
val = d[key]
str_val = ""
add_string = True
if isinstance(val, dbus.Array):
for elt in val:
if isinstance(elt, dbus.Byte):
str_val += "%s " % int(elt)
elif isinstance(elt, dbus.String):
str_val += "%s" % elt
elif isinstance(val, dbus.Dictionary):
dstr += self.dict_to_string(val)
add_string = False
else:
str_val = val
if add_string:
dstr += "%s: %s\n" % (key, str_val)
return dstr
def connection_to_string(self, config):
# dump a connection configuration to use in list_connection_info
setting_list = []
for setting_name in config:
setting_list.append(self.dict_to_string(config[setting_name]))
return setting_list
@staticmethod
def bool_to_string(boolean):
if boolean:
return "yes"
else:
return "no"
def list_connection_info(self):
# Ask the settings service for the list of connections it provides
bus = dbus.SystemBus()
service_name = "org.freedesktop.NetworkManager"
settings = None
try:
proxy = bus.get_object(service_name, "/org/freedesktop/NetworkManager/Settings")
settings = dbus.Interface(proxy, "org.freedesktop.NetworkManager.Settings")
except dbus.exceptions.DBusException as e:
self.module.fail_json(msg="Unable to read Network Manager settings from DBus system bus: %s" % to_native(e),
details="Please check if NetworkManager is installed and"
"service network-manager is started.")
connection_paths = settings.ListConnections()
connection_list = []
# List each connection's name, UUID, and type
for path in connection_paths:
con_proxy = bus.get_object(service_name, path)
settings_connection = dbus.Interface(con_proxy, "org.freedesktop.NetworkManager.Settings.Connection")
config = settings_connection.GetSettings()
# Now get secrets too; we grab the secrets for each type of connection
# (since there isn't a "get all secrets" call because most of the time
# you only need 'wifi' secrets or '802.1x' secrets, not everything) and
# merge that into the configuration data - To use at a later stage
self.merge_secrets(settings_connection, config, '802-11-wireless')
self.merge_secrets(settings_connection, config, '802-11-wireless-security')
self.merge_secrets(settings_connection, config, '802-1x')
self.merge_secrets(settings_connection, config, 'gsm')
self.merge_secrets(settings_connection, config, 'cdma')
self.merge_secrets(settings_connection, config, 'ppp')
# Get the details of the 'connection' setting
s_con = config['connection']
connection_list.append(s_con['id'])
connection_list.append(s_con['uuid'])
connection_list.append(s_con['type'])
connection_list.append(self.connection_to_string(config))
return connection_list
def connection_exists(self):
# we are going to use name and type in this instance to find if that connection exists and is of type x
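        # Note: list_connection_info() returns a flat list interleaving each
        # connection's id, uuid, type and rendered settings, so conn_name is
        # compared against every one of those items in turn.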
connections = self.list_connection_info()
for con_item in connections:
if self.conn_name == con_item:
return True
def down_connection(self):
cmd = [self.nmcli_bin, 'con', 'down', self.conn_name]
return self.execute_command(cmd)
def up_connection(self):
cmd = [self.nmcli_bin, 'con', 'up', self.conn_name]
return self.execute_command(cmd)
def create_connection_team(self):
cmd = [self.nmcli_bin, 'con', 'add', 'type', 'team', 'con-name']
# format for creating team interface
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
cmd.append('ifname')
if self.ifname is not None:
cmd.append(self.ifname)
elif self.conn_name is not None:
cmd.append(self.conn_name)
options = {
'ip4': self.ip4,
'gw4': self.gw4,
'ip6': self.ip6,
'gw6': self.gw6,
'autoconnect': self.bool_to_string(self.autoconnect),
'ipv4.dns-search': self.dns4_search,
'ipv6.dns-search': self.dns6_search,
'ipv4.dhcp-client-id': self.dhcp_client_id,
}
for key, value in options.items():
if value is not None:
cmd.extend([key, value])
return cmd
def modify_connection_team(self):
cmd = [self.nmcli_bin, 'con', 'mod', self.conn_name]
options = {
'ipv4.address': self.ip4,
'ipv4.gateway': self.gw4,
'ipv4.dns': self.dns4,
'ipv6.address': self.ip6,
'ipv6.gateway': self.gw6,
'ipv6.dns': self.dns6,
'autoconnect': self.bool_to_string(self.autoconnect),
'ipv4.dns-search': self.dns4_search,
'ipv6.dns-search': self.dns6_search,
'ipv4.dhcp-client-id': self.dhcp_client_id,
}
for key, value in options.items():
if value is not None:
cmd.extend([key, value])
return cmd
def create_connection_team_slave(self):
cmd = [self.nmcli_bin, 'connection', 'add', 'type', self.type, 'con-name']
# format for creating team-slave interface
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
cmd.append('ifname')
if self.ifname is not None:
cmd.append(self.ifname)
elif self.conn_name is not None:
cmd.append(self.conn_name)
cmd.append('master')
        if self.master is not None:
            cmd.append(self.master)
return cmd
def modify_connection_team_slave(self):
cmd = [self.nmcli_bin, 'con', 'mod', self.conn_name, 'connection.master', self.master]
# format for modifying team-slave interface
if self.mtu is not None:
cmd.append('802-3-ethernet.mtu')
cmd.append(self.mtu)
return cmd
def create_connection_bond(self):
cmd = [self.nmcli_bin, 'con', 'add', 'type', 'bond', 'con-name']
# format for creating bond interface
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
cmd.append('ifname')
if self.ifname is not None:
cmd.append(self.ifname)
elif self.conn_name is not None:
cmd.append(self.conn_name)
options = {
'mode': self.mode,
'ip4': self.ip4,
'gw4': self.gw4,
'ip6': self.ip6,
'gw6': self.gw6,
'autoconnect': self.bool_to_string(self.autoconnect),
'ipv4.dns-search': self.dns4_search,
'ipv6.dns-search': self.dns6_search,
'miimon': self.miimon,
'downdelay': self.downdelay,
'updelay': self.updelay,
'arp-interval': self.arp_interval,
'arp-ip-target': self.arp_ip_target,
'primary': self.primary,
'ipv4.dhcp-client-id': self.dhcp_client_id,
}
for key, value in options.items():
if value is not None:
cmd.extend([key, value])
return cmd
def modify_connection_bond(self):
cmd = [self.nmcli_bin, 'con', 'mod', self.conn_name]
# format for modifying bond interface
options = {
'ipv4.address': self.ip4,
'ipv4.gateway': self.gw4,
'ipv4.dns': self.dns4,
'ipv6.address': self.ip6,
'ipv6.gateway': self.gw6,
'ipv6.dns': self.dns6,
'autoconnect': self.bool_to_string(self.autoconnect),
'ipv4.dns-search': self.dns4_search,
'ipv6.dns-search': self.dns6_search,
'miimon': self.miimon,
'downdelay': self.downdelay,
'updelay': self.updelay,
'arp-interval': self.arp_interval,
'arp-ip-target': self.arp_ip_target,
'ipv4.dhcp-client-id': self.dhcp_client_id,
}
for key, value in options.items():
if value is not None:
cmd.extend([key, value])
return cmd
def create_connection_bond_slave(self):
cmd = [self.nmcli_bin, 'connection', 'add', 'type', 'bond-slave', 'con-name']
# format for creating bond-slave interface
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
cmd.append('ifname')
if self.ifname is not None:
cmd.append(self.ifname)
elif self.conn_name is not None:
cmd.append(self.conn_name)
cmd.append('master')
        if self.master is not None:
            cmd.append(self.master)
return cmd
def modify_connection_bond_slave(self):
cmd = [self.nmcli_bin, 'con', 'mod', self.conn_name, 'connection.master', self.master]
# format for modifying bond-slave interface
return cmd
def create_connection_ethernet(self, conn_type='ethernet'):
# format for creating ethernet interface
# To add an Ethernet connection with static IP configuration, issue a command as follows
# - nmcli: name=add conn_name=my-eth1 ifname=eth1 type=ethernet ip4=192.0.2.100/24 gw4=192.0.2.1 state=present
# nmcli con add con-name my-eth1 ifname eth1 type ethernet ip4 192.0.2.100/24 gw4 192.0.2.1
cmd = [self.nmcli_bin, 'con', 'add', 'type']
if conn_type == 'ethernet':
cmd.append('ethernet')
elif conn_type == 'generic':
cmd.append('generic')
cmd.append('con-name')
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
cmd.append('ifname')
if self.ifname is not None:
cmd.append(self.ifname)
elif self.conn_name is not None:
cmd.append(self.conn_name)
options = {
'ip4': self.ip4,
'gw4': self.gw4,
'ip6': self.ip6,
'gw6': self.gw6,
'autoconnect': self.bool_to_string(self.autoconnect),
'ipv4.dns-search': self.dns4_search,
'ipv6.dns-search': self.dns6_search,
'ipv4.dhcp-client-id': self.dhcp_client_id,
}
for key, value in options.items():
if value is not None:
cmd.extend([key, value])
return cmd
def modify_connection_ethernet(self, conn_type='ethernet'):
cmd = [self.nmcli_bin, 'con', 'mod', self.conn_name]
# format for modifying ethernet interface
# To modify an Ethernet connection with static IP configuration, issue a command as follows
# - nmcli: conn_name=my-eth1 ifname=eth1 type=ethernet ip4=192.0.2.100/24 gw4=192.0.2.1 state=present
# nmcli con mod con-name my-eth1 ifname eth1 type ethernet ipv4.address 192.0.2.100/24 ipv4.gateway 192.0.2.1
options = {
'ipv4.address': self.ip4,
'ipv4.gateway': self.gw4,
'ipv4.dns': self.dns4,
'ipv6.address': self.ip6,
'ipv6.gateway': self.gw6,
'ipv6.dns': self.dns6,
'autoconnect': self.bool_to_string(self.autoconnect),
'ipv4.dns-search': self.dns4_search,
'ipv6.dns-search': self.dns6_search,
'802-3-ethernet.mtu': self.mtu,
'ipv4.dhcp-client-id': self.dhcp_client_id,
}
for key, value in options.items():
if value is not None:
if key == '802-3-ethernet.mtu' and conn_type != 'ethernet':
continue
cmd.extend([key, value])
return cmd
def create_connection_bridge(self):
# format for creating bridge interface
        # To add a Bridge connection with static IP configuration, issue a command as follows
# - nmcli: name=add conn_name=my-eth1 ifname=eth1 type=bridge ip4=192.0.2.100/24 gw4=192.0.2.1 state=present
# nmcli con add con-name my-eth1 ifname eth1 type bridge ip4 192.0.2.100/24 gw4 192.0.2.1
cmd = [self.nmcli_bin, 'con', 'add', 'type', 'bridge', 'con-name']
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
cmd.append('ifname')
if self.ifname is not None:
cmd.append(self.ifname)
elif self.conn_name is not None:
cmd.append(self.conn_name)
options = {
'ip4': self.ip4,
'gw4': self.gw4,
'ip6': self.ip6,
'gw6': self.gw6,
'autoconnect': self.bool_to_string(self.autoconnect),
'bridge.ageing-time': self.ageingtime,
'bridge.forward-delay': self.forwarddelay,
'bridge.hello-time': self.hellotime,
'bridge.mac-address': self.mac,
'bridge.max-age': self.maxage,
'bridge.priority': self.priority,
'bridge.stp': self.bool_to_string(self.stp)
}
for key, value in options.items():
if value is not None:
cmd.extend([key, value])
return cmd
def modify_connection_bridge(self):
# format for modifying bridge interface
        # To modify a Bridge connection with static IP configuration, issue a command as follows
# - nmcli: name=mod conn_name=my-eth1 ifname=eth1 type=bridge ip4=192.0.2.100/24 gw4=192.0.2.1 state=present
# nmcli con mod my-eth1 ifname eth1 type bridge ip4 192.0.2.100/24 gw4 192.0.2.1
cmd = [self.nmcli_bin, 'con', 'mod', self.conn_name]
options = {
'ipv4.address': self.ip4,
'ipv4.gateway': self.gw4,
'ipv6.address': self.ip6,
'ipv6.gateway': self.gw6,
'autoconnect': self.bool_to_string(self.autoconnect),
'bridge.ageing-time': self.ageingtime,
'bridge.forward-delay': self.forwarddelay,
'bridge.hello-time': self.hellotime,
'bridge.mac-address': self.mac,
'bridge.max-age': self.maxage,
'bridge.priority': self.priority,
'bridge.stp': self.bool_to_string(self.stp)
}
for key, value in options.items():
if value is not None:
cmd.extend([key, value])
return cmd
def create_connection_bridge_slave(self):
# format for creating bridge-slave interface
cmd = [self.nmcli_bin, 'con', 'add', 'type', 'bridge-slave', 'con-name']
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
cmd.append('ifname')
if self.ifname is not None:
cmd.append(self.ifname)
elif self.conn_name is not None:
cmd.append(self.conn_name)
options = {
'master': self.master,
'bridge-port.path-cost': self.path_cost,
'bridge-port.hairpin': self.bool_to_string(self.hairpin),
'bridge-port.priority': self.slavepriority,
}
for key, value in options.items():
if value is not None:
cmd.extend([key, value])
return cmd
def modify_connection_bridge_slave(self):
# format for modifying bridge-slave interface
cmd = [self.nmcli_bin, 'con', 'mod', self.conn_name]
options = {
'master': self.master,
'bridge-port.path-cost': self.path_cost,
'bridge-port.hairpin': self.bool_to_string(self.hairpin),
'bridge-port.priority': self.slavepriority,
}
for key, value in options.items():
if value is not None:
cmd.extend([key, value])
return cmd
def create_connection_vlan(self):
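# e.g. nmcli con add type vlan con-name vlan5 ifname vlan5 dev eth0 id 5 ip4 192.0.2.100/24 gw4 192.0.2.1 autoconnect yes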
cmd = [self.nmcli_bin]
cmd.append('con')
cmd.append('add')
cmd.append('type')
cmd.append('vlan')
cmd.append('con-name')
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
else:
cmd.append('vlan%s' % self.vlanid)
cmd.append('ifname')
if self.ifname is not None:
cmd.append(self.ifname)
elif self.conn_name is not None:
cmd.append(self.conn_name)
else:
cmd.append('vlan%s' % self.vlanid)
params = {'dev': self.vlandev,
'id': str(self.vlanid),
'ip4': self.ip4 or '',
'gw4': self.gw4 or '',
'ip6': self.ip6 or '',
'gw6': self.gw6 or '',
'autoconnect': self.bool_to_string(self.autoconnect)
}
for k, v in params.items():
cmd.extend([k, v])
return cmd
def modify_connection_vlan(self):
cmd = [self.nmcli_bin]
cmd.append('con')
cmd.append('mod')
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
else:
cmd.append('vlan%s' % self.vlanid)
params = {'vlan.parent': self.vlandev,
'vlan.id': str(self.vlanid),
'ipv4.address': self.ip4 or '',
'ipv4.gateway': self.gw4 or '',
'ipv4.dns': self.dns4 or '',
'ipv6.address': self.ip6 or '',
'ipv6.gateway': self.gw6 or '',
'ipv6.dns': self.dns6 or '',
'autoconnect': self.bool_to_string(self.autoconnect)
}
for k, v in params.items():
cmd.extend([k, v])
return cmd
def create_connection_vxlan(self):
cmd = [self.nmcli_bin, 'con', 'add', 'type', 'vxlan', 'con-name']
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
else:
cmd.append('vxlan%s' % self.vxlan_id)
cmd.append('ifname')
if self.ifname is not None:
cmd.append(self.ifname)
elif self.conn_name is not None:
cmd.append(self.conn_name)
else:
cmd.append('vxlan%s' % self.vxlan_id)
params = {'vxlan.id': self.vxlan_id,
'vxlan.local': self.vxlan_local,
'vxlan.remote': self.vxlan_remote,
'autoconnect': self.bool_to_string(self.autoconnect)
}
for k, v in params.items():
cmd.extend([k, v])
return cmd
def modify_connection_vxlan(self):
cmd = [self.nmcli_bin, 'con', 'mod']
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
else:
cmd.append('vxlan%s' % self.vxlan_id)
params = {'vxlan.id': self.vxlan_id,
'vxlan.local': self.vxlan_local,
'vxlan.remote': self.vxlan_remote,
'autoconnect': self.bool_to_string(self.autoconnect)
}
for k, v in params.items():
cmd.extend([k, v])
return cmd
def create_connection_ipip(self):
cmd = [self.nmcli_bin, 'con', 'add', 'type', 'ip-tunnel', 'mode', 'ipip', 'con-name']
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
elif self.ip_tunnel_dev is not None:
cmd.append('ipip%s' % self.ip_tunnel_dev)
cmd.append('ifname')
if self.ifname is not None:
cmd.append(self.ifname)
elif self.conn_name is not None:
cmd.append(self.conn_name)
else:
cmd.append('ipip%s' % self.ip_tunnel_dev)
if self.ip_tunnel_dev is not None:
cmd.append('dev')
cmd.append(self.ip_tunnel_dev)
params = {'ip-tunnel.local': self.ip_tunnel_local,
'ip-tunnel.remote': self.ip_tunnel_remote,
'autoconnect': self.bool_to_string(self.autoconnect)
}
for k, v in params.items():
cmd.extend([k, v])
return cmd
def modify_connection_ipip(self):
cmd = [self.nmcli_bin, 'con', 'mod']
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
elif self.ip_tunnel_dev is not None:
cmd.append('ipip%s' % self.ip_tunnel_dev)
params = {'ip-tunnel.local': self.ip_tunnel_local,
'ip-tunnel.remote': self.ip_tunnel_remote,
'autoconnect': self.bool_to_string(self.autoconnect)
}
for k, v in params.items():
cmd.extend([k, v])
return cmd
def create_connection_sit(self):
cmd = [self.nmcli_bin, 'con', 'add', 'type', 'ip-tunnel', 'mode', 'sit', 'con-name']
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
elif self.ip_tunnel_dev is not None:
cmd.append('sit%s' % self.ip_tunnel_dev)
cmd.append('ifname')
if self.ifname is not None:
cmd.append(self.ifname)
elif self.conn_name is not None:
cmd.append(self.conn_name)
else:
cmd.append('sit%s' % self.ip_tunnel_dev)
if self.ip_tunnel_dev is not None:
cmd.append('dev')
cmd.append(self.ip_tunnel_dev)
params = {'ip-tunnel.local': self.ip_tunnel_local,
'ip-tunnel.remote': self.ip_tunnel_remote,
'autoconnect': self.bool_to_string(self.autoconnect)
}
for k, v in params.items():
cmd.extend([k, v])
return cmd
def modify_connection_sit(self):
cmd = [self.nmcli_bin, 'con', 'mod']
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
elif self.ip_tunnel_dev is not None:
cmd.append('sit%s' % self.ip_tunnel_dev)
params = {'ip-tunnel.local': self.ip_tunnel_local,
'ip-tunnel.remote': self.ip_tunnel_remote,
'autoconnect': self.bool_to_string(self.autoconnect)
}
for k, v in params.items():
cmd.extend([k, v])
return cmd
def create_connection(self):
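# Dispatch on self.type. For some types (team, bond, ethernet), options
# such as mtu or dns cannot be set at 'con add' time, so the connection
# is created first, then modified, then brought up.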
cmd = []
if self.type == 'team':
if (self.dns4 is not None) or (self.dns6 is not None):
cmd = self.create_connection_team()
self.execute_command(cmd)
cmd = self.modify_connection_team()
self.execute_command(cmd)
return self.up_connection()
elif (self.dns4 is None) or (self.dns6 is None):
cmd = self.create_connection_team()
elif self.type == 'team-slave':
if self.mtu is not None:
cmd = self.create_connection_team_slave()
self.execute_command(cmd)
cmd = self.modify_connection_team_slave()
return self.execute_command(cmd)
else:
cmd = self.create_connection_team_slave()
elif self.type == 'bond':
if (self.mtu is not None) or (self.dns4 is not None) or (self.dns6 is not None):
cmd = self.create_connection_bond()
self.execute_command(cmd)
cmd = self.modify_connection_bond()
self.execute_command(cmd)
return self.up_connection()
else:
cmd = self.create_connection_bond()
elif self.type == 'bond-slave':
cmd = self.create_connection_bond_slave()
elif self.type == 'ethernet':
if (self.mtu is not None) or (self.dns4 is not None) or (self.dns6 is not None):
cmd = self.create_connection_ethernet()
self.execute_command(cmd)
cmd = self.modify_connection_ethernet()
self.execute_command(cmd)
return self.up_connection()
else:
cmd = self.create_connection_ethernet()
elif self.type == 'bridge':
cmd = self.create_connection_bridge()
elif self.type == 'bridge-slave':
cmd = self.create_connection_bridge_slave()
elif self.type == 'vlan':
cmd = self.create_connection_vlan()
elif self.type == 'vxlan':
cmd = self.create_connection_vxlan()
elif self.type == 'ipip':
cmd = self.create_connection_ipip()
elif self.type == 'sit':
cmd = self.create_connection_sit()
elif self.type == 'generic':
cmd = self.create_connection_ethernet(conn_type='generic')
if cmd:
return self.execute_command(cmd)
else:
self.module.fail_json(msg="Type of device or network connection is required "
"while performing 'create' operation. Please specify 'type' as an argument.")
def remove_connection(self):
# self.down_connection()
cmd = [self.nmcli_bin, 'con', 'del', self.conn_name]
return self.execute_command(cmd)
def modify_connection(self):
cmd = []
if self.type == 'team':
cmd = self.modify_connection_team()
elif self.type == 'team-slave':
cmd = self.modify_connection_team_slave()
elif self.type == 'bond':
cmd = self.modify_connection_bond()
elif self.type == 'bond-slave':
cmd = self.modify_connection_bond_slave()
elif self.type == 'ethernet':
cmd = self.modify_connection_ethernet()
elif self.type == 'bridge':
cmd = self.modify_connection_bridge()
elif self.type == 'bridge-slave':
cmd = self.modify_connection_bridge_slave()
elif self.type == 'vlan':
cmd = self.modify_connection_vlan()
elif self.type == 'vxlan':
cmd = self.modify_connection_vxlan()
elif self.type == 'ipip':
cmd = self.modify_connection_ipip()
elif self.type == 'sit':
cmd = self.modify_connection_sit()
elif self.type == 'generic':
cmd = self.modify_connection_ethernet(conn_type='generic')
if cmd:
return self.execute_command(cmd)
else:
self.module.fail_json(msg="Type of device or network connection is required "
"while performing 'modify' operation. Please specify 'type' as an argument.")
def main():
# Parsing argument file
module = AnsibleModule(
argument_spec=dict(
autoconnect=dict(type='bool', default=True),
state=dict(type='str', required=True, choices=['absent', 'present']),
conn_name=dict(type='str', required=True),
master=dict(type='str'),
ifname=dict(type='str'),
type=dict(type='str',
choices=['bond', 'bond-slave', 'bridge', 'bridge-slave', 'ethernet', 'generic', 'ipip', 'sit', 'team', 'team-slave', 'vlan', 'vxlan']),
ip4=dict(type='str'),
gw4=dict(type='str'),
dns4=dict(type='list'),
dns4_search=dict(type='list'),
dhcp_client_id=dict(type='str'),
ip6=dict(type='str'),
gw6=dict(type='str'),
dns6=dict(type='list'),
dns6_search=dict(type='list'),
# Bond Specific vars
mode=dict(type='str', default='balance-rr',
choices=['802.3ad', 'active-backup', 'balance-alb', 'balance-rr', 'balance-tlb', 'balance-xor', 'broadcast']),
miimon=dict(type='int'),
downdelay=dict(type='int'),
updelay=dict(type='int'),
arp_interval=dict(type='int'),
arp_ip_target=dict(type='str'),
primary=dict(type='str'),
# general usage
mtu=dict(type='int'),
mac=dict(type='str'),
# bridge specific vars
stp=dict(type='bool', default=True),
priority=dict(type='int', default=128),
slavepriority=dict(type='int', default=32),
forwarddelay=dict(type='int', default=15),
hellotime=dict(type='int', default=2),
maxage=dict(type='int', default=20),
ageingtime=dict(type='int', default=300),
hairpin=dict(type='bool', default=True),
path_cost=dict(type='int', default=100),
# vlan specific vars
vlanid=dict(type='int'),
vlandev=dict(type='str'),
flags=dict(type='str'),
ingress=dict(type='str'),
egress=dict(type='str'),
# vxlan specific vars
vxlan_id=dict(type='int'),
vxlan_local=dict(type='str'),
vxlan_remote=dict(type='str'),
# ip-tunnel specific vars
ip_tunnel_dev=dict(type='str'),
ip_tunnel_local=dict(type='str'),
ip_tunnel_remote=dict(type='str'),
),
supports_check_mode=True,
)
if not HAVE_DBUS:
module.fail_json(msg=missing_required_lib('dbus'), exception=DBUS_IMP_ERR)
if not HAVE_NM_CLIENT:
module.fail_json(msg=missing_required_lib('NetworkManager glib API'), exception=NM_CLIENT_IMP_ERR)
nmcli = Nmcli(module)
(rc, out, err) = (None, '', '')
result = {'conn_name': nmcli.conn_name, 'state': nmcli.state}
# check for issues
if nmcli.conn_name is None:
nmcli.module.fail_json(msg="Please specify a name for the connection")
# team-slave checks
if nmcli.type == 'team-slave' and nmcli.master is None:
nmcli.module.fail_json(msg="Please specify a name for the master")
if nmcli.type == 'team-slave' and nmcli.ifname is None:
nmcli.module.fail_json(msg="Please specify an interface name for the connection")
if nmcli.state == 'absent':
if nmcli.connection_exists():
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = nmcli.down_connection()
(rc, out, err) = nmcli.remove_connection()
if rc != 0:
module.fail_json(name=('No Connection named %s exists' % nmcli.conn_name), msg=err, rc=rc)
elif nmcli.state == 'present':
if nmcli.connection_exists():
# modify connection (note: this function is check mode aware)
# result['Connection']=('Connection %s of Type %s is not being added' % (nmcli.conn_name, nmcli.type))
result['Exists'] = 'Connections do exist so we are modifying them'
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = nmcli.modify_connection()
if not nmcli.connection_exists():
result['Connection'] = ('Connection %s of Type %s is being added' % (nmcli.conn_name, nmcli.type))
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = nmcli.create_connection()
if rc is not None and rc != 0:
module.fail_json(name=nmcli.conn_name, msg=err, rc=rc)
if rc is None:
result['changed'] = False
else:
result['changed'] = True
if out:
result['stdout'] = out
if err:
result['stderr'] = err
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,291 |
libnm-glib-dev used by nmcli is deprecated in Debian 10
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The `nmcli` module uses a deprecated library, at least on Debian Buster.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
nmcli
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.3
config file = /workspace/ansible.cfg
configured module search path = ['/workspace/library']
ansible python module location = /home/vscode/pyvenv/lib/python3.7/site-packages/ansible
executable location = /home/vscode/pyvenv/bin/ansible
python version = 3.7.3 (default, Apr 3 2019, 05:39:12) [GCC 8.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_SSH_ARGS(/workspace/ansible.cfg) = -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null
DEFAULT_ACTION_PLUGIN_PATH(/workspace/ansible.cfg) = ['/workspace/plugins/action']
DEFAULT_MODULE_PATH(/workspace/ansible.cfg) = ['/workspace/library']
DEFAULT_REMOTE_USER(/workspace/ansible.cfg) = root
DEFAULT_ROLES_PATH(/workspace/ansible.cfg) = ['/workspace/roles']
DEFAULT_STDOUT_CALLBACK(/workspace/ansible.cfg) = debug
DISPLAY_SKIPPED_HOSTS(/workspace/ansible.cfg) = False
HOST_KEY_CHECKING(/workspace/ansible.cfg) = False
INTERPRETER_PYTHON(/workspace/ansible.cfg) = /usr/bin/python3
RETRY_FILES_ENABLED(/workspace/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Debian 10.1
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: localhost
connection: local
vars:
nm_packages:
- dbus
- libnm-dev
- network-manager
- network-manager-dev
- python3-dbus
- python3-gi
tasks:
- name: packages
apt:
force_apt_get: true
name: "{{ nm_packages }}"
state: present
- name: interface
nmcli:
conn_name: "{{ ansible_default_ipv4.interface }}"
type: ethernet
ifname: "{{ ansible_default_ipv4.interface }}"
ip4: 10.0.0.10/24
gw4: 10.0.0.1
dns4: 10.0.0.1
autoconnect: true
state: present
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
It should have configured the network interface through nmcli.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [network : interface] *****************************************************
'False' (type string). If this does not look like what you expect, quote the entire value to ensure it does not change.
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ValueError: Namespace NMClient not available
fatal: [default]: FAILED! => {
"changed": false
}
MSG:
Failed to import the required Python library (NetworkManager glib API) on ubicast-dga-aio's Python /usr/bin/python3. Please read module documentation and install in the appropriate location
```
|
https://github.com/ansible/ansible/issues/63291
|
https://github.com/ansible/ansible/pull/65726
|
7ee3103a86f0179faead92d2481ff175d9a69747
|
663171e21820f1696b93c3f182626b0fd006b61b
| 2019-10-09T17:58:16Z |
python
| 2019-12-21T05:58:58Z |
lib/ansible/modules/net_tools/nmcli.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2015, Chris Long <[email protected]> <[email protected]>
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = r'''
---
module: nmcli
author:
- Chris Long (@alcamie101)
short_description: Manage Networking
requirements:
- dbus
- NetworkManager-glib
- nmcli
version_added: "2.0"
description:
- Manage network devices. Create, modify, and manage various connection and device types, e.g. ethernet, teams, bonds, vlans, etc.
- 'On CentOS and Fedora like systems, the requirements can be met by installing the following packages: NetworkManager-glib,
libnm-qt-devel.x86_64, nm-connection-editor.x86_64, libsemanage-python, policycoreutils-python.'
- 'On Ubuntu and Debian like systems, the requirements can be met by installing the following packages: network-manager,
python-dbus (or python3-dbus, depending on the Python version in use), libnm-glib-dev.'
- 'On openSUSE, the requirements can be met by installing the following packages: NetworkManager, python2-dbus-python (or
python3-dbus-python), typelib-1_0-NMClient-1_0 and typelib-1_0-NetworkManager-1_0.'
options:
state:
description:
- Whether the device should exist or not, taking action if the state is different from what is stated.
type: str
required: true
choices: [ absent, present ]
autoconnect:
description:
- Whether the connection should start on boot.
- Whether the connection profile can be automatically activated
type: bool
default: yes
conn_name:
description:
- The name used to call the connection. Pattern is <type>[-<ifname>][-<num>].
type: str
required: true
ifname:
description:
- The interface to bind the connection to.
- The connection will only be applicable to this interface name.
- A special value of C('*') can be used for interface-independent connections.
- The ifname argument is mandatory for all connection types except bond, team, bridge and vlan.
- This parameter defaults to C(conn_name) when left unset.
type: str
type:
description:
- This is the type of device or network connection that you wish to create or modify.
- Type C(generic) is added in Ansible 2.5.
type: str
choices: [ bond, bond-slave, bridge, bridge-slave, ethernet, generic, ipip, sit, team, team-slave, vlan, vxlan ]
mode:
description:
- This is the mode of the bond; it is only applicable to bond connections.
type: str
choices: [ 802.3ad, active-backup, balance-alb, balance-rr, balance-tlb, balance-xor, broadcast ]
default: balance-rr
master:
description:
- The master (ifname, connection UUID, or conn_name) of the bridge, team, or bond master connection profile.
type: str
ip4:
description:
- The IPv4 address to this interface.
- Use the format C(192.0.2.24/24).
type: str
gw4:
description:
- The IPv4 gateway for this interface.
- Use the format C(192.0.2.1).
type: str
dns4:
description:
- A list of up to 3 dns servers.
- IPv4 format e.g. to add two IPv4 DNS server addresses, use C(192.0.2.53 198.51.100.53).
type: list
dns4_search:
description:
- A list of DNS search domains.
type: list
version_added: '2.5'
ip6:
description:
- The IPv6 address to this interface.
- Use the format C(abbe::cafe).
type: str
gw6:
description:
- The IPv6 gateway for this interface.
- Use the format C(2001:db8::1).
type: str
dns6:
description:
- A list of up to 3 dns servers.
- IPv6 format e.g. to add two IPv6 DNS server addresses, use C(2001:4860:4860::8888 2001:4860:4860::8844).
type: list
dns6_search:
description:
- A list of DNS search domains.
type: list
version_added: '2.5'
mtu:
description:
- The connection MTU, e.g. 9000. This can't be applied when creating the interface and is done once the interface has been created.
- Can be used when modifying Team, VLAN, Ethernet (Future plans to implement wifi, pppoe, infiniband)
- This parameter defaults to C(1500) when unset.
type: int
dhcp_client_id:
description:
- DHCP Client Identifier sent to the DHCP server.
type: str
version_added: "2.5"
primary:
description:
- This is only used with bond and is the primary interface name (for "active-backup" mode); this is usually the 'ifname'.
type: str
miimon:
description:
- This is only used with bond - miimon.
- This parameter defaults to C(100) when unset.
type: int
downdelay:
description:
- This is only used with bond - downdelay.
type: int
updelay:
description:
- This is only used with bond - updelay.
type: int
arp_interval:
description:
- This is only used with bond - ARP interval.
type: int
arp_ip_target:
description:
- This is only used with bond - ARP IP target.
type: str
stp:
description:
- This is only used with bridge and controls whether Spanning Tree Protocol (STP) is enabled for this bridge.
type: bool
default: yes
priority:
description:
- This is only used with 'bridge' - sets STP priority.
type: int
default: 128
forwarddelay:
description:
- This is only used with bridge - [forward-delay <2-30>] STP forwarding delay, in seconds.
type: int
default: 15
hellotime:
description:
- This is only used with bridge - [hello-time <1-10>] STP hello time, in seconds.
type: int
default: 2
maxage:
description:
- This is only used with bridge - [max-age <6-42>] STP maximum message age, in seconds.
type: int
default: 20
ageingtime:
description:
- This is only used with bridge - [ageing-time <0-1000000>] the Ethernet MAC address aging time, in seconds.
type: int
default: 300
mac:
description:
- This is only used with bridge - MAC address of the bridge.
- Note this requires a recent kernel feature, originally introduced in 3.15 upstream kernel.
slavepriority:
description:
- This is only used with 'bridge-slave' - [<0-63>] - STP priority of this slave.
type: int
default: 32
path_cost:
description:
- This is only used with 'bridge-slave' - [<1-65535>] - STP port cost for destinations via this slave.
type: int
default: 100
hairpin:
description:
- This is only used with 'bridge-slave' - 'hairpin mode' for the slave, which allows frames to be sent back out through the slave the
frame was received on.
type: bool
default: yes
vlanid:
description:
- This is only used with VLAN - VLAN ID in range <0-4095>.
type: int
vlandev:
description:
- This is only used with VLAN - parent device this VLAN is on, can use ifname.
type: str
flags:
description:
- This is only used with VLAN - flags.
type: str
ingress:
description:
- This is only used with VLAN - VLAN ingress priority mapping.
type: str
egress:
description:
- This is only used with VLAN - VLAN egress priority mapping.
type: str
vxlan_id:
description:
- This is only used with VXLAN - VXLAN ID.
type: int
version_added: "2.8"
vxlan_remote:
description:
- This is only used with VXLAN - VXLAN destination IP address.
type: str
version_added: "2.8"
vxlan_local:
description:
- This is only used with VXLAN - VXLAN local IP address.
type: str
version_added: "2.8"
ip_tunnel_dev:
description:
- This is used with IPIP/SIT - parent device this IPIP/SIT tunnel, can use ifname.
type: str
version_added: "2.8"
ip_tunnel_remote:
description:
- This is used with IPIP/SIT - IPIP/SIT destination IP address.
type: str
version_added: "2.8"
ip_tunnel_local:
description:
- This is used with IPIP/SIT - IPIP/SIT local IP address.
type: str
version_added: "2.8"
'''
EXAMPLES = r'''
# These examples are using the following inventory:
#
# ## Directory layout:
#
# |_/inventory/cloud-hosts
# | /group_vars/openstack-stage.yml
# | /host_vars/controller-01.openstack.host.com
# | /host_vars/controller-02.openstack.host.com
# |_/playbook/library/nmcli.py
# | /playbook-add.yml
# | /playbook-del.yml
# ```
#
# ## inventory examples
# ### groups_vars
# ```yml
# ---
# #devops_os_define_network
# storage_gw: "192.0.2.254"
# external_gw: "198.51.100.254"
# tenant_gw: "203.0.113.254"
#
# #Team vars
# nmcli_team:
# - conn_name: tenant
# ip4: '{{ tenant_ip }}'
# gw4: '{{ tenant_gw }}'
# - conn_name: external
# ip4: '{{ external_ip }}'
# gw4: '{{ external_gw }}'
# - conn_name: storage
# ip4: '{{ storage_ip }}'
# gw4: '{{ storage_gw }}'
# nmcli_team_slave:
# - conn_name: em1
# ifname: em1
# master: tenant
# - conn_name: em2
# ifname: em2
# master: tenant
# - conn_name: p2p1
# ifname: p2p1
# master: storage
# - conn_name: p2p2
# ifname: p2p2
# master: external
#
# #bond vars
# nmcli_bond:
# - conn_name: tenant
# ip4: '{{ tenant_ip }}'
# gw4: ''
# mode: balance-rr
# - conn_name: external
# ip4: '{{ external_ip }}'
# gw4: ''
# mode: balance-rr
# - conn_name: storage
# ip4: '{{ storage_ip }}'
# gw4: '{{ storage_gw }}'
# mode: balance-rr
# nmcli_bond_slave:
# - conn_name: em1
# ifname: em1
# master: tenant
# - conn_name: em2
# ifname: em2
# master: tenant
# - conn_name: p2p1
# ifname: p2p1
# master: storage
# - conn_name: p2p2
# ifname: p2p2
# master: external
#
# #ethernet vars
# nmcli_ethernet:
# - conn_name: em1
# ifname: em1
# ip4: '{{ tenant_ip }}'
# gw4: '{{ tenant_gw }}'
# - conn_name: em2
# ifname: em2
# ip4: '{{ tenant_ip1 }}'
# gw4: '{{ tenant_gw }}'
# - conn_name: p2p1
# ifname: p2p1
# ip4: '{{ storage_ip }}'
# gw4: '{{ storage_gw }}'
# - conn_name: p2p2
# ifname: p2p2
# ip4: '{{ external_ip }}'
# gw4: '{{ external_gw }}'
# ```
#
# ### host_vars
# ```yml
# ---
# storage_ip: "192.0.2.91/23"
# external_ip: "198.51.100.23/21"
# tenant_ip: "203.0.113.77/23"
# ```
## playbook-add.yml example
---
- hosts: openstack-stage
remote_user: root
tasks:
- name: install needed network manager libs
package:
name:
- NetworkManager-glib
- nm-connection-editor
- libsemanage-python
- policycoreutils-python
state: present
##### Working with all cloud nodes - Teaming
- name: Try nmcli add team - conn_name only & ip4 gw4
nmcli:
type: team
conn_name: '{{ item.conn_name }}'
ip4: '{{ item.ip4 }}'
gw4: '{{ item.gw4 }}'
state: present
with_items:
- '{{ nmcli_team }}'
- name: Try nmcli add teams-slave
nmcli:
type: team-slave
conn_name: '{{ item.conn_name }}'
ifname: '{{ item.ifname }}'
master: '{{ item.master }}'
state: present
with_items:
- '{{ nmcli_team_slave }}'
###### Working with all cloud nodes - Bonding
- name: Try nmcli add bond - conn_name only & ip4 gw4 mode
nmcli:
type: bond
conn_name: '{{ item.conn_name }}'
ip4: '{{ item.ip4 }}'
gw4: '{{ item.gw4 }}'
mode: '{{ item.mode }}'
state: present
with_items:
- '{{ nmcli_bond }}'
- name: Try nmcli add bond-slave
nmcli:
type: bond-slave
conn_name: '{{ item.conn_name }}'
ifname: '{{ item.ifname }}'
master: '{{ item.master }}'
state: present
with_items:
- '{{ nmcli_bond_slave }}'
##### Working with all cloud nodes - Ethernet
- name: Try nmcli add Ethernet - conn_name only & ip4 gw4
nmcli:
type: ethernet
conn_name: '{{ item.conn_name }}'
ip4: '{{ item.ip4 }}'
gw4: '{{ item.gw4 }}'
state: present
with_items:
- '{{ nmcli_ethernet }}'
## playbook-del.yml example
- hosts: openstack-stage
remote_user: root
tasks:
- name: Try nmcli del team - multiple
nmcli:
conn_name: '{{ item.conn_name }}'
state: absent
with_items:
- conn_name: em1
- conn_name: em2
- conn_name: p1p1
- conn_name: p1p2
- conn_name: p2p1
- conn_name: p2p2
- conn_name: tenant
- conn_name: storage
- conn_name: external
- conn_name: team-em1
- conn_name: team-em2
- conn_name: team-p1p1
- conn_name: team-p1p2
- conn_name: team-p2p1
- conn_name: team-p2p2
- name: Add an Ethernet connection with static IP configuration
nmcli:
conn_name: my-eth1
ifname: eth1
type: ethernet
ip4: 192.0.2.100/24
gw4: 192.0.2.1
state: present
- name: Add a Team connection with static IP configuration
nmcli:
conn_name: my-team1
ifname: my-team1
type: team
ip4: 192.0.2.100/24
gw4: 192.0.2.1
state: present
autoconnect: yes
- name: Optionally, at the same time specify IPv6 addresses for the device
nmcli:
conn_name: my-eth1
ifname: eth1
type: ethernet
ip4: 192.0.2.100/24
gw4: 192.0.2.1
ip6: 2001:db8::cafe
gw6: 2001:db8::1
state: present
- name: Add two IPv4 DNS server addresses
nmcli:
conn_name: my-eth1
type: ethernet
dns4:
- 192.0.2.53
- 198.51.100.53
state: present
- name: Make a profile usable for all compatible Ethernet interfaces
nmcli:
type: ethernet
conn_name: my-eth1
ifname: '*'
state: present
- name: Change the property of a setting e.g. MTU
nmcli:
conn_name: my-eth1
mtu: 9000
type: ethernet
state: present
- name: Add VxLan
nmcli:
type: vxlan
conn_name: vxlan_test1
vxlan_id: 16
vxlan_local: 192.168.1.2
vxlan_remote: 192.168.1.5
- name: Add ipip
nmcli:
type: ipip
conn_name: ipip_test1
ip_tunnel_dev: eth0
ip_tunnel_local: 192.168.1.2
ip_tunnel_remote: 192.168.1.5
- name: Add sit
nmcli:
type: sit
conn_name: sit_test1
ip_tunnel_dev: eth0
ip_tunnel_local: 192.168.1.2
ip_tunnel_remote: 192.168.1.5
# nmcli exits with status 0 if it succeeds and exits with a status greater
# than zero when there is a failure. The following list of status codes may be
# returned:
#
# - 0 Success - indicates the operation succeeded
# - 1 Unknown or unspecified error
# - 2 Invalid user input, wrong nmcli invocation
# - 3 Timeout expired (see --wait option)
# - 4 Connection activation failed
# - 5 Connection deactivation failed
# - 6 Disconnecting device failed
# - 7 Connection deletion failed
# - 8 NetworkManager is not running
# - 9 nmcli and NetworkManager versions mismatch
# - 10 Connection, device, or access point does not exist.
'''
RETURN = r"""#
"""
import traceback
DBUS_IMP_ERR = None
try:
import dbus
HAVE_DBUS = True
except ImportError:
DBUS_IMP_ERR = traceback.format_exc()
HAVE_DBUS = False
NM_CLIENT_IMP_ERR = None
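# NOTE: the 'NMClient' and 'NetworkManager' introspection namespaces used
# below come from the deprecated libnm-glib; newer distributions (e.g.
# Debian 10) ship only the modern 'NM' namespace from libnm, so this
# import can raise ValueError there.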
try:
import gi
gi.require_version('NMClient', '1.0')
gi.require_version('NetworkManager', '1.0')
from gi.repository import NetworkManager, NMClient
HAVE_NM_CLIENT = True
except (ImportError, ValueError):
NM_CLIENT_IMP_ERR = traceback.format_exc()
HAVE_NM_CLIENT = False
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
class Nmcli(object):
"""
This is the generic nmcli manipulation class that is subclassed based on platform.
A subclass may wish to override the following action methods:-
- create_connection()
- delete_connection()
- modify_connection()
- show_connection()
- up_connection()
- down_connection()
All subclasses MUST define platform and distribution (which may be None).
"""
platform = 'Generic'
distribution = None
if HAVE_DBUS:
bus = dbus.SystemBus()
# The following is going to be used in dbus code
DEVTYPES = {
1: "Ethernet",
2: "Wi-Fi",
5: "Bluetooth",
6: "OLPC",
7: "WiMAX",
8: "Modem",
9: "InfiniBand",
10: "Bond",
11: "VLAN",
12: "ADSL",
13: "Bridge",
14: "Generic",
15: "Team",
16: "VxLan",
17: "ipip",
18: "sit",
}
STATES = {
0: "Unknown",
10: "Unmanaged",
20: "Unavailable",
30: "Disconnected",
40: "Prepare",
50: "Config",
60: "Need Auth",
70: "IP Config",
80: "IP Check",
90: "Secondaries",
100: "Activated",
110: "Deactivating",
120: "Failed"
}
def __init__(self, module):
self.module = module
self.state = module.params['state']
self.autoconnect = module.params['autoconnect']
self.conn_name = module.params['conn_name']
self.master = module.params['master']
self.ifname = module.params['ifname']
self.type = module.params['type']
self.ip4 = module.params['ip4']
self.gw4 = module.params['gw4']
self.dns4 = ' '.join(module.params['dns4']) if module.params.get('dns4') else None
self.dns4_search = ' '.join(module.params['dns4_search']) if module.params.get('dns4_search') else None
self.ip6 = module.params['ip6']
self.gw6 = module.params['gw6']
self.dns6 = ' '.join(module.params['dns6']) if module.params.get('dns6') else None
self.dns6_search = ' '.join(module.params['dns6_search']) if module.params.get('dns6_search') else None
self.mtu = module.params['mtu']
self.stp = module.params['stp']
self.priority = module.params['priority']
self.mode = module.params['mode']
self.miimon = module.params['miimon']
self.primary = module.params['primary']
self.downdelay = module.params['downdelay']
self.updelay = module.params['updelay']
self.arp_interval = module.params['arp_interval']
self.arp_ip_target = module.params['arp_ip_target']
self.slavepriority = module.params['slavepriority']
self.forwarddelay = module.params['forwarddelay']
self.hellotime = module.params['hellotime']
self.maxage = module.params['maxage']
self.ageingtime = module.params['ageingtime']
self.hairpin = module.params['hairpin']
self.path_cost = module.params['path_cost']
self.mac = module.params['mac']
self.vlanid = module.params['vlanid']
self.vlandev = module.params['vlandev']
self.flags = module.params['flags']
self.ingress = module.params['ingress']
self.egress = module.params['egress']
self.vxlan_id = module.params['vxlan_id']
self.vxlan_local = module.params['vxlan_local']
self.vxlan_remote = module.params['vxlan_remote']
self.ip_tunnel_dev = module.params['ip_tunnel_dev']
self.ip_tunnel_local = module.params['ip_tunnel_local']
self.ip_tunnel_remote = module.params['ip_tunnel_remote']
self.nmcli_bin = self.module.get_bin_path('nmcli', True)
self.dhcp_client_id = module.params['dhcp_client_id']
def execute_command(self, cmd, use_unsafe_shell=False, data=None):
return self.module.run_command(cmd, use_unsafe_shell=use_unsafe_shell, data=data)
def merge_secrets(self, proxy, config, setting_name):
try:
# returns a dict of dicts mapping name::setting, where setting is a dict
# mapping key::value. Each member of the 'setting' dict is a secret
secrets = proxy.GetSecrets(setting_name)
# Copy the secrets into our connection config
for setting in secrets:
for key in secrets[setting]:
config[setting_name][key] = secrets[setting][key]
except Exception:
pass
def dict_to_string(self, d):
# Try to trivially translate a dictionary's elements into nice string
# formatting.
dstr = ""
for key in d:
val = d[key]
str_val = ""
add_string = True
if isinstance(val, dbus.Array):
for elt in val:
if isinstance(elt, dbus.Byte):
str_val += "%s " % int(elt)
elif isinstance(elt, dbus.String):
str_val += "%s" % elt
elif isinstance(val, dbus.Dictionary):
dstr += self.dict_to_string(val)
add_string = False
else:
str_val = val
if add_string:
dstr += "%s: %s\n" % (key, str_val)
return dstr
def connection_to_string(self, config):
# dump a connection configuration to use in list_connection_info
setting_list = []
for setting_name in config:
setting_list.append(self.dict_to_string(config[setting_name]))
return setting_list
@staticmethod
def bool_to_string(boolean):
if boolean:
return "yes"
else:
return "no"
def list_connection_info(self):
# Ask the settings service for the list of connections it provides
bus = dbus.SystemBus()
service_name = "org.freedesktop.NetworkManager"
settings = None
try:
proxy = bus.get_object(service_name, "/org/freedesktop/NetworkManager/Settings")
settings = dbus.Interface(proxy, "org.freedesktop.NetworkManager.Settings")
except dbus.exceptions.DBusException as e:
self.module.fail_json(msg="Unable to read Network Manager settings from DBus system bus: %s" % to_native(e),
details="Please check if NetworkManager is installed and"
"service network-manager is started.")
connection_paths = settings.ListConnections()
connection_list = []
# List each connection's name, UUID, and type
for path in connection_paths:
con_proxy = bus.get_object(service_name, path)
settings_connection = dbus.Interface(con_proxy, "org.freedesktop.NetworkManager.Settings.Connection")
config = settings_connection.GetSettings()
# Now get secrets too; we grab the secrets for each type of connection
# (since there isn't a "get all secrets" call because most of the time
# you only need 'wifi' secrets or '802.1x' secrets, not everything) and
# merge that into the configuration data - To use at a later stage
self.merge_secrets(settings_connection, config, '802-11-wireless')
self.merge_secrets(settings_connection, config, '802-11-wireless-security')
self.merge_secrets(settings_connection, config, '802-1x')
self.merge_secrets(settings_connection, config, 'gsm')
self.merge_secrets(settings_connection, config, 'cdma')
self.merge_secrets(settings_connection, config, 'ppp')
# Get the details of the 'connection' setting
s_con = config['connection']
connection_list.append(s_con['id'])
connection_list.append(s_con['uuid'])
connection_list.append(s_con['type'])
connection_list.append(self.connection_to_string(config))
return connection_list
def connection_exists(self):
# we are going to use name and type in this instance to find if that connection exists and is of type x
connections = self.list_connection_info()
for con_item in connections:
if self.conn_name == con_item:
return True
return False
def down_connection(self):
cmd = [self.nmcli_bin, 'con', 'down', self.conn_name]
return self.execute_command(cmd)
def up_connection(self):
cmd = [self.nmcli_bin, 'con', 'up', self.conn_name]
return self.execute_command(cmd)
def create_connection_team(self):
cmd = [self.nmcli_bin, 'con', 'add', 'type', 'team', 'con-name']
# format for creating team interface
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
cmd.append('ifname')
if self.ifname is not None:
cmd.append(self.ifname)
elif self.conn_name is not None:
cmd.append(self.conn_name)
options = {
'ip4': self.ip4,
'gw4': self.gw4,
'ip6': self.ip6,
'gw6': self.gw6,
'autoconnect': self.bool_to_string(self.autoconnect),
'ipv4.dns-search': self.dns4_search,
'ipv6.dns-search': self.dns6_search,
'ipv4.dhcp-client-id': self.dhcp_client_id,
}
for key, value in options.items():
if value is not None:
cmd.extend([key, value])
return cmd
def modify_connection_team(self):
cmd = [self.nmcli_bin, 'con', 'mod', self.conn_name]
options = {
'ipv4.address': self.ip4,
'ipv4.gateway': self.gw4,
'ipv4.dns': self.dns4,
'ipv6.address': self.ip6,
'ipv6.gateway': self.gw6,
'ipv6.dns': self.dns6,
'autoconnect': self.bool_to_string(self.autoconnect),
'ipv4.dns-search': self.dns4_search,
'ipv6.dns-search': self.dns6_search,
'ipv4.dhcp-client-id': self.dhcp_client_id,
}
for key, value in options.items():
if value is not None:
cmd.extend([key, value])
return cmd
def create_connection_team_slave(self):
cmd = [self.nmcli_bin, 'connection', 'add', 'type', self.type, 'con-name']
# format for creating team-slave interface
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
cmd.append('ifname')
if self.ifname is not None:
cmd.append(self.ifname)
elif self.conn_name is not None:
cmd.append(self.conn_name)
cmd.append('master')
if self.conn_name is not None:
cmd.append(self.master)
return cmd
def modify_connection_team_slave(self):
cmd = [self.nmcli_bin, 'con', 'mod', self.conn_name, 'connection.master', self.master]
# format for modifying team-slave interface
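# e.g. nmcli con mod em1 connection.master tenant 802-3-ethernet.mtu 9000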
if self.mtu is not None:
cmd.append('802-3-ethernet.mtu')
cmd.append(str(self.mtu))
return cmd
def create_connection_bond(self):
cmd = [self.nmcli_bin, 'con', 'add', 'type', 'bond', 'con-name']
# format for creating bond interface
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
cmd.append('ifname')
if self.ifname is not None:
cmd.append(self.ifname)
elif self.conn_name is not None:
cmd.append(self.conn_name)
options = {
'mode': self.mode,
'ip4': self.ip4,
'gw4': self.gw4,
'ip6': self.ip6,
'gw6': self.gw6,
'autoconnect': self.bool_to_string(self.autoconnect),
'ipv4.dns-search': self.dns4_search,
'ipv6.dns-search': self.dns6_search,
'miimon': self.miimon,
'downdelay': self.downdelay,
'updelay': self.updelay,
'arp-interval': self.arp_interval,
'arp-ip-target': self.arp_ip_target,
'primary': self.primary,
'ipv4.dhcp-client-id': self.dhcp_client_id,
}
for key, value in options.items():
if value is not None:
cmd.extend([key, value])
return cmd
def modify_connection_bond(self):
cmd = [self.nmcli_bin, 'con', 'mod', self.conn_name]
# format for modifying bond interface
options = {
'ipv4.address': self.ip4,
'ipv4.gateway': self.gw4,
'ipv4.dns': self.dns4,
'ipv6.address': self.ip6,
'ipv6.gateway': self.gw6,
'ipv6.dns': self.dns6,
'autoconnect': self.bool_to_string(self.autoconnect),
'ipv4.dns-search': self.dns4_search,
'ipv6.dns-search': self.dns6_search,
'miimon': self.miimon,
'downdelay': self.downdelay,
'updelay': self.updelay,
'arp-interval': self.arp_interval,
'arp-ip-target': self.arp_ip_target,
'ipv4.dhcp-client-id': self.dhcp_client_id,
}
for key, value in options.items():
if value is not None:
cmd.extend([key, value])
return cmd
def create_connection_bond_slave(self):
cmd = [self.nmcli_bin, 'connection', 'add', 'type', 'bond-slave', 'con-name']
# format for creating bond-slave interface
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
cmd.append('ifname')
if self.ifname is not None:
cmd.append(self.ifname)
elif self.conn_name is not None:
cmd.append(self.conn_name)
cmd.append('master')
if self.conn_name is not None:
cmd.append(self.master)
return cmd
def modify_connection_bond_slave(self):
cmd = [self.nmcli_bin, 'con', 'mod', self.conn_name, 'connection.master', self.master]
# format for modifying bond-slave interface
return cmd
def create_connection_ethernet(self, conn_type='ethernet'):
# format for creating ethernet interface
# To add an Ethernet connection with static IP configuration, issue a command as follows
# - nmcli: name=add conn_name=my-eth1 ifname=eth1 type=ethernet ip4=192.0.2.100/24 gw4=192.0.2.1 state=present
# nmcli con add con-name my-eth1 ifname eth1 type ethernet ip4 192.0.2.100/24 gw4 192.0.2.1
cmd = [self.nmcli_bin, 'con', 'add', 'type']
if conn_type == 'ethernet':
cmd.append('ethernet')
elif conn_type == 'generic':
cmd.append('generic')
cmd.append('con-name')
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
cmd.append('ifname')
if self.ifname is not None:
cmd.append(self.ifname)
elif self.conn_name is not None:
cmd.append(self.conn_name)
options = {
'ip4': self.ip4,
'gw4': self.gw4,
'ip6': self.ip6,
'gw6': self.gw6,
'autoconnect': self.bool_to_string(self.autoconnect),
'ipv4.dns-search': self.dns4_search,
'ipv6.dns-search': self.dns6_search,
'ipv4.dhcp-client-id': self.dhcp_client_id,
}
for key, value in options.items():
if value is not None:
cmd.extend([key, value])
return cmd
def modify_connection_ethernet(self, conn_type='ethernet'):
cmd = [self.nmcli_bin, 'con', 'mod', self.conn_name]
# format for modifying ethernet interface
# To modify an Ethernet connection with static IP configuration, issue a command as follows
# - nmcli: conn_name=my-eth1 ifname=eth1 type=ethernet ip4=192.0.2.100/24 gw4=192.0.2.1 state=present
# nmcli con mod con-name my-eth1 ifname eth1 type ethernet ipv4.address 192.0.2.100/24 ipv4.gateway 192.0.2.1
options = {
'ipv4.address': self.ip4,
'ipv4.gateway': self.gw4,
'ipv4.dns': self.dns4,
'ipv6.address': self.ip6,
'ipv6.gateway': self.gw6,
'ipv6.dns': self.dns6,
'autoconnect': self.bool_to_string(self.autoconnect),
'ipv4.dns-search': self.dns4_search,
'ipv6.dns-search': self.dns6_search,
'802-3-ethernet.mtu': self.mtu,
'ipv4.dhcp-client-id': self.dhcp_client_id,
}
for key, value in options.items():
if value is not None:
if key == '802-3-ethernet.mtu' and conn_type != 'ethernet':
continue
cmd.extend([key, value])
return cmd
def create_connection_bridge(self):
# format for creating bridge interface
# To add a Bridge connection with static IP configuration, issue a command as follows
# - nmcli: name=add conn_name=my-eth1 ifname=eth1 type=bridge ip4=192.0.2.100/24 gw4=192.0.2.1 state=present
# nmcli con add con-name my-eth1 ifname eth1 type bridge ip4 192.0.2.100/24 gw4 192.0.2.1
cmd = [self.nmcli_bin, 'con', 'add', 'type', 'bridge', 'con-name']
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
cmd.append('ifname')
if self.ifname is not None:
cmd.append(self.ifname)
elif self.conn_name is not None:
cmd.append(self.conn_name)
options = {
'ip4': self.ip4,
'gw4': self.gw4,
'ip6': self.ip6,
'gw6': self.gw6,
'autoconnect': self.bool_to_string(self.autoconnect),
'bridge.ageing-time': self.ageingtime,
'bridge.forward-delay': self.forwarddelay,
'bridge.hello-time': self.hellotime,
'bridge.mac-address': self.mac,
'bridge.max-age': self.maxage,
'bridge.priority': self.priority,
'bridge.stp': self.bool_to_string(self.stp)
}
for key, value in options.items():
if value is not None:
cmd.extend([key, value])
return cmd
def modify_connection_bridge(self):
# format for modifying bridge interface
# To modify a Bridge connection with static IP configuration, issue a command as follows
# - nmcli: name=mod conn_name=my-eth1 ifname=eth1 type=bridge ip4=192.0.2.100/24 gw4=192.0.2.1 state=present
# nmcli con mod my-eth1 ifname eth1 type bridge ip4 192.0.2.100/24 gw4 192.0.2.1
cmd = [self.nmcli_bin, 'con', 'mod', self.conn_name]
options = {
'ipv4.address': self.ip4,
'ipv4.gateway': self.gw4,
'ipv6.address': self.ip6,
'ipv6.gateway': self.gw6,
'autoconnect': self.bool_to_string(self.autoconnect),
'bridge.ageing-time': self.ageingtime,
'bridge.forward-delay': self.forwarddelay,
'bridge.hello-time': self.hellotime,
'bridge.mac-address': self.mac,
'bridge.max-age': self.maxage,
'bridge.priority': self.priority,
'bridge.stp': self.bool_to_string(self.stp)
}
for key, value in options.items():
if value is not None:
cmd.extend([key, value])
return cmd
def create_connection_bridge_slave(self):
# format for creating bridge-slave interface
cmd = [self.nmcli_bin, 'con', 'add', 'type', 'bridge-slave', 'con-name']
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
cmd.append('ifname')
if self.ifname is not None:
cmd.append(self.ifname)
elif self.conn_name is not None:
cmd.append(self.conn_name)
options = {
'master': self.master,
'bridge-port.path-cost': self.path_cost,
'bridge-port.hairpin': self.bool_to_string(self.hairpin),
'bridge-port.priority': self.slavepriority,
}
for key, value in options.items():
if value is not None:
cmd.extend([key, value])
return cmd
def modify_connection_bridge_slave(self):
# format for modifying bridge-slave interface
cmd = [self.nmcli_bin, 'con', 'mod', self.conn_name]
options = {
'master': self.master,
'bridge-port.path-cost': self.path_cost,
'bridge-port.hairpin': self.bool_to_string(self.hairpin),
'bridge-port.priority': self.slavepriority,
}
for key, value in options.items():
if value is not None:
cmd.extend([key, value])
return cmd
def create_connection_vlan(self):
cmd = [self.nmcli_bin]
cmd.append('con')
cmd.append('add')
cmd.append('type')
cmd.append('vlan')
cmd.append('con-name')
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
else:
cmd.append('vlan%s' % self.vlanid)
cmd.append('ifname')
if self.ifname is not None:
cmd.append(self.ifname)
elif self.conn_name is not None:
cmd.append(self.conn_name)
else:
cmd.append('vlan%s' % self.vlanid)
params = {'dev': self.vlandev,
'id': str(self.vlanid),
'ip4': self.ip4 or '',
'gw4': self.gw4 or '',
'ip6': self.ip6 or '',
'gw6': self.gw6 or '',
'autoconnect': self.bool_to_string(self.autoconnect)
}
for k, v in params.items():
cmd.extend([k, v])
return cmd
def modify_connection_vlan(self):
cmd = [self.nmcli_bin]
cmd.append('con')
cmd.append('mod')
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
else:
cmd.append('vlan%s' % self.vlanid)
params = {'vlan.parent': self.vlandev,
'vlan.id': str(self.vlanid),
'ipv4.address': self.ip4 or '',
'ipv4.gateway': self.gw4 or '',
'ipv4.dns': self.dns4 or '',
'ipv6.address': self.ip6 or '',
'ipv6.gateway': self.gw6 or '',
'ipv6.dns': self.dns6 or '',
'autoconnect': self.bool_to_string(self.autoconnect)
}
for k, v in params.items():
cmd.extend([k, v])
return cmd
def create_connection_vxlan(self):
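# e.g. nmcli con add type vxlan con-name vxlan_test1 ifname vxlan_test1 vxlan.id 16 vxlan.local 192.168.1.2 vxlan.remote 192.168.1.5 autoconnect yes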
cmd = [self.nmcli_bin, 'con', 'add', 'type', 'vxlan', 'con-name']
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
else:
cmd.append('vxlan%s' % self.vxlan_id)
cmd.append('ifname')
if self.ifname is not None:
cmd.append(self.ifname)
elif self.conn_name is not None:
cmd.append(self.conn_name)
else:
cmd.append('vxlan%s' % self.vxlan_id)
params = {'vxlan.id': self.vxlan_id,
'vxlan.local': self.vxlan_local,
'vxlan.remote': self.vxlan_remote,
'autoconnect': self.bool_to_string(self.autoconnect)
}
for k, v in params.items():
cmd.extend([k, v])
return cmd
def modify_connection_vxlan(self):
cmd = [self.nmcli_bin, 'con', 'mod']
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
else:
cmd.append('vxlan%s' % self.vxlan_id)
params = {'vxlan.id': self.vxlan_id,
'vxlan.local': self.vxlan_local,
'vxlan.remote': self.vxlan_remote,
'autoconnect': self.bool_to_string(self.autoconnect)
}
for k, v in params.items():
cmd.extend([k, v])
return cmd
def create_connection_ipip(self):
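# e.g. nmcli con add type ip-tunnel mode ipip con-name ipip_test1 ifname ipip_test1 dev eth0 ip-tunnel.local 192.168.1.2 ip-tunnel.remote 192.168.1.5 autoconnect yes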
cmd = [self.nmcli_bin, 'con', 'add', 'type', 'ip-tunnel', 'mode', 'ipip', 'con-name']
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
elif self.ip_tunnel_dev is not None:
cmd.append('ipip%s' % self.ip_tunnel_dev)
cmd.append('ifname')
if self.ifname is not None:
cmd.append(self.ifname)
elif self.conn_name is not None:
cmd.append(self.conn_name)
else:
cmd.append('ipip%s' % self.ip_tunnel_dev)
if self.ip_tunnel_dev is not None:
cmd.append('dev')
cmd.append(self.ip_tunnel_dev)
params = {'ip-tunnel.local': self.ip_tunnel_local,
'ip-tunnel.remote': self.ip_tunnel_remote,
'autoconnect': self.bool_to_string(self.autoconnect)
}
for k, v in params.items():
cmd.extend([k, v])
return cmd
def modify_connection_ipip(self):
cmd = [self.nmcli_bin, 'con', 'mod']
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
elif self.ip_tunnel_dev is not None:
cmd.append('ipip%s' % self.ip_tunnel_dev)
params = {'ip-tunnel.local': self.ip_tunnel_local,
'ip-tunnel.remote': self.ip_tunnel_remote,
'autoconnect': self.bool_to_string(self.autoconnect)
}
for k, v in params.items():
cmd.extend([k, v])
return cmd
def create_connection_sit(self):
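# e.g. nmcli con add type ip-tunnel mode sit con-name sit_test1 ifname sit_test1 dev eth0 ip-tunnel.local 192.168.1.2 ip-tunnel.remote 192.168.1.5 autoconnect yes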
cmd = [self.nmcli_bin, 'con', 'add', 'type', 'ip-tunnel', 'mode', 'sit', 'con-name']
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
elif self.ip_tunnel_dev is not None:
cmd.append('sit%s' % self.ip_tunnel_dev)
cmd.append('ifname')
if self.ifname is not None:
cmd.append(self.ifname)
elif self.conn_name is not None:
cmd.append(self.conn_name)
else:
cmd.append('sit%s' % self.ip_tunnel_dev)
if self.ip_tunnel_dev is not None:
cmd.append('dev')
cmd.append(self.ip_tunnel_dev)
params = {'ip-tunnel.local': self.ip_tunnel_local,
'ip-tunnel.remote': self.ip_tunnel_remote,
'autoconnect': self.bool_to_string(self.autoconnect)
}
for k, v in params.items():
cmd.extend([k, v])
return cmd
def modify_connection_sit(self):
cmd = [self.nmcli_bin, 'con', 'mod']
if self.conn_name is not None:
cmd.append(self.conn_name)
elif self.ifname is not None:
cmd.append(self.ifname)
elif self.ip_tunnel_dev is not None:
cmd.append('sit%s' % self.ip_tunnel_dev)
params = {'ip-tunnel.local': self.ip_tunnel_local,
'ip-tunnel.remote': self.ip_tunnel_remote,
'autoconnect': self.bool_to_string(self.autoconnect)
}
for k, v in params.items():
cmd.extend([k, v])
return cmd
def create_connection(self):
cmd = []
if self.type == 'team':
if (self.dns4 is not None) or (self.dns6 is not None):
cmd = self.create_connection_team()
self.execute_command(cmd)
cmd = self.modify_connection_team()
self.execute_command(cmd)
return self.up_connection()
elif (self.dns4 is None) or (self.dns6 is None):
cmd = self.create_connection_team()
elif self.type == 'team-slave':
if self.mtu is not None:
cmd = self.create_connection_team_slave()
self.execute_command(cmd)
cmd = self.modify_connection_team_slave()
return self.execute_command(cmd)
else:
cmd = self.create_connection_team_slave()
elif self.type == 'bond':
if (self.mtu is not None) or (self.dns4 is not None) or (self.dns6 is not None):
cmd = self.create_connection_bond()
self.execute_command(cmd)
cmd = self.modify_connection_bond()
self.execute_command(cmd)
return self.up_connection()
else:
cmd = self.create_connection_bond()
elif self.type == 'bond-slave':
cmd = self.create_connection_bond_slave()
elif self.type == 'ethernet':
if (self.mtu is not None) or (self.dns4 is not None) or (self.dns6 is not None):
cmd = self.create_connection_ethernet()
self.execute_command(cmd)
cmd = self.modify_connection_ethernet()
self.execute_command(cmd)
return self.up_connection()
else:
cmd = self.create_connection_ethernet()
elif self.type == 'bridge':
cmd = self.create_connection_bridge()
elif self.type == 'bridge-slave':
cmd = self.create_connection_bridge_slave()
elif self.type == 'vlan':
cmd = self.create_connection_vlan()
elif self.type == 'vxlan':
cmd = self.create_connection_vxlan()
elif self.type == 'ipip':
cmd = self.create_connection_ipip()
elif self.type == 'sit':
cmd = self.create_connection_sit()
elif self.type == 'generic':
cmd = self.create_connection_ethernet(conn_type='generic')
if cmd:
return self.execute_command(cmd)
else:
self.module.fail_json(msg="Type of device or network connection is required "
"while performing 'create' operation. Please specify 'type' as an argument.")
def remove_connection(self):
# self.down_connection()
cmd = [self.nmcli_bin, 'con', 'del', self.conn_name]
return self.execute_command(cmd)
def modify_connection(self):
cmd = []
if self.type == 'team':
cmd = self.modify_connection_team()
elif self.type == 'team-slave':
cmd = self.modify_connection_team_slave()
elif self.type == 'bond':
cmd = self.modify_connection_bond()
elif self.type == 'bond-slave':
cmd = self.modify_connection_bond_slave()
elif self.type == 'ethernet':
cmd = self.modify_connection_ethernet()
elif self.type == 'bridge':
cmd = self.modify_connection_bridge()
elif self.type == 'bridge-slave':
cmd = self.modify_connection_bridge_slave()
elif self.type == 'vlan':
cmd = self.modify_connection_vlan()
elif self.type == 'vxlan':
cmd = self.modify_connection_vxlan()
elif self.type == 'ipip':
cmd = self.modify_connection_ipip()
elif self.type == 'sit':
cmd = self.modify_connection_sit()
elif self.type == 'generic':
cmd = self.modify_connection_ethernet(conn_type='generic')
if cmd:
return self.execute_command(cmd)
else:
self.module.fail_json(msg="Type of device or network connection is required "
"while performing 'modify' operation. Please specify 'type' as an argument.")
def main():
# Parsing argument file
module = AnsibleModule(
argument_spec=dict(
autoconnect=dict(type='bool', default=True),
state=dict(type='str', required=True, choices=['absent', 'present']),
conn_name=dict(type='str', required=True),
master=dict(type='str'),
ifname=dict(type='str'),
type=dict(type='str',
choices=['bond', 'bond-slave', 'bridge', 'bridge-slave', 'ethernet', 'generic', 'ipip', 'sit', 'team', 'team-slave', 'vlan', 'vxlan']),
ip4=dict(type='str'),
gw4=dict(type='str'),
dns4=dict(type='list'),
dns4_search=dict(type='list'),
dhcp_client_id=dict(type='str'),
ip6=dict(type='str'),
gw6=dict(type='str'),
dns6=dict(type='list'),
dns6_search=dict(type='list'),
# Bond Specific vars
mode=dict(type='str', default='balance-rr',
choices=['802.3ad', 'active-backup', 'balance-alb', 'balance-rr', 'balance-tlb', 'balance-xor', 'broadcast']),
miimon=dict(type='int'),
downdelay=dict(type='int'),
updelay=dict(type='int'),
arp_interval=dict(type='int'),
arp_ip_target=dict(type='str'),
primary=dict(type='str'),
# general usage
mtu=dict(type='int'),
mac=dict(type='str'),
# bridge specific vars
stp=dict(type='bool', default=True),
priority=dict(type='int', default=128),
slavepriority=dict(type='int', default=32),
forwarddelay=dict(type='int', default=15),
hellotime=dict(type='int', default=2),
maxage=dict(type='int', default=20),
ageingtime=dict(type='int', default=300),
hairpin=dict(type='bool', default=True),
path_cost=dict(type='int', default=100),
# vlan specific vars
vlanid=dict(type='int'),
vlandev=dict(type='str'),
flags=dict(type='str'),
ingress=dict(type='str'),
egress=dict(type='str'),
# vxlan specific vars
vxlan_id=dict(type='int'),
vxlan_local=dict(type='str'),
vxlan_remote=dict(type='str'),
# ip-tunnel specific vars
ip_tunnel_dev=dict(type='str'),
ip_tunnel_local=dict(type='str'),
ip_tunnel_remote=dict(type='str'),
),
supports_check_mode=True,
)
if not HAVE_DBUS:
module.fail_json(msg=missing_required_lib('dbus'), exception=DBUS_IMP_ERR)
if not HAVE_NM_CLIENT:
module.fail_json(msg=missing_required_lib('NetworkManager glib API'), exception=NM_CLIENT_IMP_ERR)
nmcli = Nmcli(module)
(rc, out, err) = (None, '', '')
result = {'conn_name': nmcli.conn_name, 'state': nmcli.state}
# check for issues
if nmcli.conn_name is None:
nmcli.module.fail_json(msg="Please specify a name for the connection")
# team-slave checks
if nmcli.type == 'team-slave' and nmcli.master is None:
nmcli.module.fail_json(msg="Please specify a name for the master")
if nmcli.type == 'team-slave' and nmcli.ifname is None:
nmcli.module.fail_json(msg="Please specify an interface name for the connection")
if nmcli.state == 'absent':
if nmcli.connection_exists():
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = nmcli.down_connection()
(rc, out, err) = nmcli.remove_connection()
if rc != 0:
module.fail_json(name=('No Connection named %s exists' % nmcli.conn_name), msg=err, rc=rc)
elif nmcli.state == 'present':
if nmcli.connection_exists():
# modify connection (note: this function is check mode aware)
# result['Connection']=('Connection %s of Type %s is not being added' % (nmcli.conn_name, nmcli.type))
result['Exists'] = 'Connections do exist so we are modifying them'
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = nmcli.modify_connection()
if not nmcli.connection_exists():
result['Connection'] = ('Connection %s of Type %s is being added' % (nmcli.conn_name, nmcli.type))
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = nmcli.create_connection()
if rc is not None and rc != 0:
module.fail_json(name=nmcli.conn_name, msg=err, rc=rc)
    if rc is None:
        # rc stays None when no nmcli command was executed, i.e. nothing changed
        result['changed'] = False
    else:
        result['changed'] = True
if out:
result['stdout'] = out
if err:
result['stderr'] = err
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,578 |
terraform plan_file custom path broken
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/cloud/misc/terraform.py#L353 expects the plan_file to be a filename, not a path to a file, inside of project_path. It works with relative paths but not with absolute paths. Luckily I use a path inside project_path, but it previously worked with paths outside too.
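A minimal sketch of a lookup that accepts both absolute and project-relative plan files (hypothetical helper, not the module's actual code):
```python
import os

def resolve_plan_file(project_path, plan_file):
    """Return the first existing candidate, or None if neither form resolves."""
    for candidate in (plan_file, os.path.join(project_path, plan_file)):
        if os.path.isfile(candidate):
            return candidate
    return None  # caller can fail_json() with a "check the path" message
```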
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
terraform
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.1
config file = /home/adam/.ansible.cfg
configured module search path = ['/home/adam/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/adam/.local/lib/python3.6/site-packages/ansible
executable location = /home/adam/.local/bin/ansible
python version = 3.6.8 (default, Jan 14 2019, 11:02:34) [GCC 8.0.1 20180414 (experimental) [trunk revision 259383]]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
use absolute path for plan_file
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The module uses the provided file path.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [run terraform apply] ***************************************************************************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Could not find plan_file \"/home/.../plan.tfplan\", check the path and try again."}
```
|
https://github.com/ansible/ansible/issues/58578
|
https://github.com/ansible/ansible/pull/58812
|
663171e21820f1696b93c3f182626b0fd006b61b
|
e711d01ed1337b28578a7ccc3d114b2641c871a8
| 2019-07-01T08:40:00Z |
python
| 2019-12-21T11:49:38Z |
changelogs/fragments/58812-support_absolute_paths_additionally.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,578 |
terraform plan_file custom path broken
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/cloud/misc/terraform.py#L353 expects the plan_file to be a filename, not a path to a file, inside of project_path. It works with relative paths but not with absolute paths. Luckily I use a path inside project_path, but it previously worked with paths outside too.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
terraform
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.1
config file = /home/adam/.ansible.cfg
configured module search path = ['/home/adam/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/adam/.local/lib/python3.6/site-packages/ansible
executable location = /home/adam/.local/bin/ansible
python version = 3.6.8 (default, Jan 14 2019, 11:02:34) [GCC 8.0.1 20180414 (experimental) [trunk revision 259383]]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
use absolute path for plan_file
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The module uses the provided file path.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [run terraform apply] ***************************************************************************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Could not find plan_file \"/home/.../plan.tfplan\", check the path and try again."}
```
|
https://github.com/ansible/ansible/issues/58578
|
https://github.com/ansible/ansible/pull/58812
|
663171e21820f1696b93c3f182626b0fd006b61b
|
e711d01ed1337b28578a7ccc3d114b2641c871a8
| 2019-07-01T08:40:00Z |
python
| 2019-12-21T11:49:38Z |
lib/ansible/modules/cloud/misc/terraform.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2017, Ryan Scott Brown <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: terraform
short_description: Manages a Terraform deployment (and plans)
description:
- Provides support for deploying resources with Terraform and pulling
resource information back into Ansible.
version_added: "2.5"
options:
state:
choices: ['planned', 'present', 'absent']
description:
- Goal state of given stage/project
required: false
default: present
binary_path:
description:
- The path of a terraform binary to use, relative to the 'service_path'
unless you supply an absolute path.
required: false
project_path:
description:
- The path to the root of the Terraform directory with the
vars.tf/main.tf/etc to use.
required: true
workspace:
description:
- The terraform workspace to work with.
required: false
default: default
version_added: 2.7
purge_workspace:
description:
- Only works with state = absent
- If true, the workspace will be deleted after the "terraform destroy" action.
- The 'default' workspace will not be deleted.
required: false
default: false
type: bool
version_added: 2.7
plan_file:
description:
- The path to an existing Terraform plan file to apply. If this is not
specified, Ansible will build a new TF plan and execute it.
Note that this option is required if 'state' has the 'planned' value.
required: false
state_file:
description:
- The path to an existing Terraform state file to use when building plan.
If this is not specified, the default `terraform.tfstate` will be used.
- This option is ignored when plan is specified.
required: false
variables_file:
description:
- The path to a variables file for Terraform to fill into the TF
configurations.
required: false
variables:
description:
- A group of key-values to override template variables or those in
variables files.
required: false
targets:
description:
- A list of specific resources to target in this plan/application. The
resources selected here will also auto-include any dependencies.
required: false
lock:
description:
- Enable statefile locking, if you use a service that accepts locks (such
as S3+DynamoDB) to store your statefile.
required: false
type: bool
lock_timeout:
description:
- How long to maintain the lock on the statefile, if you use a service
that accepts locks (such as S3+DynamoDB).
required: false
force_init:
description:
- To avoid duplicating infra, if a state file can't be found this will
force a `terraform init`. Generally, this should be turned off unless
you intend to provision an entirely new Terraform deployment.
default: false
required: false
type: bool
backend_config:
description:
- A group of key-values to provide at init stage to the -backend-config parameter.
required: false
version_added: 2.7
notes:
- To just run a `terraform plan`, use check mode.
requirements: [ "terraform" ]
author: "Ryan Scott Brown (@ryansb)"
'''
EXAMPLES = """
# Basic deploy of a service
- terraform:
project_path: '{{ project_dir }}'
state: present
# Define the backend configuration at init
- terraform:
project_path: 'project/'
state: "{{ state }}"
force_init: true
backend_config:
region: "eu-west-1"
bucket: "some-bucket"
key: "random.tfstate"
"""
RETURN = """
outputs:
type: complex
description: A dictionary of all the TF outputs by their assigned name. Use `.outputs.MyOutputName.value` to access the value.
returned: on success
sample: '{"bukkit_arn": {"sensitive": false, "type": "string", "value": "arn:aws:s3:::tf-test-bukkit"}'
contains:
sensitive:
type: bool
returned: always
description: Whether Terraform has marked this value as sensitive
type:
type: str
returned: always
description: The type of the value (string, int, etc)
value:
returned: always
description: The value of the output as interpolated by Terraform
stdout:
type: str
description: Full `terraform` command stdout, in case you want to display it or examine the event log
returned: always
sample: ''
command:
type: str
description: Full `terraform` command built by this module, in case you want to re-run the command outside the module or debug a problem.
returned: always
sample: terraform apply ...
"""
import os
import json
import tempfile
import traceback
from ansible.module_utils.six.moves import shlex_quote
from ansible.module_utils.basic import AnsibleModule
DESTROY_ARGS = ('destroy', '-no-color', '-force')
APPLY_ARGS = ('apply', '-no-color', '-input=false', '-auto-approve=true')
module = None
def preflight_validation(bin_path, project_path, variables_args=None, plan_file=None):
if project_path in [None, ''] or '/' not in project_path:
module.fail_json(msg="Path for Terraform project can not be None or ''.")
if not os.path.exists(bin_path):
module.fail_json(msg="Path for Terraform binary '{0}' doesn't exist on this host - check the path and try again please.".format(bin_path))
if not os.path.isdir(project_path):
module.fail_json(msg="Path for Terraform project '{0}' doesn't exist on this host - check the path and try again please.".format(project_path))
rc, out, err = module.run_command([bin_path, 'validate'] + variables_args, cwd=project_path, use_unsafe_shell=True)
if rc != 0:
module.fail_json(msg="Failed to validate Terraform configuration files:\r\n{0}".format(err))
def _state_args(state_file):
if state_file and os.path.exists(state_file):
return ['-state', state_file]
if state_file and not os.path.exists(state_file):
module.fail_json(msg='Could not find state_file "{0}", check the path and try again.'.format(state_file))
return []
def init_plugins(bin_path, project_path, backend_config):
command = [bin_path, 'init', '-input=false']
if backend_config:
for key, val in backend_config.items():
command.extend([
'-backend-config',
shlex_quote('{0}={1}'.format(key, val))
])
rc, out, err = module.run_command(command, cwd=project_path)
if rc != 0:
module.fail_json(msg="Failed to initialize Terraform modules:\r\n{0}".format(err))
def get_workspace_context(bin_path, project_path):
workspace_ctx = {"current": "default", "all": []}
command = [bin_path, 'workspace', 'list', '-no-color']
rc, out, err = module.run_command(command, cwd=project_path)
if rc != 0:
module.fail_json(msg="Failed to list Terraform workspaces:\r\n{0}".format(err))
for item in out.split('\n'):
stripped_item = item.strip()
if not stripped_item:
continue
elif stripped_item.startswith('* '):
workspace_ctx["current"] = stripped_item.replace('* ', '')
else:
workspace_ctx["all"].append(stripped_item)
return workspace_ctx
def _workspace_cmd(bin_path, project_path, action, workspace):
command = [bin_path, 'workspace', action, workspace, '-no-color']
rc, out, err = module.run_command(command, cwd=project_path)
if rc != 0:
module.fail_json(msg="Failed to {0} workspace:\r\n{1}".format(action, err))
return rc, out, err
def create_workspace(bin_path, project_path, workspace):
_workspace_cmd(bin_path, project_path, 'new', workspace)
def select_workspace(bin_path, project_path, workspace):
_workspace_cmd(bin_path, project_path, 'select', workspace)
def remove_workspace(bin_path, project_path, workspace):
_workspace_cmd(bin_path, project_path, 'delete', workspace)
def build_plan(command, project_path, variables_args, state_file, targets, state, plan_path=None):
if plan_path is None:
f, plan_path = tempfile.mkstemp(suffix='.tfplan')
plan_command = [command[0], 'plan', '-input=false', '-no-color', '-detailed-exitcode', '-out', plan_path]
for t in (module.params.get('targets') or []):
plan_command.extend(['-target', t])
plan_command.extend(_state_args(state_file))
rc, out, err = module.run_command(plan_command + variables_args, cwd=project_path, use_unsafe_shell=True)
if rc == 0:
# no changes
return plan_path, False, out, err, plan_command if state == 'planned' else command
elif rc == 1:
# failure to plan
module.fail_json(msg='Terraform plan could not be created\r\nSTDOUT: {0}\r\n\r\nSTDERR: {1}'.format(out, err))
elif rc == 2:
# changes, but successful
return plan_path, True, out, err, plan_command if state == 'planned' else command
module.fail_json(msg='Terraform plan failed with unexpected exit code {0}. \r\nSTDOUT: {1}\r\n\r\nSTDERR: {2}'.format(rc, out, err))
def main():
global module
module = AnsibleModule(
argument_spec=dict(
project_path=dict(required=True, type='path'),
binary_path=dict(type='path'),
workspace=dict(required=False, type='str', default='default'),
purge_workspace=dict(type='bool', default=False),
state=dict(default='present', choices=['present', 'absent', 'planned']),
variables=dict(type='dict'),
variables_file=dict(type='path'),
plan_file=dict(type='path'),
state_file=dict(type='path'),
targets=dict(type='list', default=[]),
lock=dict(type='bool', default=True),
lock_timeout=dict(type='int',),
force_init=dict(type='bool', default=False),
backend_config=dict(type='dict', default=None),
),
required_if=[('state', 'planned', ['plan_file'])],
supports_check_mode=True,
)
project_path = module.params.get('project_path')
bin_path = module.params.get('binary_path')
workspace = module.params.get('workspace')
purge_workspace = module.params.get('purge_workspace')
state = module.params.get('state')
variables = module.params.get('variables') or {}
variables_file = module.params.get('variables_file')
plan_file = module.params.get('plan_file')
state_file = module.params.get('state_file')
force_init = module.params.get('force_init')
backend_config = module.params.get('backend_config')
if bin_path is not None:
command = [bin_path]
else:
command = [module.get_bin_path('terraform', required=True)]
if force_init:
init_plugins(command[0], project_path, backend_config)
workspace_ctx = get_workspace_context(command[0], project_path)
if workspace_ctx["current"] != workspace:
if workspace not in workspace_ctx["all"]:
create_workspace(command[0], project_path, workspace)
else:
select_workspace(command[0], project_path, workspace)
if state == 'present':
command.extend(APPLY_ARGS)
elif state == 'absent':
command.extend(DESTROY_ARGS)
variables_args = []
for k, v in variables.items():
variables_args.extend([
'-var',
'{0}={1}'.format(k, v)
])
if variables_file:
variables_args.extend(['-var-file', variables_file])
preflight_validation(command[0], project_path, variables_args)
if module.params.get('lock') is not None:
if module.params.get('lock'):
command.append('-lock=true')
else:
command.append('-lock=false')
if module.params.get('lock_timeout') is not None:
command.append('-lock-timeout=%ds' % module.params.get('lock_timeout'))
for t in (module.params.get('targets') or []):
command.extend(['-target', t])
# we aren't sure if this plan will result in changes, so assume yes
needs_application, changed = True, False
if state == 'absent':
command.extend(variables_args)
elif state == 'present' and plan_file:
if os.path.exists(project_path + "/" + plan_file):
command.append(plan_file)
else:
module.fail_json(msg='Could not find plan_file "{0}", check the path and try again.'.format(plan_file))
else:
plan_file, needs_application, out, err, command = build_plan(command, project_path, variables_args, state_file,
module.params.get('targets'), state, plan_file)
command.append(plan_file)
out, err = '', ''
if needs_application and not module.check_mode and not state == 'planned':
rc, out, err = module.run_command(command, cwd=project_path)
        # inspect stdout to decide whether changes were made during execution
if '0 added, 0 changed' not in out and not state == "absent" or '0 destroyed' not in out:
changed = True
if rc != 0:
module.fail_json(
msg="Failure when executing Terraform command. Exited {0}.\nstdout: {1}\nstderr: {2}".format(rc, out, err),
command=' '.join(command)
)
outputs_command = [command[0], 'output', '-no-color', '-json'] + _state_args(state_file)
rc, outputs_text, outputs_err = module.run_command(outputs_command, cwd=project_path)
if rc == 1:
module.warn("Could not get Terraform outputs. This usually means none have been defined.\nstdout: {0}\nstderr: {1}".format(outputs_text, outputs_err))
outputs = {}
elif rc != 0:
module.fail_json(
msg="Failure when getting Terraform outputs. "
"Exited {0}.\nstdout: {1}\nstderr: {2}".format(rc, outputs_text, outputs_err),
command=' '.join(outputs_command))
else:
outputs = json.loads(outputs_text)
# Restore the Terraform workspace found when running the module
if workspace_ctx["current"] != workspace:
select_workspace(command[0], project_path, workspace_ctx["current"])
if state == 'absent' and workspace != 'default' and purge_workspace is True:
remove_workspace(command[0], project_path, workspace)
module.exit_json(changed=changed, state=state, workspace=workspace, outputs=outputs, stdout=out, stderr=err, command=' '.join(command))
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,788 |
fix terraform module support for no workspace
|
##### SUMMARY
Some terraform backends don't support workspaces. The module should not fail when such a backend is in use.
The module fails with messages like:
- `Failed to list Terraform workspaces:\r\nworkspaces not supported\n`
- `Failed to list Terraform workspaces:\r\nnamed states not supported\n`
Related to:
- https://github.com/ansible/ansible/issues/43134#issuecomment-480873362
- https://github.com/ansible/ansible/issues/59089
- https://github.com/ansible/ansible/pull/57402
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
terraform module
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.0
config file = /home/$USER/.ansible.cfg
configured module search path = ['/home/$USER/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.4 (default, Oct 4 2019, 06:57:26) [GCC 9.2.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_SSH_ARGS(/home/$USER/.ansible.cfg) = -o ControlMaster=auto -o ControlPersist=60s
DEFAULT_HOST_LIST(/home/$USER/.ansible.cfg) = ['/home/$USER/.ansible/hosts']
```
##### OS / ENVIRONMENT
Archlinux up to date.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
- Have an artifactory account accessible
- configure terraform with the artifactory backend
```hcl
terraform {
backend "artifactory" {
url = "https://artifactory.example.com/artifactory"
repo = "terraform"
subpath = "test"
}
}
```
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: test terraform no workspace
hosts: localhost
tasks:
- name: apply configuration
terraform:
project_path: "{{ playbook_dir }}/terraform/"
force_init: true
backend_config:
username: "{{ artifactory_user }}"
password: "{{ artifactory_password }}"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The module should work even when the terraform backend lacks workspace support. No error message.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: $USER
<127.0.0.1> EXEC /bin/sh -c 'echo ~$USER && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/$USER/.ansible/tmp/ansible-tmp-1573660522.1152759-71895853408161 `" && echo ansible-tmp-1573660522.1152759-71895853408161="` echo /home/$USER/.ansible/tmp/ansible-tmp-1573660522.1152759-71895853408161 `" ) && sleep 0'
Using module file /usr/lib/python3.7/site-packages/ansible/modules/cloud/misc/terraform.py
<127.0.0.1> PUT /home/$USER/.ansible/tmp/ansible-local-100380yrb34ja/tmpcdahl6p2 TO /home/$USER/.ansible/tmp/ansible-tmp-1573660522.1152759-71895853408161/AnsiballZ_terraform.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/$USER/.ansible/tmp/ansible-tmp-1573660522.1152759-71895853408161/ /home/$USER/.ansible/tmp/ansible-tmp-1573660522.1152759-71895853408161/AnsiballZ_terraform.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/$USER/.ansible/tmp/ansible-tmp-1573660522.1152759-71895853408161/AnsiballZ_terraform.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/$USER/.ansible/tmp/ansible-tmp-1573660522.1152759-71895853408161/ > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"backend_config": {
"password": "XXXXX",
"username": "XXXXX"
},
"binary_path": null,
"force_init": true,
"lock": true,
"lock_timeout": null,
"plan_file": null,
"project_path": "XXX",
"purge_workspace": false,
"state": "present",
"state_file": null,
"targets": [],
"variables": {},
"variables_file": null,
"workspace": "default"
}
},
"msg": "Failed to list Terraform workspaces:\r\nworkspaces not supported\n"
```
##### FIX
A fix was proposed: https://github.com/ansible/ansible/pull/57402
I tried another approach: defaulting to the default workspace when the workspace listing fails:
```diff
diff --git a/lib/ansible/modules/cloud/misc/terraform.py b/lib/ansible/modules/cloud/misc/terraform.py
index 1d54be2f4b..5c9832c8b0 100644
--- a/lib/ansible/modules/cloud/misc/terraform.py
+++ b/lib/ansible/modules/cloud/misc/terraform.py
@@ -208,7 +208,7 @@ def get_workspace_context(bin_path, project_path):
workspace_ctx = {"current": "default", "all": []}
command = [bin_path, 'workspace', 'list', '-no-color']
rc, out, err = module.run_command(command, cwd=project_path)
- if rc != 0:
+ if rc != 0 and err.strip() != "workspaces not supported":
module.fail_json(msg="Failed to list Terraform workspaces:\r\n{0}".format(err))
for item in out.split('\n'):
stripped_item = item.strip()
```
The test works for the artifactory backend, but I suspect it won't work on other backends. The error message `workspaces not supported` seems backend-specific.
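A sketch of a more tolerant check, assuming the only available signal is one of the known backend-specific error strings (the marker list is illustrative, not exhaustive):
```python
# Error strings observed from backends without workspace support.
WORKSPACE_UNSUPPORTED_MARKERS = (
    "workspaces not supported",
    "named states not supported",
)

def workspaces_unsupported(err):
    """Return True when stderr indicates the backend lacks workspace support."""
    stripped = err.strip().lower()
    return any(marker in stripped for marker in WORKSPACE_UNSUPPORTED_MARKERS)
```
With a check like this, `get_workspace_context` could fall back to the default workspace context instead of calling `fail_json` when the listing fails.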
|
https://github.com/ansible/ansible/issues/64788
|
https://github.com/ansible/ansible/pull/65044
|
e711d01ed1337b28578a7ccc3d114b2641c871a8
|
b580f2929dd9048e0d2752463b62994d1114ae1b
| 2019-11-13T16:18:35Z |
python
| 2019-12-21T11:50:31Z |
changelogs/fragments/65044-fix-terraform-no-workspace.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,788 |
fix terraform module support for no workspace
|
##### SUMMARY
Some terraform backends don't support workspaces. The module should not fail when such a backend is in use.
The module fails with messages like:
- `Failed to list Terraform workspaces:\r\nworkspaces not supported\n`
- `Failed to list Terraform workspaces:\r\nnamed states not supported\n`
Related to:
- https://github.com/ansible/ansible/issues/43134#issuecomment-480873362
- https://github.com/ansible/ansible/issues/59089
- https://github.com/ansible/ansible/pull/57402
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
terraform module
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.0
config file = /home/$USER/.ansible.cfg
configured module search path = ['/home/$USER/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.4 (default, Oct 4 2019, 06:57:26) [GCC 9.2.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_SSH_ARGS(/home/$USER/.ansible.cfg) = -o ControlMaster=auto -o ControlPersist=60s
DEFAULT_HOST_LIST(/home/$USER/.ansible.cfg) = ['/home/$USER/.ansible/hosts']
```
##### OS / ENVIRONMENT
Archlinux up to date.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
- Have an artifactory account accessible
- configure terraform with the artifactory backend
```hcl
terraform {
backend "artifactory" {
url = "https://artifactory.example.com/artifactory"
repo = "terraform"
subpath = "test"
}
}
```
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: test terraform no workspace
hosts: localhost
tasks:
- name: apply configuration
terraform:
project_path: "{{ playbook_dir }}/terraform/"
force_init: true
backend_config:
username: "{{ artifactory_user }}"
password: "{{ artifactory_password }}"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The module should work even when the terraform backend lacks workspace support. No error message.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: $USER
<127.0.0.1> EXEC /bin/sh -c 'echo ~$USER && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/$USER/.ansible/tmp/ansible-tmp-1573660522.1152759-71895853408161 `" && echo ansible-tmp-1573660522.1152759-71895853408161="` echo /home/$USER/.ansible/tmp/ansible-tmp-1573660522.1152759-71895853408161 `" ) && sleep 0'
Using module file /usr/lib/python3.7/site-packages/ansible/modules/cloud/misc/terraform.py
<127.0.0.1> PUT /home/$USER/.ansible/tmp/ansible-local-100380yrb34ja/tmpcdahl6p2 TO /home/$USER/.ansible/tmp/ansible-tmp-1573660522.1152759-71895853408161/AnsiballZ_terraform.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/$USER/.ansible/tmp/ansible-tmp-1573660522.1152759-71895853408161/ /home/$USER/.ansible/tmp/ansible-tmp-1573660522.1152759-71895853408161/AnsiballZ_terraform.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/$USER/.ansible/tmp/ansible-tmp-1573660522.1152759-71895853408161/AnsiballZ_terraform.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/$USER/.ansible/tmp/ansible-tmp-1573660522.1152759-71895853408161/ > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"backend_config": {
"password": "XXXXX",
"username": "XXXXX"
},
"binary_path": null,
"force_init": true,
"lock": true,
"lock_timeout": null,
"plan_file": null,
"project_path": "XXX",
"purge_workspace": false,
"state": "present",
"state_file": null,
"targets": [],
"variables": {},
"variables_file": null,
"workspace": "default"
}
},
"msg": "Failed to list Terraform workspaces:\r\nworkspaces not supported\n"
```
##### FIX
A fix was proposed: https://github.com/ansible/ansible/pull/57402
I tried another approach: defaulting to the default workspace when the workspace listing fails:
```diff
diff --git a/lib/ansible/modules/cloud/misc/terraform.py b/lib/ansible/modules/cloud/misc/terraform.py
index 1d54be2f4b..5c9832c8b0 100644
--- a/lib/ansible/modules/cloud/misc/terraform.py
+++ b/lib/ansible/modules/cloud/misc/terraform.py
@@ -208,7 +208,7 @@ def get_workspace_context(bin_path, project_path):
workspace_ctx = {"current": "default", "all": []}
command = [bin_path, 'workspace', 'list', '-no-color']
rc, out, err = module.run_command(command, cwd=project_path)
- if rc != 0:
+ if rc != 0 and err.strip() != "workspaces not supported":
module.fail_json(msg="Failed to list Terraform workspaces:\r\n{0}".format(err))
for item in out.split('\n'):
stripped_item = item.strip()
```
The test works for the artifactory backend, but I suspect it won't work on other backends. The error message `workspaces not supported` seems backend-specific.
|
https://github.com/ansible/ansible/issues/64788
|
https://github.com/ansible/ansible/pull/65044
|
e711d01ed1337b28578a7ccc3d114b2641c871a8
|
b580f2929dd9048e0d2752463b62994d1114ae1b
| 2019-11-13T16:18:35Z |
python
| 2019-12-21T11:50:31Z |
lib/ansible/modules/cloud/misc/terraform.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2017, Ryan Scott Brown <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: terraform
short_description: Manages a Terraform deployment (and plans)
description:
- Provides support for deploying resources with Terraform and pulling
resource information back into Ansible.
version_added: "2.5"
options:
state:
choices: ['planned', 'present', 'absent']
description:
- Goal state of given stage/project
required: false
default: present
binary_path:
description:
- The path of a terraform binary to use, relative to the 'service_path'
unless you supply an absolute path.
required: false
project_path:
description:
- The path to the root of the Terraform directory with the
vars.tf/main.tf/etc to use.
required: true
workspace:
description:
- The terraform workspace to work with.
required: false
default: default
version_added: 2.7
purge_workspace:
description:
- Only works with state = absent
- If true, the workspace will be deleted after the "terraform destroy" action.
- The 'default' workspace will not be deleted.
required: false
default: false
type: bool
version_added: 2.7
plan_file:
description:
- The path to an existing Terraform plan file to apply. If this is not
specified, Ansible will build a new TF plan and execute it.
Note that this option is required if 'state' has the 'planned' value.
required: false
state_file:
description:
- The path to an existing Terraform state file to use when building plan.
If this is not specified, the default `terraform.tfstate` will be used.
- This option is ignored when plan is specified.
required: false
variables_file:
description:
- The path to a variables file for Terraform to fill into the TF
configurations.
required: false
variables:
description:
- A group of key-values to override template variables or those in
variables files.
required: false
targets:
description:
- A list of specific resources to target in this plan/application. The
resources selected here will also auto-include any dependencies.
required: false
lock:
description:
- Enable statefile locking, if you use a service that accepts locks (such
as S3+DynamoDB) to store your statefile.
required: false
type: bool
lock_timeout:
description:
- How long to maintain the lock on the statefile, if you use a service
that accepts locks (such as S3+DynamoDB).
required: false
force_init:
description:
- To avoid duplicating infra, if a state file can't be found this will
force a `terraform init`. Generally, this should be turned off unless
you intend to provision an entirely new Terraform deployment.
default: false
required: false
type: bool
backend_config:
description:
- A group of key-values to provide at init stage to the -backend-config parameter.
required: false
version_added: 2.7
notes:
- To just run a `terraform plan`, use check mode.
requirements: [ "terraform" ]
author: "Ryan Scott Brown (@ryansb)"
'''
EXAMPLES = """
# Basic deploy of a service
- terraform:
project_path: '{{ project_dir }}'
state: present
# Define the backend configuration at init
- terraform:
project_path: 'project/'
state: "{{ state }}"
force_init: true
backend_config:
region: "eu-west-1"
bucket: "some-bucket"
key: "random.tfstate"
"""
RETURN = """
outputs:
type: complex
description: A dictionary of all the TF outputs by their assigned name. Use `.outputs.MyOutputName.value` to access the value.
returned: on success
sample: '{"bukkit_arn": {"sensitive": false, "type": "string", "value": "arn:aws:s3:::tf-test-bukkit"}'
contains:
sensitive:
type: bool
returned: always
description: Whether Terraform has marked this value as sensitive
type:
type: str
returned: always
description: The type of the value (string, int, etc)
value:
returned: always
description: The value of the output as interpolated by Terraform
stdout:
type: str
description: Full `terraform` command stdout, in case you want to display it or examine the event log
returned: always
sample: ''
command:
type: str
description: Full `terraform` command built by this module, in case you want to re-run the command outside the module or debug a problem.
returned: always
sample: terraform apply ...
"""
import os
import json
import tempfile
import traceback
from ansible.module_utils.six.moves import shlex_quote
from ansible.module_utils.basic import AnsibleModule
DESTROY_ARGS = ('destroy', '-no-color', '-force')
APPLY_ARGS = ('apply', '-no-color', '-input=false', '-auto-approve=true')
module = None
def preflight_validation(bin_path, project_path, variables_args=None, plan_file=None):
if project_path in [None, ''] or '/' not in project_path:
module.fail_json(msg="Path for Terraform project can not be None or ''.")
if not os.path.exists(bin_path):
module.fail_json(msg="Path for Terraform binary '{0}' doesn't exist on this host - check the path and try again please.".format(bin_path))
if not os.path.isdir(project_path):
module.fail_json(msg="Path for Terraform project '{0}' doesn't exist on this host - check the path and try again please.".format(project_path))
rc, out, err = module.run_command([bin_path, 'validate'] + variables_args, cwd=project_path, use_unsafe_shell=True)
if rc != 0:
module.fail_json(msg="Failed to validate Terraform configuration files:\r\n{0}".format(err))
def _state_args(state_file):
if state_file and os.path.exists(state_file):
return ['-state', state_file]
if state_file and not os.path.exists(state_file):
module.fail_json(msg='Could not find state_file "{0}", check the path and try again.'.format(state_file))
return []
def init_plugins(bin_path, project_path, backend_config):
command = [bin_path, 'init', '-input=false']
if backend_config:
for key, val in backend_config.items():
command.extend([
'-backend-config',
shlex_quote('{0}={1}'.format(key, val))
])
rc, out, err = module.run_command(command, cwd=project_path)
if rc != 0:
module.fail_json(msg="Failed to initialize Terraform modules:\r\n{0}".format(err))
def get_workspace_context(bin_path, project_path):
workspace_ctx = {"current": "default", "all": []}
command = [bin_path, 'workspace', 'list', '-no-color']
rc, out, err = module.run_command(command, cwd=project_path)
if rc != 0:
module.fail_json(msg="Failed to list Terraform workspaces:\r\n{0}".format(err))
for item in out.split('\n'):
stripped_item = item.strip()
if not stripped_item:
continue
elif stripped_item.startswith('* '):
workspace_ctx["current"] = stripped_item.replace('* ', '')
else:
workspace_ctx["all"].append(stripped_item)
return workspace_ctx
def _workspace_cmd(bin_path, project_path, action, workspace):
command = [bin_path, 'workspace', action, workspace, '-no-color']
rc, out, err = module.run_command(command, cwd=project_path)
if rc != 0:
module.fail_json(msg="Failed to {0} workspace:\r\n{1}".format(action, err))
return rc, out, err
def create_workspace(bin_path, project_path, workspace):
_workspace_cmd(bin_path, project_path, 'new', workspace)
def select_workspace(bin_path, project_path, workspace):
_workspace_cmd(bin_path, project_path, 'select', workspace)
def remove_workspace(bin_path, project_path, workspace):
_workspace_cmd(bin_path, project_path, 'delete', workspace)
def build_plan(command, project_path, variables_args, state_file, targets, state, plan_path=None):
if plan_path is None:
f, plan_path = tempfile.mkstemp(suffix='.tfplan')
plan_command = [command[0], 'plan', '-input=false', '-no-color', '-detailed-exitcode', '-out', plan_path]
for t in (module.params.get('targets') or []):
plan_command.extend(['-target', t])
plan_command.extend(_state_args(state_file))
rc, out, err = module.run_command(plan_command + variables_args, cwd=project_path, use_unsafe_shell=True)
if rc == 0:
# no changes
return plan_path, False, out, err, plan_command if state == 'planned' else command
elif rc == 1:
# failure to plan
module.fail_json(msg='Terraform plan could not be created\r\nSTDOUT: {0}\r\n\r\nSTDERR: {1}'.format(out, err))
elif rc == 2:
# changes, but successful
return plan_path, True, out, err, plan_command if state == 'planned' else command
module.fail_json(msg='Terraform plan failed with unexpected exit code {0}. \r\nSTDOUT: {1}\r\n\r\nSTDERR: {2}'.format(rc, out, err))
def main():
global module
module = AnsibleModule(
argument_spec=dict(
project_path=dict(required=True, type='path'),
binary_path=dict(type='path'),
workspace=dict(required=False, type='str', default='default'),
purge_workspace=dict(type='bool', default=False),
state=dict(default='present', choices=['present', 'absent', 'planned']),
variables=dict(type='dict'),
variables_file=dict(type='path'),
plan_file=dict(type='path'),
state_file=dict(type='path'),
targets=dict(type='list', default=[]),
lock=dict(type='bool', default=True),
lock_timeout=dict(type='int',),
force_init=dict(type='bool', default=False),
backend_config=dict(type='dict', default=None),
),
required_if=[('state', 'planned', ['plan_file'])],
supports_check_mode=True,
)
project_path = module.params.get('project_path')
bin_path = module.params.get('binary_path')
workspace = module.params.get('workspace')
purge_workspace = module.params.get('purge_workspace')
state = module.params.get('state')
variables = module.params.get('variables') or {}
variables_file = module.params.get('variables_file')
plan_file = module.params.get('plan_file')
state_file = module.params.get('state_file')
force_init = module.params.get('force_init')
backend_config = module.params.get('backend_config')
if bin_path is not None:
command = [bin_path]
else:
command = [module.get_bin_path('terraform', required=True)]
if force_init:
init_plugins(command[0], project_path, backend_config)
workspace_ctx = get_workspace_context(command[0], project_path)
if workspace_ctx["current"] != workspace:
if workspace not in workspace_ctx["all"]:
create_workspace(command[0], project_path, workspace)
else:
select_workspace(command[0], project_path, workspace)
if state == 'present':
command.extend(APPLY_ARGS)
elif state == 'absent':
command.extend(DESTROY_ARGS)
variables_args = []
for k, v in variables.items():
variables_args.extend([
'-var',
'{0}={1}'.format(k, v)
])
if variables_file:
variables_args.extend(['-var-file', variables_file])
preflight_validation(command[0], project_path, variables_args)
if module.params.get('lock') is not None:
if module.params.get('lock'):
command.append('-lock=true')
else:
command.append('-lock=false')
if module.params.get('lock_timeout') is not None:
command.append('-lock-timeout=%ds' % module.params.get('lock_timeout'))
for t in (module.params.get('targets') or []):
command.extend(['-target', t])
# we aren't sure if this plan will result in changes, so assume yes
needs_application, changed = True, False
if state == 'absent':
command.extend(variables_args)
elif state == 'present' and plan_file:
if any([os.path.isfile(project_path + "/" + plan_file), os.path.isfile(plan_file)]):
command.append(plan_file)
else:
module.fail_json(msg='Could not find plan_file "{0}", check the path and try again.'.format(plan_file))
else:
plan_file, needs_application, out, err, command = build_plan(command, project_path, variables_args, state_file,
module.params.get('targets'), state, plan_file)
command.append(plan_file)
out, err = '', ''
if needs_application and not module.check_mode and not state == 'planned':
rc, out, err = module.run_command(command, cwd=project_path)
        # inspect stdout to decide whether changes were made during execution
if '0 added, 0 changed' not in out and not state == "absent" or '0 destroyed' not in out:
changed = True
if rc != 0:
module.fail_json(
msg="Failure when executing Terraform command. Exited {0}.\nstdout: {1}\nstderr: {2}".format(rc, out, err),
command=' '.join(command)
)
outputs_command = [command[0], 'output', '-no-color', '-json'] + _state_args(state_file)
rc, outputs_text, outputs_err = module.run_command(outputs_command, cwd=project_path)
if rc == 1:
module.warn("Could not get Terraform outputs. This usually means none have been defined.\nstdout: {0}\nstderr: {1}".format(outputs_text, outputs_err))
outputs = {}
elif rc != 0:
module.fail_json(
msg="Failure when getting Terraform outputs. "
"Exited {0}.\nstdout: {1}\nstderr: {2}".format(rc, outputs_text, outputs_err),
command=' '.join(outputs_command))
else:
outputs = json.loads(outputs_text)
# Restore the Terraform workspace found when running the module
if workspace_ctx["current"] != workspace:
select_workspace(command[0], project_path, workspace_ctx["current"])
if state == 'absent' and workspace != 'default' and purge_workspace is True:
remove_workspace(command[0], project_path, workspace)
module.exit_json(changed=changed, state=state, workspace=workspace, outputs=outputs, stdout=out, stderr=err, command=' '.join(command))
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,479 |
AWS `lambda_info` module not returning any details
|
##### SUMMARY
I'm attempting to use the lambda_info module to fetch function details; however, I'm getting the error `AttributeError: 'module' object has no attribute 'all_details'`.
The error points to this line: https://github.com/ansible/ansible/blob/stable-2.9/lib/ansible/modules/cloud/amazon/lambda_info.py#L386
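This traceback shape is the signature of a string-based dispatch where the looked-up name is missing from the lookup target. An illustrative reduction (hypothetical names, not the module's actual code):
```python
import sys

def alias_details(client, module):
    """Stand-in for one of the real per-query helper functions."""
    return {}

# Hypothetical dispatch table: query names mapped to function names.
# "all" points at a name this file never defines, so the lookup fails.
invocations = {"aliases": "alias_details", "all": "all_details"}

this_module = sys.modules[__name__]
try:
    handler = getattr(this_module, invocations["all"])
except AttributeError as err:
    print(err)  # the reported AttributeError, modulo Python 2/3 wording
```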
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
cloud/amazon/lambda_info
##### ANSIBLE VERSION
```
ansible 2.9.0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/ubuntu/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0]
```
##### CONFIGURATION
No output.
```
```
##### OS / ENVIRONMENT
ubuntu 19.04
##### STEPS TO REPRODUCE
Attempting to call `lambda_info` does not return any details.
```yaml
---
- name: "Setup EC2 Keypair"
hosts: localhost
gather_facts: false
vars:
FunctionName: "LambdaFunctionName"
tasks:
- name: "Get Lambda"
shell: aws lambda get-function --function-name "{{ FunctionName }}" --region "ap-southeast-2"
- name: "Get Lambda Function Details"
lambda_info:
region: "ap-southeast-2"
name: "{{FunctionName}}"
register: lambda_function_output
- name: "Show Lambda Information"
debug:
msg: "{{ lambda_function_output }}"
```
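For comparison, the direct boto3 call behind the `aws lambda get-function` task above (a minimal sketch; assumes credentials are configured, and uses the region and function name from the playbook):
```python
import boto3

# Same API operation the CLI task performs; succeeds where lambda_info errors out.
client = boto3.client("lambda", region_name="ap-southeast-2")
response = client.get_function(FunctionName="LambdaFunctionName")
print(response["Configuration"]["FunctionName"])
```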
##### EXPECTED RESULTS
I expect an object containing the lambda definition.
##### ACTUAL RESULTS
The shell call to aws works as expected; however, the lambda_info module does not.
```
ansible-playbook 2.9.0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/ubuntu/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0]
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
Loading callback plugin default of type stdout, v2.0 from /usr/lib/python2.7/dist-packages/ansible/plugins/callback/default.pyc
PLAYBOOK: main.yaml ***************************************************************************************************************************************************************************************************************
Positional arguments: main.yaml
become_method: sudo
inventory: (u'/etc/ansible/hosts',)
forks: 5
tags: (u'all',)
extra_vars: (u'envtag=dev',)
verbosity: 4
ssh_common_args: -o StrictHostKeyChecking=no
connection: smart
timeout: 10
1 plays in main.yaml
PLAY [Setup EC2 Keypair] **********************************************************************************************************************************************************************************************************
META: ran handlers
TASK [Get Lambda] *****************************************************************************************************************************************************************************************************************
task path: /home/ubuntu/environment/ConnectEventStream/ansible/main.yaml:17
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: ubuntu
<127.0.0.1> EXEC /bin/sh -c 'echo ~ubuntu && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/ubuntu/.ansible/tmp/ansible-tmp-1573016645.99-133788815155799 `" && echo ansible-tmp-1573016645.99-133788815155799="` echo /home/ubuntu/.ansible/tmp/ansible-tmp-1573016645.99-133788815155799 `" ) && sleep 0'
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/commands/command.py
<127.0.0.1> PUT /home/ubuntu/.ansible/tmp/ansible-local-27933Azyogt/tmp5CUgJa TO /home/ubuntu/.ansible/tmp/ansible-tmp-1573016645.99-133788815155799/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/ubuntu/.ansible/tmp/ansible-tmp-1573016645.99-133788815155799/ /home/ubuntu/.ansible/tmp/ansible-tmp-1573016645.99-133788815155799/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python2 /home/ubuntu/.ansible/tmp/ansible-tmp-1573016645.99-133788815155799/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/ubuntu/.ansible/tmp/ansible-tmp-1573016645.99-133788815155799/ > /dev/null 2>&1 && sleep 0'
changed: [localhost] => {
"changed": true,
"cmd": "aws lambda get-function --function-name \"ConnectTraceRecordsStream\" --region \"ap-southeast-2\"",
"delta": "0:00:01.698189",
"end": "2019-11-06 05:04:07.964856",
"invocation": {
"module_args": {
"_raw_params": "aws lambda get-function --function-name \"ConnectTraceRecordsStream\" --region \"ap-southeast-2\"",
... REDACTED...
"stdout_lines": [
"{",
" \"Configuration\": {",
" \"FunctionName\": \"ConnectTraceRecordsStream\",",
... REDACTED...
}
TASK [Get Lambda Function Details] ************************************************************************************************************************************************************************************************
task path: /home/ubuntu/environment/ConnectEventStream/ansible/main.yaml:20
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: ubuntu
<127.0.0.1> EXEC /bin/sh -c 'echo ~ubuntu && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/ubuntu/.ansible/tmp/ansible-tmp-1573016648.01-175629843213723 `" && echo ansible-tmp-1573016648.01-175629843213723="` echo /home/ubuntu/.ansible/tmp/ansible-tmp-1573016648.01-175629843213723 `" ) && sleep 0'
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/cloud/amazon/lambda_info.py
<127.0.0.1> PUT /home/ubuntu/.ansible/tmp/ansible-local-27933Azyogt/tmpqUCnmY TO /home/ubuntu/.ansible/tmp/ansible-tmp-1573016648.01-175629843213723/AnsiballZ_lambda_info.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/ubuntu/.ansible/tmp/ansible-tmp-1573016648.01-175629843213723/ /home/ubuntu/.ansible/tmp/ansible-tmp-1573016648.01-175629843213723/AnsiballZ_lambda_info.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python2 /home/ubuntu/.ansible/tmp/ansible-tmp-1573016648.01-175629843213723/AnsiballZ_lambda_info.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/ubuntu/.ansible/tmp/ansible-tmp-1573016648.01-175629843213723/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
File "/home/ubuntu/.ansible/tmp/ansible-tmp-1573016648.01-175629843213723/AnsiballZ_lambda_info.py", line 102, in <module>
_ansiballz_main()
File "/home/ubuntu/.ansible/tmp/ansible-tmp-1573016648.01-175629843213723/AnsiballZ_lambda_info.py", line 94, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/ubuntu/.ansible/tmp/ansible-tmp-1573016648.01-175629843213723/AnsiballZ_lambda_info.py", line 40, in invoke_module
runpy.run_module(mod_name='ansible.modules.cloud.amazon.lambda_info', init_globals=None, run_name='__main__', alter_sys=False)
File "/usr/lib/python2.7/runpy.py", line 192, in run_module
fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/tmp/ansible_lambda_info_payload_yCkK_u/ansible_lambda_info_payload.zip/ansible/modules/cloud/amazon/lambda_info.py", line 398, in <module>
File "/tmp/ansible_lambda_info_payload_yCkK_u/ansible_lambda_info_payload.zip/ansible/modules/cloud/amazon/lambda_info.py", line 386, in main
AttributeError: 'module' object has no attribute 'all_details'
fatal: [localhost]: FAILED! => {
"changed": false,
"module_stderr": "Traceback (most recent call last):\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1573016648.01-175629843213723/AnsiballZ_lambda_info.py\", line 102, in <module>\n _ansiballz_main()\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1573016648.01-175629843213723/AnsiballZ_lambda_info.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1573016648.01-175629843213723/AnsiballZ_lambda_info.py\", line 40, in invoke_module\n runpy.run_module(mod_name='ansible.modules.cloud.amazon.lambda_info', init_globals=None, run_name='__main__', alter_sys=False)\n File \"/usr/lib/python2.7/runpy.py\", line 192, in run_module\n fname, loader, pkg_name)\n File \"/usr/lib/python2.7/runpy.py\", line 72, in _run_code\n exec code in run_globals\n File \"/tmp/ansible_lambda_info_payload_yCkK_u/ansible_lambda_info_payload.zip/ansible/modules/cloud/amazon/lambda_info.py\", line 398, in <module>\n File \"/tmp/ansible_lambda_info_payload_yCkK_u/ansible_lambda_info_payload.zip/ansible/modules/cloud/amazon/lambda_info.py\", line 386, in main\nAttributeError: 'module' object has no attribute 'all_details'\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
PLAY RECAP ************************************************************************************************************************************************************************************************************************
localhost : ok=1 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
npm ERR! code ELIFECYCLE
npm ERR! errno 2
npm ERR! [email protected] deploy: `cd ansible && ansible-playbook -e "envtag=dev" main.yaml -vvvv --ssh-common-args='-o StrictHostKeyChecking=no'`
npm ERR! Exit status 2
npm ERR!
npm ERR! Failed at the [email protected] deploy script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /home/ubuntu/.npm/_logs/2019-11-06T05_04_08_749Z-debug.log
```
|
https://github.com/ansible/ansible/issues/64479
|
https://github.com/ansible/ansible/pull/64548
|
a0e6bf366e107915a256e8f79c0a15ad7f2be228
|
6fa070e824ab4bcd694f78edba5ff2261a6384e8
| 2019-11-06T05:07:00Z |
python
| 2019-12-22T01:29:05Z |
lib/ansible/modules/cloud/amazon/lambda_info.py
|
#!/usr/bin/python
# This file is part of Ansible
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: lambda_info
short_description: Gathers AWS Lambda function details
description:
- Gathers various details related to Lambda functions, including aliases, versions and event source mappings.
- Use module M(lambda) to manage the lambda function itself, M(lambda_alias) to manage function aliases and
M(lambda_event) to manage lambda event source mappings.
version_added: "2.9"
options:
query:
description:
- Specifies the resource type for which to gather information. Leave blank to retrieve all information.
choices: [ "aliases", "all", "config", "mappings", "policy", "versions" ]
default: "all"
type: str
function_name:
description:
- The name of the lambda function for which information is requested.
aliases: [ "function", "name"]
type: str
event_source_arn:
description:
- When I(query=mappings), this is the Amazon Resource Name (ARN) of the Amazon Kinesis or DynamoDB stream.
type: str
author: Pierre Jodouin (@pjodouin)
requirements:
- boto3
extends_documentation_fragment:
- aws
- ec2
'''
EXAMPLES = '''
---
# Simple example of listing all info for a function
- name: List all for a specific function
lambda_info:
query: all
function_name: myFunction
register: my_function_details
# List all versions of a function
- name: List function versions
lambda_info:
query: versions
function_name: myFunction
register: my_function_versions
# List all lambda functions
- name: List all functions
lambda_info:
query: all
max_items: 20
register: output
- name: show Lambda information
debug:
msg: "{{ output['function'] }}"
'''
RETURN = '''
---
function:
description: lambda function list
returned: success
type: dict
function.TheName:
description: lambda function information, including event, mapping, and version information
returned: success
type: dict
'''
from ansible.module_utils.aws.core import AnsibleAWSModule
from ansible.module_utils.ec2 import camel_dict_to_snake_dict, get_aws_connection_info, boto3_conn
import json
import datetime
import sys
import re
try:
from botocore.exceptions import ClientError
except ImportError:
pass # protected by AnsibleAWSModule
def fix_return(node):
"""
fixup returned dictionary
:param node:
:return:
"""
if isinstance(node, datetime.datetime):
node_value = str(node)
elif isinstance(node, list):
node_value = [fix_return(item) for item in node]
elif isinstance(node, dict):
node_value = dict([(item, fix_return(node[item])) for item in node.keys()])
else:
node_value = node
return node_value
def alias_details(client, module):
"""
Returns list of aliases for a specified function.
:param client: AWS API client reference (boto3)
:param module: Ansible module reference
:return dict:
"""
lambda_info = dict()
function_name = module.params.get('function_name')
if function_name:
params = dict()
if module.params.get('max_items'):
params['MaxItems'] = module.params.get('max_items')
if module.params.get('next_marker'):
params['Marker'] = module.params.get('next_marker')
try:
lambda_info.update(aliases=client.list_aliases(FunctionName=function_name, **params)['Aliases'])
except ClientError as e:
if e.response['Error']['Code'] == 'ResourceNotFoundException':
lambda_info.update(aliases=[])
else:
module.fail_json_aws(e, msg="Trying to get aliases")
else:
module.fail_json(msg='Parameter function_name required for query=aliases.')
return {function_name: camel_dict_to_snake_dict(lambda_info)}
def all_details(client, module):
"""
Returns all lambda related facts.
:param client: AWS API client reference (boto3)
:param module: Ansible module reference
:return dict:
"""
if module.params.get('max_items') or module.params.get('next_marker'):
module.fail_json(msg='Cannot specify max_items nor next_marker for query=all.')
lambda_info = dict()
function_name = module.params.get('function_name')
if function_name:
lambda_info[function_name] = {}
lambda_info[function_name].update(config_details(client, module)[function_name])
lambda_info[function_name].update(alias_details(client, module)[function_name])
lambda_info[function_name].update(policy_details(client, module)[function_name])
lambda_info[function_name].update(version_details(client, module)[function_name])
lambda_info[function_name].update(mapping_details(client, module)[function_name])
else:
lambda_info.update(config_details(client, module))
return lambda_info
def config_details(client, module):
"""
Returns configuration details for one or all lambda functions.
:param client: AWS API client reference (boto3)
:param module: Ansible module reference
:return dict:
"""
lambda_info = dict()
function_name = module.params.get('function_name')
if function_name:
try:
lambda_info.update(client.get_function_configuration(FunctionName=function_name))
except ClientError as e:
if e.response['Error']['Code'] == 'ResourceNotFoundException':
lambda_info.update(function={})
else:
module.fail_json_aws(e, msg="Trying to get {0} configuration".format(function_name))
else:
params = dict()
if module.params.get('max_items'):
params['MaxItems'] = module.params.get('max_items')
if module.params.get('next_marker'):
params['Marker'] = module.params.get('next_marker')
try:
lambda_info.update(function_list=client.list_functions(**params)['Functions'])
except ClientError as e:
if e.response['Error']['Code'] == 'ResourceNotFoundException':
lambda_info.update(function_list=[])
else:
module.fail_json_aws(e, msg="Trying to get function list")
functions = dict()
for func in lambda_info.pop('function_list', []):
functions[func['FunctionName']] = camel_dict_to_snake_dict(func)
return functions
return {function_name: camel_dict_to_snake_dict(lambda_info)}
def mapping_details(client, module):
"""
Returns all lambda event source mappings.
:param client: AWS API client reference (boto3)
:param module: Ansible module reference
:return dict:
"""
lambda_info = dict()
params = dict()
function_name = module.params.get('function_name')
if function_name:
params['FunctionName'] = module.params.get('function_name')
if module.params.get('event_source_arn'):
params['EventSourceArn'] = module.params.get('event_source_arn')
if module.params.get('max_items'):
params['MaxItems'] = module.params.get('max_items')
if module.params.get('next_marker'):
params['Marker'] = module.params.get('next_marker')
try:
lambda_info.update(mappings=client.list_event_source_mappings(**params)['EventSourceMappings'])
except ClientError as e:
if e.response['Error']['Code'] == 'ResourceNotFoundException':
lambda_info.update(mappings=[])
else:
module.fail_json_aws(e, msg="Trying to get source event mappings")
if function_name:
return {function_name: camel_dict_to_snake_dict(lambda_info)}
return camel_dict_to_snake_dict(lambda_info)
def policy_details(client, module):
"""
Returns policy attached to a lambda function.
:param client: AWS API client reference (boto3)
:param module: Ansible module reference
:return dict:
"""
if module.params.get('max_items') or module.params.get('next_marker'):
module.fail_json(msg='Cannot specify max_items nor next_marker for query=policy.')
lambda_info = dict()
function_name = module.params.get('function_name')
if function_name:
try:
# get_policy returns a JSON string so must convert to dict before reassigning to its key
lambda_info.update(policy=json.loads(client.get_policy(FunctionName=function_name)['Policy']))
except ClientError as e:
if e.response['Error']['Code'] == 'ResourceNotFoundException':
lambda_info.update(policy={})
else:
module.fail_json_aws(e, msg="Trying to get {0} policy".format(function_name))
else:
module.fail_json(msg='Parameter function_name required for query=policy.')
return {function_name: camel_dict_to_snake_dict(lambda_info)}
def version_details(client, module):
"""
Returns all lambda function versions.
:param client: AWS API client reference (boto3)
:param module: Ansible module reference
:return dict:
"""
lambda_info = dict()
function_name = module.params.get('function_name')
if function_name:
params = dict()
if module.params.get('max_items'):
params['MaxItems'] = module.params.get('max_items')
if module.params.get('next_marker'):
params['Marker'] = module.params.get('next_marker')
try:
lambda_info.update(versions=client.list_versions_by_function(FunctionName=function_name, **params)['Versions'])
except ClientError as e:
if e.response['Error']['Code'] == 'ResourceNotFoundException':
lambda_info.update(versions=[])
else:
module.fail_json_aws(e, msg="Trying to get {0} versions".format(function_name))
else:
module.fail_json(msg='Parameter function_name required for query=versions.')
return {function_name: camel_dict_to_snake_dict(lambda_info)}
def main():
"""
Main entry point.
:return dict: ansible facts
"""
argument_spec = dict(
function_name=dict(required=False, default=None, aliases=['function', 'name']),
query=dict(required=False, choices=['aliases', 'all', 'config', 'mappings', 'policy', 'versions'], default='all'),
event_source_arn=dict(required=False, default=None)
)
module = AnsibleAWSModule(
argument_spec=argument_spec,
supports_check_mode=True,
mutually_exclusive=[],
required_together=[]
)
# validate function_name if present
function_name = module.params['function_name']
if function_name:
if not re.search(r"^[\w\-:]+$", function_name):
module.fail_json(
msg='Function name {0} is invalid. Names must contain only alphanumeric characters, hyphens, underscores and colons.'.format(function_name)
)
if len(function_name) > 64:
module.fail_json(msg='Function name "{0}" exceeds 64 character limit'.format(function_name))
try:
region, endpoint, aws_connect_kwargs = get_aws_connection_info(module, boto3=True)
aws_connect_kwargs.update(dict(region=region,
endpoint=endpoint,
conn_type='client',
resource='lambda'
))
client = boto3_conn(module, **aws_connect_kwargs)
except ClientError as e:
module.fail_json_aws(e, "trying to set up boto connection")
this_module = sys.modules[__name__]
invocations = dict(
aliases='alias_details',
all='all_details',
config='config_details',
mappings='mapping_details',
policy='policy_details',
versions='version_details',
)
this_module_function = getattr(this_module, invocations[module.params['query']])
all_facts = fix_return(this_module_function(client, module))
results = dict(function=all_facts, changed=False)
if module.check_mode:
results['msg'] = 'Check mode set but ignored for fact gathering only.'
module.exit_json(**results)
if __name__ == '__main__':
main()
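# --- Editor's note (not part of the original file) -------------------------
# The AttributeError reported in the issue record below ("'module' object has
# no attribute 'all_details'") stems from the string-based dispatch above:
# under AnsiballZ the module is executed via runpy with __name__ == '__main__'
# while sys.modules['__main__'] still points at the wrapper script, so
# getattr(this_module, 'all_details') looks up the wrong module object.
# A minimal sketch of a safer dispatch (assuming the helper functions are
# defined above, as they are here) maps query names straight to function
# objects, so no module lookup is needed and a missing handler fails loudly
# at definition time:
#
#     invocations = dict(
#         aliases=alias_details,
#         all=all_details,
#         config=config_details,
#         mappings=mapping_details,
#         policy=policy_details,
#         versions=version_details,
#     )
#     this_module_function = invocations[module.params['query']]
#     all_facts = fix_return(this_module_function(client, module))
# ----------------------------------------------------------------------------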
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,479 |
AWS `lambda_info` module not returning any details
|
##### SUMMARY
I'm attempting to use the lambda_info module to fetch function details, but I get the error `AttributeError: 'module' object has no attribute 'all_details'`.
The error points to this line: https://github.com/ansible/ansible/blob/stable-2.9/lib/ansible/modules/cloud/amazon/lambda_info.py#L386
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
cloud/amazon/lambda_info
##### ANSIBLE VERSION
```
ansible 2.9.0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/ubuntu/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0]
```
##### CONFIGURATION
(no output)
```
```
##### OS / ENVIRONMENT
ubuntu 19.04
##### STEPS TO REPRODUCE
Attempting to call `lambda_info` does not return any details.
```yaml
---
- name: "Setup EC2 Keypair"
hosts: localhost
gather_facts: false
vars:
FunctionName: "LambdaFunctionName"
tasks:
- name: "Get Lambda"
shell: aws lambda get-function --function-name "{{ FunctionName }}" --region "ap-southeast-2"
- name: "Get Lambda Function Details"
lambda_info:
region: "ap-southeast-2"
name: "{{FunctionName}}"
register: lambda_function_output
- name: "Show Lambda Information"
debug:
msg: "{{ lambda_function_output }}"
```
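Until the module is fixed, a minimal workaround sketch is to query the Lambda API directly with boto3, which is what the module wraps. The function name and region are taken from the play above, and this assumes AWS credentials are available through the usual environment/config chain:

```python
import boto3

# Query the function configuration directly, bypassing the broken module.
client = boto3.client("lambda", region_name="ap-southeast-2")
details = client.get_function(FunctionName="LambdaFunctionName")
print(details["Configuration"]["FunctionName"])
```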
##### EXPECTED RESULTS
I expect an object containing the lambda function definition.
##### ACTUAL RESULTS
The shell call to the `aws` CLI works as expected; the `lambda_info` module does not.
```
ansible-playbook 2.9.0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/ubuntu/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0]
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
Loading callback plugin default of type stdout, v2.0 from /usr/lib/python2.7/dist-packages/ansible/plugins/callback/default.pyc
PLAYBOOK: main.yaml ***************************************************************************************************************************************************************************************************************
Positional arguments: main.yaml
become_method: sudo
inventory: (u'/etc/ansible/hosts',)
forks: 5
tags: (u'all',)
extra_vars: (u'envtag=dev',)
verbosity: 4
ssh_common_args: -o StrictHostKeyChecking=no
connection: smart
timeout: 10
1 plays in main.yaml
PLAY [Setup EC2 Keypair] **********************************************************************************************************************************************************************************************************
META: ran handlers
TASK [Get Lambda] *****************************************************************************************************************************************************************************************************************
task path: /home/ubuntu/environment/ConnectEventStream/ansible/main.yaml:17
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: ubuntu
<127.0.0.1> EXEC /bin/sh -c 'echo ~ubuntu && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/ubuntu/.ansible/tmp/ansible-tmp-1573016645.99-133788815155799 `" && echo ansible-tmp-1573016645.99-133788815155799="` echo /home/ubuntu/.ansible/tmp/ansible-tmp-1573016645.99-133788815155799 `" ) && sleep 0'
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/commands/command.py
<127.0.0.1> PUT /home/ubuntu/.ansible/tmp/ansible-local-27933Azyogt/tmp5CUgJa TO /home/ubuntu/.ansible/tmp/ansible-tmp-1573016645.99-133788815155799/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/ubuntu/.ansible/tmp/ansible-tmp-1573016645.99-133788815155799/ /home/ubuntu/.ansible/tmp/ansible-tmp-1573016645.99-133788815155799/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python2 /home/ubuntu/.ansible/tmp/ansible-tmp-1573016645.99-133788815155799/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/ubuntu/.ansible/tmp/ansible-tmp-1573016645.99-133788815155799/ > /dev/null 2>&1 && sleep 0'
changed: [localhost] => {
"changed": true,
"cmd": "aws lambda get-function --function-name \"ConnectTraceRecordsStream\" --region \"ap-southeast-2\"",
"delta": "0:00:01.698189",
"end": "2019-11-06 05:04:07.964856",
"invocation": {
"module_args": {
"_raw_params": "aws lambda get-function --function-name \"ConnectTraceRecordsStream\" --region \"ap-southeast-2\"",
... REDACTED...
"stdout_lines": [
"{",
" \"Configuration\": {",
" \"FunctionName\": \"ConnectTraceRecordsStream\",",
... REDACTED...
}
TASK [Get Lambda Function Details] ************************************************************************************************************************************************************************************************
task path: /home/ubuntu/environment/ConnectEventStream/ansible/main.yaml:20
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: ubuntu
<127.0.0.1> EXEC /bin/sh -c 'echo ~ubuntu && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/ubuntu/.ansible/tmp/ansible-tmp-1573016648.01-175629843213723 `" && echo ansible-tmp-1573016648.01-175629843213723="` echo /home/ubuntu/.ansible/tmp/ansible-tmp-1573016648.01-175629843213723 `" ) && sleep 0'
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/cloud/amazon/lambda_info.py
<127.0.0.1> PUT /home/ubuntu/.ansible/tmp/ansible-local-27933Azyogt/tmpqUCnmY TO /home/ubuntu/.ansible/tmp/ansible-tmp-1573016648.01-175629843213723/AnsiballZ_lambda_info.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/ubuntu/.ansible/tmp/ansible-tmp-1573016648.01-175629843213723/ /home/ubuntu/.ansible/tmp/ansible-tmp-1573016648.01-175629843213723/AnsiballZ_lambda_info.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python2 /home/ubuntu/.ansible/tmp/ansible-tmp-1573016648.01-175629843213723/AnsiballZ_lambda_info.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/ubuntu/.ansible/tmp/ansible-tmp-1573016648.01-175629843213723/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
File "/home/ubuntu/.ansible/tmp/ansible-tmp-1573016648.01-175629843213723/AnsiballZ_lambda_info.py", line 102, in <module>
_ansiballz_main()
File "/home/ubuntu/.ansible/tmp/ansible-tmp-1573016648.01-175629843213723/AnsiballZ_lambda_info.py", line 94, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/ubuntu/.ansible/tmp/ansible-tmp-1573016648.01-175629843213723/AnsiballZ_lambda_info.py", line 40, in invoke_module
runpy.run_module(mod_name='ansible.modules.cloud.amazon.lambda_info', init_globals=None, run_name='__main__', alter_sys=False)
File "/usr/lib/python2.7/runpy.py", line 192, in run_module
fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/tmp/ansible_lambda_info_payload_yCkK_u/ansible_lambda_info_payload.zip/ansible/modules/cloud/amazon/lambda_info.py", line 398, in <module>
File "/tmp/ansible_lambda_info_payload_yCkK_u/ansible_lambda_info_payload.zip/ansible/modules/cloud/amazon/lambda_info.py", line 386, in main
AttributeError: 'module' object has no attribute 'all_details'
fatal: [localhost]: FAILED! => {
"changed": false,
"module_stderr": "Traceback (most recent call last):\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1573016648.01-175629843213723/AnsiballZ_lambda_info.py\", line 102, in <module>\n _ansiballz_main()\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1573016648.01-175629843213723/AnsiballZ_lambda_info.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1573016648.01-175629843213723/AnsiballZ_lambda_info.py\", line 40, in invoke_module\n runpy.run_module(mod_name='ansible.modules.cloud.amazon.lambda_info', init_globals=None, run_name='__main__', alter_sys=False)\n File \"/usr/lib/python2.7/runpy.py\", line 192, in run_module\n fname, loader, pkg_name)\n File \"/usr/lib/python2.7/runpy.py\", line 72, in _run_code\n exec code in run_globals\n File \"/tmp/ansible_lambda_info_payload_yCkK_u/ansible_lambda_info_payload.zip/ansible/modules/cloud/amazon/lambda_info.py\", line 398, in <module>\n File \"/tmp/ansible_lambda_info_payload_yCkK_u/ansible_lambda_info_payload.zip/ansible/modules/cloud/amazon/lambda_info.py\", line 386, in main\nAttributeError: 'module' object has no attribute 'all_details'\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
PLAY RECAP ************************************************************************************************************************************************************************************************************************
localhost : ok=1 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
npm ERR! code ELIFECYCLE
npm ERR! errno 2
npm ERR! [email protected] deploy: `cd ansible && ansible-playbook -e "envtag=dev" main.yaml -vvvv --ssh-common-args='-o StrictHostKeyChecking=no'`
npm ERR! Exit status 2
npm ERR!
npm ERR! Failed at the [email protected] deploy script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /home/ubuntu/.npm/_logs/2019-11-06T05_04_08_749Z-debug.log
```
|
https://github.com/ansible/ansible/issues/64479
|
https://github.com/ansible/ansible/pull/64548
|
a0e6bf366e107915a256e8f79c0a15ad7f2be228
|
6fa070e824ab4bcd694f78edba5ff2261a6384e8
| 2019-11-06T05:07:00Z |
python
| 2019-12-22T01:29:05Z |
test/integration/targets/aws_lambda/aliases
|
cloud/aws
shippable/aws/group2
execute_lambda
lambda
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,479 |
AWS `lambda_info` module not returning any details
|
##### SUMMARY
I'm attempting to use the lambda_info module to fetch function details, but I get the error `AttributeError: 'module' object has no attribute 'all_details'`.
The error points to this line: https://github.com/ansible/ansible/blob/stable-2.9/lib/ansible/modules/cloud/amazon/lambda_info.py#L386
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
cloud/amazon/lambda_info
##### ANSIBLE VERSION
```
ansible 2.9.0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/ubuntu/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0]
```
##### CONFIGURATION
(no output)
```
```
##### OS / ENVIRONMENT
ubuntu 19.04
##### STEPS TO REPRODUCE
Attempting to call `lambda_info` does not return any details.
```yaml
---
- name: "Setup EC2 Keypair"
hosts: localhost
gather_facts: false
vars:
FunctionName: "LambdaFunctionName"
tasks:
- name: "Get Lambda"
shell: aws lambda get-function --function-name "{{ FunctionName }}" --region "ap-southeast-2"
- name: "Get Lambda Function Details"
lambda_info:
region: "ap-southeast-2"
name: "{{FunctionName}}"
register: lambda_function_output
- name: "Show Lambda Information"
debug:
msg: "{{ lambda_function_output }}"
```
##### EXPECTED RESULTS
I expect an object containing the lambda function definition.
##### ACTUAL RESULTS
The shell call to the `aws` CLI works as expected; the `lambda_info` module does not.
```
ansible-playbook 2.9.0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/ubuntu/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0]
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
Loading callback plugin default of type stdout, v2.0 from /usr/lib/python2.7/dist-packages/ansible/plugins/callback/default.pyc
PLAYBOOK: main.yaml ***************************************************************************************************************************************************************************************************************
Positional arguments: main.yaml
become_method: sudo
inventory: (u'/etc/ansible/hosts',)
forks: 5
tags: (u'all',)
extra_vars: (u'envtag=dev',)
verbosity: 4
ssh_common_args: -o StrictHostKeyChecking=no
connection: smart
timeout: 10
1 plays in main.yaml
PLAY [Setup EC2 Keypair] **********************************************************************************************************************************************************************************************************
META: ran handlers
TASK [Get Lambda] *****************************************************************************************************************************************************************************************************************
task path: /home/ubuntu/environment/ConnectEventStream/ansible/main.yaml:17
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: ubuntu
<127.0.0.1> EXEC /bin/sh -c 'echo ~ubuntu && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/ubuntu/.ansible/tmp/ansible-tmp-1573016645.99-133788815155799 `" && echo ansible-tmp-1573016645.99-133788815155799="` echo /home/ubuntu/.ansible/tmp/ansible-tmp-1573016645.99-133788815155799 `" ) && sleep 0'
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/commands/command.py
<127.0.0.1> PUT /home/ubuntu/.ansible/tmp/ansible-local-27933Azyogt/tmp5CUgJa TO /home/ubuntu/.ansible/tmp/ansible-tmp-1573016645.99-133788815155799/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/ubuntu/.ansible/tmp/ansible-tmp-1573016645.99-133788815155799/ /home/ubuntu/.ansible/tmp/ansible-tmp-1573016645.99-133788815155799/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python2 /home/ubuntu/.ansible/tmp/ansible-tmp-1573016645.99-133788815155799/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/ubuntu/.ansible/tmp/ansible-tmp-1573016645.99-133788815155799/ > /dev/null 2>&1 && sleep 0'
changed: [localhost] => {
"changed": true,
"cmd": "aws lambda get-function --function-name \"ConnectTraceRecordsStream\" --region \"ap-southeast-2\"",
"delta": "0:00:01.698189",
"end": "2019-11-06 05:04:07.964856",
"invocation": {
"module_args": {
"_raw_params": "aws lambda get-function --function-name \"ConnectTraceRecordsStream\" --region \"ap-southeast-2\"",
... REDACTED...
"stdout_lines": [
"{",
" \"Configuration\": {",
" \"FunctionName\": \"ConnectTraceRecordsStream\",",
... REDACTED...
}
TASK [Get Lambda Function Details] ************************************************************************************************************************************************************************************************
task path: /home/ubuntu/environment/ConnectEventStream/ansible/main.yaml:20
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: ubuntu
<127.0.0.1> EXEC /bin/sh -c 'echo ~ubuntu && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/ubuntu/.ansible/tmp/ansible-tmp-1573016648.01-175629843213723 `" && echo ansible-tmp-1573016648.01-175629843213723="` echo /home/ubuntu/.ansible/tmp/ansible-tmp-1573016648.01-175629843213723 `" ) && sleep 0'
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/cloud/amazon/lambda_info.py
<127.0.0.1> PUT /home/ubuntu/.ansible/tmp/ansible-local-27933Azyogt/tmpqUCnmY TO /home/ubuntu/.ansible/tmp/ansible-tmp-1573016648.01-175629843213723/AnsiballZ_lambda_info.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/ubuntu/.ansible/tmp/ansible-tmp-1573016648.01-175629843213723/ /home/ubuntu/.ansible/tmp/ansible-tmp-1573016648.01-175629843213723/AnsiballZ_lambda_info.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python2 /home/ubuntu/.ansible/tmp/ansible-tmp-1573016648.01-175629843213723/AnsiballZ_lambda_info.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/ubuntu/.ansible/tmp/ansible-tmp-1573016648.01-175629843213723/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
File "/home/ubuntu/.ansible/tmp/ansible-tmp-1573016648.01-175629843213723/AnsiballZ_lambda_info.py", line 102, in <module>
_ansiballz_main()
File "/home/ubuntu/.ansible/tmp/ansible-tmp-1573016648.01-175629843213723/AnsiballZ_lambda_info.py", line 94, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/ubuntu/.ansible/tmp/ansible-tmp-1573016648.01-175629843213723/AnsiballZ_lambda_info.py", line 40, in invoke_module
runpy.run_module(mod_name='ansible.modules.cloud.amazon.lambda_info', init_globals=None, run_name='__main__', alter_sys=False)
File "/usr/lib/python2.7/runpy.py", line 192, in run_module
fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/tmp/ansible_lambda_info_payload_yCkK_u/ansible_lambda_info_payload.zip/ansible/modules/cloud/amazon/lambda_info.py", line 398, in <module>
File "/tmp/ansible_lambda_info_payload_yCkK_u/ansible_lambda_info_payload.zip/ansible/modules/cloud/amazon/lambda_info.py", line 386, in main
AttributeError: 'module' object has no attribute 'all_details'
fatal: [localhost]: FAILED! => {
"changed": false,
"module_stderr": "Traceback (most recent call last):\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1573016648.01-175629843213723/AnsiballZ_lambda_info.py\", line 102, in <module>\n _ansiballz_main()\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1573016648.01-175629843213723/AnsiballZ_lambda_info.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1573016648.01-175629843213723/AnsiballZ_lambda_info.py\", line 40, in invoke_module\n runpy.run_module(mod_name='ansible.modules.cloud.amazon.lambda_info', init_globals=None, run_name='__main__', alter_sys=False)\n File \"/usr/lib/python2.7/runpy.py\", line 192, in run_module\n fname, loader, pkg_name)\n File \"/usr/lib/python2.7/runpy.py\", line 72, in _run_code\n exec code in run_globals\n File \"/tmp/ansible_lambda_info_payload_yCkK_u/ansible_lambda_info_payload.zip/ansible/modules/cloud/amazon/lambda_info.py\", line 398, in <module>\n File \"/tmp/ansible_lambda_info_payload_yCkK_u/ansible_lambda_info_payload.zip/ansible/modules/cloud/amazon/lambda_info.py\", line 386, in main\nAttributeError: 'module' object has no attribute 'all_details'\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
PLAY RECAP ************************************************************************************************************************************************************************************************************************
localhost : ok=1 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
npm ERR! code ELIFECYCLE
npm ERR! errno 2
npm ERR! [email protected] deploy: `cd ansible && ansible-playbook -e "envtag=dev" main.yaml -vvvv --ssh-common-args='-o StrictHostKeyChecking=no'`
npm ERR! Exit status 2
npm ERR!
npm ERR! Failed at the [email protected] deploy script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /home/ubuntu/.npm/_logs/2019-11-06T05_04_08_749Z-debug.log
```
|
https://github.com/ansible/ansible/issues/64479
|
https://github.com/ansible/ansible/pull/64548
|
a0e6bf366e107915a256e8f79c0a15ad7f2be228
|
6fa070e824ab4bcd694f78edba5ff2261a6384e8
| 2019-11-06T05:07:00Z |
python
| 2019-12-22T01:29:05Z |
test/integration/targets/aws_lambda/tasks/main.yml
|
---
# tasks file for aws_lambda test
- name: set connection information for AWS modules and run tests
module_defaults:
group/aws:
aws_access_key: "{{ aws_access_key }}"
aws_secret_key: "{{ aws_secret_key }}"
security_token: "{{ security_token | default(omit) }}"
region: "{{ aws_region }}"
block:
# ============================================================
- name: test with no parameters
lambda:
register: result
ignore_errors: true
- name: assert failure when called with no parameters
assert:
that:
- 'result.failed'
- 'result.msg.startswith("missing required arguments: name")'
# ============================================================
- name: test with no parameters except state absent
lambda:
state: absent
register: result
ignore_errors: true
- name: assert failure when called with no parameters
assert:
that:
- 'result.failed'
- 'result.msg.startswith("missing required arguments: name")'
# ============================================================
- name: test with no role or handler
lambda:
name: ansible-testing-fake-should-not-be-created
runtime: "python2.7"
register: result
ignore_errors: true
- name: assert failure when called with no parameters
assert:
that:
- 'result.failed'
- 'result.msg.startswith("state is present but all of the following are missing: handler")'
# ============================================================
- name: test with all module required variables but no region
lambda:
name: ansible-testing-fake-should-not-be-created
runtime: "python2.7"
handler: "no-handler"
role: "arn:fake-role-doesnt-exist"
region:
register: result
ignore_errors: true
- name: assert failure when called with only 'name'
assert:
that:
- 'result.failed'
- 'result.msg == "region must be specified"'
# ============================================================
- name: test with all module required variables, no region and all possible variables set to blank
lambda:
name: ansible-testing-fake-should-not-be-created
state: present
runtime: "python2.7"
role: arn:fake-role-doesnt-exist
handler:
s3_bucket:
s3_key:
s3_object_version:
description:
vpc_subnet_ids:
vpc_security_group_ids:
environment_variables:
dead_letter_arn:
region:
register: result
ignore_errors: true
- name: assert failure when called with only 'name'
assert:
that:
- 'result.failed'
- 'result.msg == "region must be specified"'
# ============================================================
# direct zip file upload
- name: move lambda into place for archive module
copy:
src: "mini_lambda.py"
dest: "{{output_dir}}/mini_lambda.py"
- name: bundle lambda into a zip
archive:
format: zip
path: "{{output_dir}}/mini_lambda.py"
dest: "{{output_dir}}/mini_lambda.zip"
register: zip_res
- name: test state=present - upload the lambda
lambda:
name: "{{lambda_function_name}}"
runtime: "python2.7"
handler: "mini_lambda.handler"
role: "ansible_lambda_role"
zip_file: "{{zip_res.dest}}"
register: result
- name: assert lambda upload succeeded
assert:
that:
- result is not failed
- result.configuration.tracing_config.mode == "PassThrough"
- name: test lambda works
execute_lambda:
name: "{{lambda_function_name}}"
payload:
name: "Mr Ansible Tests"
register: result
- name: assert lambda manages to respond as expected
assert:
that:
- 'result is not failed'
- 'result.result.output.message == "hello Mr Ansible Tests"'
- name: test lambda config updates
lambda:
name: "{{lambda_function_name}}"
runtime: "nodejs10.x"
tracing_mode: 'Active'
handler: "mini_lambda.handler"
role: "ansible_lambda_role"
register: update_result
- name: assert that update succeeded
assert:
that:
- update_result is not failed
- update_result.changed == True
- update_result.configuration.runtime == 'nodejs10.x'
- update_result.configuration.tracing_config.mode == 'Active'
- name: test no changes are made with the same parameters
lambda:
name: "{{lambda_function_name}}"
runtime: "nodejs10.x"
tracing_mode: 'Active'
handler: "mini_lambda.handler"
role: "ansible_lambda_role"
register: update_result
- name: assert that update succeeded
assert:
that:
- update_result is not failed
- update_result.changed == False
- update_result.configuration.runtime == 'nodejs10.x'
- update_result.configuration.tracing_config.mode == 'Active'
- name: reset config updates for the following tests
lambda:
name: "{{lambda_function_name}}"
runtime: "python2.7"
tracing_mode: 'PassThrough'
handler: "mini_lambda.handler"
role: "ansible_lambda_role"
register: result
- name: assert that reset succeeded
assert:
that:
- result is not failed
- result.changed == True
- result.configuration.runtime == 'python2.7'
- result.configuration.tracing_config.mode == 'PassThrough'
# ============================================================
- name: test state=present with security group but no vpc
lambda:
name: "{{lambda_function_name}}"
runtime: "python2.7"
role: "ansible_lambda_role"
zip_file: "{{zip_res.dest}}"
handler:
description:
vpc_subnet_ids:
vpc_security_group_ids: sg-FA6E
environment_variables:
dead_letter_arn:
register: result
ignore_errors: true
- name: assert lambda fails with proper message
assert:
that:
- 'result is failed'
- 'result.msg != "MODULE FAILURE"'
- 'result.changed == False'
- '"requires at least one security group and one subnet" in result.msg'
# ============================================================
- name: test state=present with all nullable variables explicitly set to null
lambda:
name: "{{lambda_function_name}}"
runtime: "python2.7"
role: "ansible_lambda_role"
zip_file: "{{zip_res.dest}}"
handler: "mini_lambda.handler"
# These are not allowed because of mutually exclusive.
# s3_bucket:
# s3_key:
# s3_object_version:
description:
vpc_subnet_ids:
vpc_security_group_ids:
environment_variables:
dead_letter_arn:
register: result
- name: assert lambda remains as before
assert:
that:
- 'result is not failed'
- 'result.changed == False'
# ============================================================
- name: test putting an environment variable changes lambda
lambda:
name: "{{lambda_function_name}}"
runtime: "python2.7"
handler: "mini_lambda.handler"
role: "ansible_lambda_role"
zip_file: "{{zip_res.dest}}"
environment_variables:
EXTRA_MESSAGE: "I think you are great!!"
register: result
- name: assert lambda upload succeeded
assert:
that:
- 'result is not failed'
- 'result.changed == True'
- name: test lambda works
execute_lambda:
name: "{{lambda_function_name}}"
payload:
name: "Mr Ansible Tests"
security_token: '{{security_token}}'
register: result
- name: assert lambda manages to respond as expected
assert:
that:
- 'result is not failed'
- 'result.result.output.message == "hello Mr Ansible Tests. I think you are great!!"'
# ============================================================
- name: test state=present triggering a network exception due to bad url
lambda:
name: "{{lambda_function_name}}"
runtime: "python2.7"
role: "ansible_lambda_role"
ec2_url: https://noexist.example.com
ec2_region: '{{ec2_region}}'
ec2_access_key: 'iamnotreallyanaccesskey'
ec2_secret_key: 'thisisabadsecretkey'
security_token: 'andthisisabadsecuritytoken'
zip_file: "{{zip_res.dest}}"
register: result
ignore_errors: true
- name: assert lambda manages to respond as expected
assert:
that:
- 'result is failed'
- 'result.changed == False'
# ============================================================
- name: test state=absent (expect changed=True)
lambda:
name: "{{lambda_function_name}}"
state: absent
register: result
- name: assert state=absent
assert:
that:
- 'result is not failed'
- 'result.changed == True'
# ============================================================
# parallel lambda creation
- name: parallel lambda creation 1/4
lambda:
name: "{{lambda_function_name}}_1"
runtime: "python2.7"
handler: "mini_lambda.handler"
role: "ansible_lambda_role"
zip_file: "{{zip_res.dest}}"
async: 1000
register: async_1
- name: parallel lambda creation 2/4
lambda:
name: "{{lambda_function_name}}_2"
runtime: "python2.7"
handler: "mini_lambda.handler"
role: "ansible_lambda_role"
zip_file: "{{zip_res.dest}}"
async: 1000
register: async_2
- name: parallel lambda creation 3/4
lambda:
name: "{{lambda_function_name}}_3"
runtime: "python2.7"
handler: "mini_lambda.handler"
role: "ansible_lambda_role"
zip_file: "{{zip_res.dest}}"
async: 1000
register: async_3
- name: parallel lambda creation 4/4
lambda:
name: "{{lambda_function_name}}_4"
runtime: "python2.7"
handler: "mini_lambda.handler"
role: "ansible_lambda_role"
zip_file: "{{zip_res.dest}}"
register: result
- name: assert lambda creation has succeeded
assert:
that:
- 'result is not failed'
- name: wait for async job 1
async_status: jid={{ async_1.ansible_job_id }}
register: job_result
until: job_result is finished
retries: 30
- name: wait for async job 2
async_status: jid={{ async_2.ansible_job_id }}
register: job_result
until: job_result is finished
retries: 30
- name: wait for async job 3
async_status: jid={{ async_3.ansible_job_id }}
register: job_result
until: job_result is finished
retries: 30
- name: parallel lambda deletion 1/4
lambda:
name: "{{lambda_function_name}}_1"
state: absent
zip_file: "{{zip_res.dest}}"
async: 1000
register: async_1
- name: parallel lambda deletion 2/4
lambda:
name: "{{lambda_function_name}}_2"
state: absent
zip_file: "{{zip_res.dest}}"
async: 1000
register: async_2
- name: parallel lambda deletion 3/4
lambda:
name: "{{lambda_function_name}}_3"
state: absent
zip_file: "{{zip_res.dest}}"
async: 1000
register: async_3
- name: parallel lambda deletion 4/4
lambda:
name: "{{lambda_function_name}}_4"
state: absent
zip_file: "{{zip_res.dest}}"
register: result
- name: assert lambda deletion has succeeded
assert:
that:
- 'result is not failed'
- name: wait for async job 1
async_status: jid={{ async_1.ansible_job_id }}
register: job_result
until: job_result is finished
retries: 30
- name: wait for async job 2
async_status: jid={{ async_2.ansible_job_id }}
register: job_result
until: job_result is finished
retries: 30
- name: wait for async job 3
async_status: jid={{ async_3.ansible_job_id }}
register: job_result
until: job_result is finished
retries: 30
# ============================================================
always:
- name: ensure function is absent at end of test
lambda:
name: "{{lambda_function_name}}"
state: absent
ignore_errors: true
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,682 |
Pacman module throws "get_name: fail to retrieve package name from pacman output" for packages that have been already installed
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
The pacman module throws "get_name: fail to retrieve package name from pacman output" for packages that are already installed. This error completely prevents playbooks from executing.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
Pacman
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/tomek/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.4 (default, Oct 4 2019, 06:57:26) [GCC 9.2.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Arch Linux
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Execute any playbook on Arch Linux that uses the pacman module on a package that is already installed.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The pacman module shouldn't throw an error and break execution of the playbook.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The pacman module throws an error and breaks execution of the playbook.
<!--- Paste verbatim command output between quotes -->
```paste below
```
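A minimal sketch of how this failure can occur, assuming a non-English locale is in play (the translated field label below is purely illustrative): `get_name()` in pacman.py scans `pacman --query --info` output for a line starting with `Name `, so any localized label makes the scan come up empty and the module calls `fail_json()` with the reported message.

```python
# Hypothetical localized `pacman -Qi` output; "Nom" stands in for a
# translated "Name" label (French shown purely for illustration).
sample = "Nom             : ansible\nVersion         : 2.9.0-1\n"

# This mirrors the scan in pacman.py's get_name(): no line starts with
# "Name ", so the module reports the failure from the issue title.
matches = [line for line in sample.split('\n') if line.startswith('Name ')]
assert matches == []
```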
|
https://github.com/ansible/ansible/issues/64682
|
https://github.com/ansible/ansible/pull/65878
|
22d93d9496e08acac5a0063e641900efe22ef013
|
cd0e038b746f426003ba7177f5ff4882db9dfe9b
| 2019-11-11T20:38:32Z |
python
| 2019-12-23T03:41:39Z |
lib/ansible/modules/packaging/os/pacman.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Afterburn <https://github.com/afterburn>
# Copyright: (c) 2013, Aaron Bull Schaefer <[email protected]>
# Copyright: (c) 2015, Indrajit Raychaudhuri <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: pacman
short_description: Manage packages with I(pacman)
description:
- Manage packages with the I(pacman) package manager, which is used by Arch Linux and its variants.
version_added: "1.0"
author:
- Indrajit Raychaudhuri (@indrajitr)
- Aaron Bull Schaefer (@elasticdog) <[email protected]>
- Maxime de Roucy (@tchernomax)
options:
name:
description:
- Name or list of names of the package(s) or file(s) to install, upgrade, or remove.
Can't be used in combination with C(upgrade).
aliases: [ package, pkg ]
type: list
elements: str
state:
description:
- Desired state of the package.
default: present
choices: [ absent, latest, present ]
force:
description:
- When removing package, force remove package, without any checks.
Same as `extra_args="--nodeps --nodeps"`.
When update_cache, force redownload repo databases.
Same as `update_cache_extra_args="--refresh --refresh"`.
default: no
type: bool
version_added: "2.0"
extra_args:
description:
- Additional option to pass to pacman when enforcing C(state).
default:
version_added: "2.8"
update_cache:
description:
- Whether or not to refresh the master package lists.
- This can be run as part of a package installation or as a separate step.
default: no
type: bool
aliases: [ update-cache ]
update_cache_extra_args:
description:
- Additional option to pass to pacman when enforcing C(update_cache).
default:
version_added: "2.8"
upgrade:
description:
- Whether or not to upgrade the whole system.
Can't be used in combination with C(name).
default: no
type: bool
version_added: "2.0"
upgrade_extra_args:
description:
- Additional option to pass to pacman when enforcing C(upgrade).
default:
version_added: "2.8"
notes:
- When used with a `loop:` each package will be processed individually,
it is much more efficient to pass the list directly to the `name` option.
'''
RETURN = '''
packages:
description: a list of packages that have been changed
returned: when upgrade is set to yes
type: list
sample: [ package, other-package ]
'''
EXAMPLES = '''
- name: Install package foo from repo
pacman:
name: foo
state: present
- name: Install package bar from file
pacman:
name: ~/bar-1.0-1-any.pkg.tar.xz
state: present
- name: Install package foo from repo and bar from file
pacman:
name:
- foo
- ~/bar-1.0-1-any.pkg.tar.xz
state: present
- name: Upgrade package foo
pacman:
name: foo
state: latest
update_cache: yes
- name: Remove packages foo and bar
pacman:
name:
- foo
- bar
state: absent
- name: Recursively remove package baz
pacman:
name: baz
state: absent
extra_args: --recursive
- name: Run the equivalent of "pacman -Sy" as a separate step
pacman:
update_cache: yes
- name: Run the equivalent of "pacman -Su" as a separate step
pacman:
upgrade: yes
- name: Run the equivalent of "pacman -Syu" as a separate step
pacman:
update_cache: yes
upgrade: yes
- name: Run the equivalent of "pacman -Rdd", force remove package baz
pacman:
name: baz
state: absent
force: yes
'''
import re
from ansible.module_utils.basic import AnsibleModule
def get_version(pacman_output):
"""Take pacman -Qi or pacman -Si output and get the Version"""
lines = pacman_output.split('\n')
for line in lines:
if line.startswith('Version '):
return line.split(':')[1].strip()
return None
def get_name(module, pacman_output):
"""Take pacman -Qi or pacman -Si output and get the package name"""
lines = pacman_output.split('\n')
for line in lines:
if line.startswith('Name '):
return line.split(':')[1].strip()
module.fail_json(msg="get_name: fail to retrieve package name from pacman output")
def query_package(module, pacman_path, name, state="present"):
"""Query the package status in both the local system and the repository. Returns a boolean to indicate if the package is installed, a second
boolean to indicate if the package is up-to-date and a third boolean to indicate whether online information were available
"""
if state == "present":
lcmd = "%s --query --info %s" % (pacman_path, name)
lrc, lstdout, lstderr = module.run_command(lcmd, check_rc=False)
if lrc != 0:
# package is not installed locally
return False, False, False
else:
# a non-zero exit code doesn't always mean the package is installed
# for example, if the package name queried is "provided" by another package
installed_name = get_name(module, lstdout)
if installed_name != name:
return False, False, False
# get the version installed locally (if any)
lversion = get_version(lstdout)
rcmd = "%s --sync --info %s" % (pacman_path, name)
rrc, rstdout, rstderr = module.run_command(rcmd, check_rc=False)
# get the version in the repository
rversion = get_version(rstdout)
if rrc == 0:
# Return True to indicate that the package is installed locally, and the result of the version number comparison
# to determine if the package is up-to-date.
return True, (lversion == rversion), False
# package is installed but cannot fetch remote Version. Last True stands for the error
return True, True, True
def update_package_db(module, pacman_path):
if module.params['force']:
module.params["update_cache_extra_args"] += " --refresh --refresh"
cmd = "%s --sync --refresh %s" % (pacman_path, module.params["update_cache_extra_args"])
rc, stdout, stderr = module.run_command(cmd, check_rc=False)
if rc == 0:
return True
else:
module.fail_json(msg="could not update package db")
def upgrade(module, pacman_path):
cmdupgrade = "%s --sync --sysupgrade --quiet --noconfirm %s" % (pacman_path, module.params["upgrade_extra_args"])
cmdneedrefresh = "%s --query --upgrades" % (pacman_path)
rc, stdout, stderr = module.run_command(cmdneedrefresh, check_rc=False)
data = stdout.split('\n')
data.remove('')
packages = []
diff = {
'before': '',
'after': '',
}
if rc == 0:
# Match lines of `pacman -Qu` output of the form:
# (package name) (before version-release) -> (after version-release)
# e.g., "ansible 2.7.1-1 -> 2.7.2-1"
regex = re.compile(r'([\w+\-.@]+) (\S+-\S+) -> (\S+-\S+)')
for p in data:
m = regex.search(p)
packages.append(m.group(1))
if module._diff:
diff['before'] += "%s-%s\n" % (m.group(1), m.group(2))
diff['after'] += "%s-%s\n" % (m.group(1), m.group(3))
if module.check_mode:
module.exit_json(changed=True, msg="%s package(s) would be upgraded" % (len(data)), packages=packages, diff=diff)
rc, stdout, stderr = module.run_command(cmdupgrade, check_rc=False)
if rc == 0:
module.exit_json(changed=True, msg='System upgraded', packages=packages, diff=diff)
else:
module.fail_json(msg="Could not upgrade")
else:
module.exit_json(changed=False, msg='Nothing to upgrade', packages=packages)
def remove_packages(module, pacman_path, packages):
data = []
diff = {
'before': '',
'after': '',
}
if module.params["force"]:
module.params["extra_args"] += " --nodeps --nodeps"
remove_c = 0
# Using a for loop in case of error, we can report the package that failed
for package in packages:
# Query the package first, to see if we even need to remove
installed, updated, unknown = query_package(module, pacman_path, package)
if not installed:
continue
cmd = "%s --remove --noconfirm --noprogressbar %s %s" % (pacman_path, module.params["extra_args"], package)
rc, stdout, stderr = module.run_command(cmd, check_rc=False)
if rc != 0:
module.fail_json(msg="failed to remove %s" % (package))
if module._diff:
d = stdout.split('\n')[2].split(' ')[2:]
for i, pkg in enumerate(d):
d[i] = re.sub('-[0-9].*$', '', d[i].split('/')[-1])
diff['before'] += "%s\n" % pkg
data.append('\n'.join(d))
remove_c += 1
if remove_c > 0:
module.exit_json(changed=True, msg="removed %s package(s)" % remove_c, diff=diff)
module.exit_json(changed=False, msg="package(s) already absent")
def install_packages(module, pacman_path, state, packages, package_files):
install_c = 0
package_err = []
message = ""
data = []
diff = {
'before': '',
'after': '',
}
to_install_repos = []
to_install_files = []
for i, package in enumerate(packages):
# if the package is installed and state == present or state == latest and is up-to-date then skip
installed, updated, latestError = query_package(module, pacman_path, package)
if latestError and state == 'latest':
package_err.append(package)
if installed and (state == 'present' or (state == 'latest' and updated)):
continue
if package_files[i]:
to_install_files.append(package_files[i])
else:
to_install_repos.append(package)
if to_install_repos:
cmd = "%s --sync --noconfirm --noprogressbar --needed %s %s" % (pacman_path, module.params["extra_args"], " ".join(to_install_repos))
rc, stdout, stderr = module.run_command(cmd, check_rc=False)
if rc != 0:
module.fail_json(msg="failed to install %s: %s" % (" ".join(to_install_repos), stderr))
data = stdout.split('\n')[3].split(' ')[2:]
data = [i for i in data if i != '']
for i, pkg in enumerate(data):
data[i] = re.sub('-[0-9].*$', '', data[i].split('/')[-1])
if module._diff:
diff['after'] += "%s\n" % pkg
install_c += len(to_install_repos)
if to_install_files:
cmd = "%s --upgrade --noconfirm --noprogressbar --needed %s %s" % (pacman_path, module.params["extra_args"], " ".join(to_install_files))
rc, stdout, stderr = module.run_command(cmd, check_rc=False)
if rc != 0:
module.fail_json(msg="failed to install %s: %s" % (" ".join(to_install_files), stderr))
data = stdout.split('\n')[3].split(' ')[2:]
data = [i for i in data if i != '']
for i, pkg in enumerate(data):
data[i] = re.sub('-[0-9].*$', '', data[i].split('/')[-1])
if module._diff:
diff['after'] += "%s\n" % pkg
install_c += len(to_install_files)
if state == 'latest' and len(package_err) > 0:
message = "But could not ensure 'latest' state for %s package(s) as remote version could not be fetched." % (package_err)
if install_c > 0:
module.exit_json(changed=True, msg="installed %s package(s). %s" % (install_c, message), diff=diff)
module.exit_json(changed=False, msg="package(s) already installed. %s" % (message), diff=diff)
def check_packages(module, pacman_path, packages, state):
would_be_changed = []
diff = {
'before': '',
'after': '',
'before_header': '',
'after_header': ''
}
for package in packages:
installed, updated, unknown = query_package(module, pacman_path, package)
if ((state in ["present", "latest"] and not installed) or
(state == "absent" and installed) or
(state == "latest" and not updated)):
would_be_changed.append(package)
if would_be_changed:
if state == "absent":
state = "removed"
if module._diff and (state == 'removed'):
diff['before_header'] = 'removed'
diff['before'] = '\n'.join(would_be_changed) + '\n'
elif module._diff and ((state == 'present') or (state == 'latest')):
diff['after_header'] = 'installed'
diff['after'] = '\n'.join(would_be_changed) + '\n'
module.exit_json(changed=True, msg="%s package(s) would be %s" % (
len(would_be_changed), state), diff=diff)
else:
module.exit_json(changed=False, msg="package(s) already %s" % state, diff=diff)
def expand_package_groups(module, pacman_path, pkgs):
expanded = []
for pkg in pkgs:
if pkg: # avoid empty strings
cmd = "%s --sync --groups --quiet %s" % (pacman_path, pkg)
rc, stdout, stderr = module.run_command(cmd, check_rc=False)
if rc == 0:
# A group was found matching the name, so expand it
for name in stdout.split('\n'):
name = name.strip()
if name:
expanded.append(name)
else:
expanded.append(pkg)
return expanded
def main():
module = AnsibleModule(
argument_spec=dict(
name=dict(type='list', elements='str', aliases=['pkg', 'package']),
state=dict(type='str', default='present', choices=['present', 'installed', 'latest', 'absent', 'removed']),
force=dict(type='bool', default=False),
extra_args=dict(type='str', default=''),
upgrade=dict(type='bool', default=False),
upgrade_extra_args=dict(type='str', default=''),
update_cache=dict(type='bool', default=False, aliases=['update-cache']),
update_cache_extra_args=dict(type='str', default=''),
),
required_one_of=[['name', 'update_cache', 'upgrade']],
mutually_exclusive=[['name', 'upgrade']],
supports_check_mode=True,
)
pacman_path = module.get_bin_path('pacman', True)
p = module.params
# normalize the state parameter
if p['state'] in ['present', 'installed']:
p['state'] = 'present'
elif p['state'] in ['absent', 'removed']:
p['state'] = 'absent'
if p["update_cache"] and not module.check_mode:
update_package_db(module, pacman_path)
if not (p['name'] or p['upgrade']):
module.exit_json(changed=True, msg='Updated the package master lists')
if p['update_cache'] and module.check_mode and not (p['name'] or p['upgrade']):
module.exit_json(changed=True, msg='Would have updated the package cache')
if p['upgrade']:
upgrade(module, pacman_path)
if p['name']:
pkgs = expand_package_groups(module, pacman_path, p['name'])
pkg_files = []
for i, pkg in enumerate(pkgs):
if not pkg: # avoid empty strings
continue
elif re.match(r".*\.pkg\.tar(\.(gz|bz2|xz|lrz|lzo|Z))?$", pkg):
# The package given is a filename, extract the raw pkg name from
# it and store the filename
pkg_files.append(pkg)
pkgs[i] = re.sub(r'-[0-9].*$', '', pkgs[i].split('/')[-1])
else:
pkg_files.append(None)
if module.check_mode:
check_packages(module, pacman_path, pkgs, p['state'])
if p['state'] in ['present', 'latest']:
install_packages(module, pacman_path, p['state'], pkgs, pkg_files)
elif p['state'] == 'absent':
remove_packages(module, pacman_path, pkgs)
else:
module.exit_json(changed=False, msg="No package specified to work on.")
if __name__ == "__main__":
main()
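
# A minimal usage sketch (hypothetical playbook task, not part of the module):
#
#   - name: Ensure htop is present with a fresh package cache
#     pacman:
#       name: htop
#       state: present
#       update_cache: yes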
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,952 |
k8s module documentation confusing in examples of reading from files
|
##### SUMMARY
When reading the `k8s` module documentation (https://docs.ansible.com/ansible/latest/modules/k8s_module.html#examples), the given examples were a little confusing when it came to creating objects from files.
There is one example near the top "Create a Service object by reading the definition from a file", which one might assume would mean "read the definition from a file on the controller filesystem" (as that's not very clear and could be confusing based on the rest of the examples).
Then a couple examples later, after a comment about "Passing the object definition from a file", there's a more clear task name "Create a Deployment by reading the definition from a local file" (contrast to the next example of "Read definition file from the Ansible controller file system.").
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
k8s
##### ANSIBLE VERSION
N/A
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### ADDITIONAL INFORMATION
I propose we remove the first 'create service from file' example, since it only serves to confuse the reader. It would tighten up the list of examples, and all the file-related examples would then be under the "Passing the object definition from a file" comment, which makes more sense.
I'll file a PR for this change shortly.
|
https://github.com/ansible/ansible/issues/65952
|
https://github.com/ansible/ansible/pull/65953
|
5c69e02a35fbbb52c1c914b36f490a09c1a92bf6
|
81465455cd1f290b6de9259198030256bb617f47
| 2019-12-18T17:11:35Z |
python
| 2019-12-23T15:18:15Z |
lib/ansible/modules/clustering/k8s/k8s.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2018, Chris Houseknecht <@chouseknecht>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
module: k8s
short_description: Manage Kubernetes (K8s) objects
version_added: "2.6"
author:
- "Chris Houseknecht (@chouseknecht)"
- "Fabian von Feilitzsch (@fabianvf)"
description:
- Use the OpenShift Python client to perform CRUD operations on K8s objects.
- Pass the object definition from a source file or inline. See examples for reading
files and using Jinja templates or vault-encrypted files.
- Access to the full range of K8s APIs.
- Use the M(k8s_info) module to obtain a list of items about an object of type C(kind)
- Authenticate using either a config file, certificates, password or token.
- Supports check mode.
extends_documentation_fragment:
- k8s_state_options
- k8s_name_options
- k8s_resource_options
- k8s_auth_options
notes:
- If your OpenShift Python library is not 0.9.0 or newer and you are trying to
remove an item from an associative array/dictionary, for example a label or
an annotation, you will need to explicitly set the value of the item to be
removed to `null`. Simply deleting the entry in the dictionary will not
remove it from openshift or kubernetes.
options:
merge_type:
description:
- Whether to override the default patch merge approach with a specific type. By default, the strategic
merge will typically be used.
- For example, Custom Resource Definitions typically aren't updatable by the usual strategic merge. You may
want to use C(merge) if you see "strategic merge patch format is not supported"
- See U(https://kubernetes.io/docs/tasks/run-application/update-api-object-kubectl-patch/#use-a-json-merge-patch-to-update-a-deployment)
- Requires openshift >= 0.6.2
- If more than one merge_type is given, the merge_types will be tried in order
- If openshift >= 0.6.2, this defaults to C(['strategic-merge', 'merge']), which is ideal for using the same parameters
on resource kinds that combine Custom Resources and built-in resources. For openshift < 0.6.2, the default
is simply C(strategic-merge).
- mutually exclusive with C(apply)
choices:
- json
- merge
- strategic-merge
type: list
version_added: "2.7"
wait:
description:
- Whether to wait for certain resource kinds to end up in the desired state. By default the module exits once Kubernetes has
received the request
- Implemented for C(state=present) for C(Deployment), C(DaemonSet) and C(Pod), and for C(state=absent) for all resource kinds.
- For resource kinds without an implementation, C(wait) returns immediately unless C(wait_condition) is set.
default: no
type: bool
version_added: "2.8"
wait_sleep:
description:
- Number of seconds to sleep between checks.
default: 5
version_added: "2.9"
wait_timeout:
description:
- How long in seconds to wait for the resource to end up in the desired state. Ignored if C(wait) is not set.
default: 120
version_added: "2.8"
wait_condition:
description:
- Specifies a custom condition on the status to wait for. Ignored if C(wait) is not set or is set to False.
suboptions:
type:
description:
- The type of condition to wait for. For example, the C(Pod) resource will set the C(Ready) condition (among others)
- Required if you are specifying a C(wait_condition). If left empty, the C(wait_condition) field will be ignored.
- The possible types for a condition are specific to each resource type in Kubernetes. See the API documentation of the status field
for a given resource to see possible choices.
status:
description:
- The value of the status field in your desired condition.
- For example, if a C(Deployment) is paused, the C(Progressing) C(type) will have the C(Unknown) status.
choices:
- True
- False
- Unknown
reason:
description:
- The value of the reason field in your desired condition
- For example, if a C(Deployment) is paused, The C(Progressing) C(type) will have the C(DeploymentPaused) reason.
- The possible reasons in a condition are specific to each resource type in Kubernetes. See the API documentation of the status field
for a given resource to see possible choices.
version_added: "2.8"
validate:
description:
- how (if at all) to validate the resource definition against the kubernetes schema.
Requires the kubernetes-validate python module
suboptions:
fail_on_error:
description: whether to fail on validation errors.
required: yes
type: bool
version:
description: version of Kubernetes to validate against. defaults to Kubernetes server version
strict:
description: whether to fail when passing unexpected properties
default: no
type: bool
version_added: "2.8"
append_hash:
description:
- Whether to append a hash to a resource name for immutability purposes
- Applies only to ConfigMap and Secret resources
- The parameter will be silently ignored for other resource kinds
- The full definition of an object is needed to generate the hash - this means that deleting an object created with append_hash
will only work if the same object is passed with state=absent (alternatively, just use state=absent with the name including
the generated hash and append_hash=no)
type: bool
version_added: "2.8"
apply:
description:
- C(apply) compares the desired resource definition with the previously supplied resource definition,
ignoring properties that are automatically generated
- C(apply) works better with Services than 'force=yes'
- mutually exclusive with C(merge_type)
type: bool
version_added: "2.9"
requirements:
- "python >= 2.7"
- "openshift >= 0.6"
- "PyYAML >= 3.11"
'''
EXAMPLES = '''
- name: Create a k8s namespace
k8s:
name: testing
api_version: v1
kind: Namespace
state: present
- name: Create a Service object from an inline definition
k8s:
state: present
definition:
apiVersion: v1
kind: Service
metadata:
name: web
namespace: testing
labels:
app: galaxy
service: web
spec:
selector:
app: galaxy
service: web
ports:
- protocol: TCP
targetPort: 8000
name: port-8000-tcp
port: 8000
- name: Create a Service object by reading the definition from a file
k8s:
state: present
src: /testing/service.yml
- name: Remove an existing Service object
k8s:
state: absent
api_version: v1
kind: Service
namespace: testing
name: web
# Passing the object definition from a file
- name: Create a Deployment by reading the definition from a local file
k8s:
state: present
src: /testing/deployment.yml
- name: >-
Read definition file from the Ansible controller file system.
If the definition file has been encrypted with Ansible Vault it will automatically be decrypted.
k8s:
state: present
definition: "{{ lookup('file', '/testing/deployment.yml') }}"
- name: Read definition file from the Ansible controller file system after Jinja templating
k8s:
state: present
definition: "{{ lookup('template', '/testing/deployment.yml') }}"
- name: fail on validation errors
k8s:
state: present
definition: "{{ lookup('template', '/testing/deployment.yml') }}"
validate:
fail_on_error: yes
- name: warn on validation errors, check for unexpected properties
k8s:
state: present
definition: "{{ lookup('template', '/testing/deployment.yml') }}"
validate:
fail_on_error: no
strict: yes
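
# The two tasks below are illustrative sketches, not part of the original
# examples; the file paths are assumptions.
- name: Create a Deployment and wait until it reports the Available condition
  k8s:
    state: present
    src: /testing/deployment.yml
    wait: yes
    wait_timeout: 180
    wait_condition:
      type: Available
      status: "True"

- name: Patch a Custom Resource using a JSON merge patch
  k8s:
    state: present
    merge_type: merge
    definition: "{{ lookup('file', '/testing/custom-resource.yml') }}"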
'''
RETURN = '''
result:
description:
- The created, patched, or otherwise present object. Will be empty in the case of a deletion.
returned: success
type: complex
contains:
api_version:
description: The versioned schema of this representation of an object.
returned: success
type: str
kind:
description: Represents the REST resource this object represents.
returned: success
type: str
metadata:
description: Standard object metadata. Includes name, namespace, annotations, labels, etc.
returned: success
type: complex
spec:
description: Specific attributes of the object. Will vary based on the I(api_version) and I(kind).
returned: success
type: complex
status:
description: Current status details for the object.
returned: success
type: complex
items:
description: Returned only when multiple yaml documents are passed to src or resource_definition
returned: when resource_definition or src contains list of objects
type: list
duration:
description: elapsed time of task in seconds
returned: when C(wait) is true
type: int
sample: 48
'''
from ansible.module_utils.k8s.raw import KubernetesRawModule
def main():
KubernetesRawModule().execute_module()
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,772 |
VMware: NoneType error while fetching details using vmware_vmkernel_facts
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Getting a FATAL error when the VM cluster contains ESXi hosts in a non-responding state.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_vmkernel_facts
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.8.3
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
VCS 6.5
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
simple playbook to query the vmkernel facts
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Gather vmkernel facts from cluster
vmware_vmkernel_facts:
cluster_name: '{{vm_cluster}}'
validate_certs: no
delegate_to: localhost
register: vmk_out
ignore_errors: yes
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
standard output
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
```
{
"exception": "Traceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-tmp-1569306301.15-75561088679807/AnsiballZ_vmware_vmkernel_facts.py\", line 114, in <module>\n _ansiballz_main()\n File \"/root/.ansible/tmp/ansible-tmp-1569306301.15-75561088679807/AnsiballZ_vmware_vmkernel_facts.py\", line 106, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/root/.ansible/tmp/ansible-tmp-1569306301.15-75561088679807/AnsiballZ_vmware_vmkernel_facts.py\", line 49, in invoke_module\n imp.load_module('__main__', mod, module, MOD_DESC)\n File \"/tmp/ansible_vmware_vmkernel_facts_payload_XR_16X/__main__.py\", line 201, in <module>\n File \"/tmp/ansible_vmware_vmkernel_facts_payload_XR_16X/__main__.py\", line 196, in main\n File \"/tmp/ansible_vmware_vmkernel_facts_payload_XR_16X/__main__.py\", line 109, in __init__\n File \"/tmp/ansible_vmware_vmkernel_facts_payload_XR_16X/__main__.py\", line 119, in get_all_vmks_by_service_type\n File \"/tmp/ansible_vmware_vmkernel_facts_payload_XR_16X/__main__.py\", line 146, in query_service_type_for_vmks\nAttributeError: 'NoneType' object has no attribute 'selectedVnic'\n",
"_ansible_no_log": false,
"_ansible_delegated_vars": {
"ansible_host": "localhost"
},
"module_stderr": "Traceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-tmp-1569306301.15-75561088679807/AnsiballZ_vmware_vmkernel_facts.py\", line 114, in <module>\n _ansiballz_main()\n File \"/root/.ansible/tmp/ansible-tmp-1569306301.15-75561088679807/AnsiballZ_vmware_vmkernel_facts.py\", line 106, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/root/.ansible/tmp/ansible-tmp-1569306301.15-75561088679807/AnsiballZ_vmware_vmkernel_facts.py\", line 49, in invoke_module\n imp.load_module('__main__', mod, module, MOD_DESC)\n File \"/tmp/ansible_vmware_vmkernel_facts_payload_XR_16X/__main__.py\", line 201, in <module>\n File \"/tmp/ansible_vmware_vmkernel_facts_payload_XR_16X/__main__.py\", line 196, in main\n File \"/tmp/ansible_vmware_vmkernel_facts_payload_XR_16X/__main__.py\", line 109, in __init__\n File \"/tmp/ansible_vmware_vmkernel_facts_payload_XR_16X/__main__.py\", line 119, in get_all_vmks_by_service_type\n File \"/tmp/ansible_vmware_vmkernel_facts_payload_XR_16X/__main__.py\", line 146, in query_service_type_for_vmks\nAttributeError: 'NoneType' object has no attribute 'selectedVnic'\n",
"changed": false,
"module_stdout": "",
"rc": 1,
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error"
}
```
|
https://github.com/ansible/ansible/issues/62772
|
https://github.com/ansible/ansible/pull/65834
|
365820f871916663b0244d9de03101b28abc6a44
|
34acabd70a7915049b544c9a798c1c5f1aa53600
| 2019-09-24T07:00:48Z |
python
| 2019-12-24T04:08:38Z |
changelogs/fragments/62772-vmware_vmkernel_info-fix.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,772 |
VMware: NoneType error while fetching details using vmware_vmkernel_facts
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Getting a FATAL error when the VM cluster contains ESXi hosts in a non-responding state.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_vmkernel_facts
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.8.3
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
VCS 6.5
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
simple playbook to query the vmkernel facts
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Gather vmkernel facts from cluster
vmware_vmkernel_facts:
cluster_name: '{{vm_cluster}}'
validate_certs: no
delegate_to: localhost
register: vmk_out
ignore_errors: yes
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
standard output
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
```
{
"exception": "Traceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-tmp-1569306301.15-75561088679807/AnsiballZ_vmware_vmkernel_facts.py\", line 114, in <module>\n _ansiballz_main()\n File \"/root/.ansible/tmp/ansible-tmp-1569306301.15-75561088679807/AnsiballZ_vmware_vmkernel_facts.py\", line 106, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/root/.ansible/tmp/ansible-tmp-1569306301.15-75561088679807/AnsiballZ_vmware_vmkernel_facts.py\", line 49, in invoke_module\n imp.load_module('__main__', mod, module, MOD_DESC)\n File \"/tmp/ansible_vmware_vmkernel_facts_payload_XR_16X/__main__.py\", line 201, in <module>\n File \"/tmp/ansible_vmware_vmkernel_facts_payload_XR_16X/__main__.py\", line 196, in main\n File \"/tmp/ansible_vmware_vmkernel_facts_payload_XR_16X/__main__.py\", line 109, in __init__\n File \"/tmp/ansible_vmware_vmkernel_facts_payload_XR_16X/__main__.py\", line 119, in get_all_vmks_by_service_type\n File \"/tmp/ansible_vmware_vmkernel_facts_payload_XR_16X/__main__.py\", line 146, in query_service_type_for_vmks\nAttributeError: 'NoneType' object has no attribute 'selectedVnic'\n",
"_ansible_no_log": false,
"_ansible_delegated_vars": {
"ansible_host": "localhost"
},
"module_stderr": "Traceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-tmp-1569306301.15-75561088679807/AnsiballZ_vmware_vmkernel_facts.py\", line 114, in <module>\n _ansiballz_main()\n File \"/root/.ansible/tmp/ansible-tmp-1569306301.15-75561088679807/AnsiballZ_vmware_vmkernel_facts.py\", line 106, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/root/.ansible/tmp/ansible-tmp-1569306301.15-75561088679807/AnsiballZ_vmware_vmkernel_facts.py\", line 49, in invoke_module\n imp.load_module('__main__', mod, module, MOD_DESC)\n File \"/tmp/ansible_vmware_vmkernel_facts_payload_XR_16X/__main__.py\", line 201, in <module>\n File \"/tmp/ansible_vmware_vmkernel_facts_payload_XR_16X/__main__.py\", line 196, in main\n File \"/tmp/ansible_vmware_vmkernel_facts_payload_XR_16X/__main__.py\", line 109, in __init__\n File \"/tmp/ansible_vmware_vmkernel_facts_payload_XR_16X/__main__.py\", line 119, in get_all_vmks_by_service_type\n File \"/tmp/ansible_vmware_vmkernel_facts_payload_XR_16X/__main__.py\", line 146, in query_service_type_for_vmks\nAttributeError: 'NoneType' object has no attribute 'selectedVnic'\n",
"changed": false,
"module_stdout": "",
"rc": 1,
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error"
}
```
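The traceback indicates that `QueryNetConfig()` returned `None` for a host in a
not-responding state, so the subsequent `query.selectedVnic` dereference fails.
A minimal defensive sketch (names taken from the traceback, not from the shipped
module):

```python
query = host_system.configManager.virtualNicManager.QueryNetConfig(service_type)
if query is None or not query.selectedVnic:
    return vmks_list  # nothing to report for this host
```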
|
https://github.com/ansible/ansible/issues/62772
|
https://github.com/ansible/ansible/pull/65834
|
365820f871916663b0244d9de03101b28abc6a44
|
34acabd70a7915049b544c9a798c1c5f1aa53600
| 2019-09-24T07:00:48Z |
python
| 2019-12-24T04:08:38Z |
lib/ansible/modules/cloud/vmware/vmware_vmkernel_info.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2018, Abhijeet Kasurde <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = r'''
---
module: vmware_vmkernel_info
short_description: Gathers VMKernel info about an ESXi host
description:
- This module can be used to gather VMKernel information about an ESXi host from given ESXi hostname or cluster name.
version_added: '2.9'
author:
- Abhijeet Kasurde (@Akasurde)
notes:
- Tested on vSphere 6.5
requirements:
- python >= 2.6
- PyVmomi
options:
cluster_name:
description:
- Name of the cluster.
- VMKernel information about each ESXi server will be returned for the given cluster.
- If C(esxi_hostname) is not given, this parameter is required.
type: str
esxi_hostname:
description:
- ESXi hostname.
- VMKernel information about this ESXi server will be returned.
- If C(cluster_name) is not given, this parameter is required.
type: str
extends_documentation_fragment: vmware.documentation
'''
EXAMPLES = r'''
- name: Gather VMKernel info about all ESXi Host in given Cluster
vmware_vmkernel_info:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
cluster_name: cluster_name
delegate_to: localhost
register: cluster_host_vmks
- name: Gather VMKernel info about ESXi Host
vmware_vmkernel_info:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
esxi_hostname: '{{ esxi_hostname }}'
delegate_to: localhost
register: host_vmks
'''
RETURN = r'''
host_vmk_info:
description: metadata about VMKernel present on given host system
returned: success
type: dict
sample:
{
"10.76.33.208": [
{
"device": "vmk0",
"dhcp": true,
"enable_ft": false,
"enable_management": true,
"enable_vmotion": false,
"enable_vsan": false,
"ipv4_address": "10.76.33.28",
"ipv4_subnet_mask": "255.255.255.0",
"key": "key-vim.host.VirtualNic-vmk0",
"mac": "52:54:00:12:50:ce",
"mtu": 1500,
"portgroup": "Management Network",
"stack": "defaultTcpipStack"
},
]
}
'''
try:
from pyVmomi import vim, vmodl
except ImportError:
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.vmware import vmware_argument_spec, PyVmomi
from ansible.module_utils._text import to_native
class VmkernelInfoManager(PyVmomi):
def __init__(self, module):
super(VmkernelInfoManager, self).__init__(module)
cluster_name = self.params.get('cluster_name', None)
esxi_host_name = self.params.get('esxi_hostname', None)
self.hosts = self.get_all_host_objs(cluster_name=cluster_name, esxi_host_name=esxi_host_name)
self.service_type_vmks = dict()
self.get_all_vmks_by_service_type()
def get_all_vmks_by_service_type(self):
"""
Function to return information about service types and VMKernel
"""
for host in self.hosts:
self.service_type_vmks[host.name] = dict(vmotion=[], vsan=[], management=[], faultToleranceLogging=[])
for service_type in self.service_type_vmks[host.name].keys():
vmks_list = self.query_service_type_for_vmks(host, service_type)
self.service_type_vmks[host.name][service_type] = vmks_list
def query_service_type_for_vmks(self, host_system, service_type):
"""
Function to return list of VMKernels
Args:
host_system: Host system managed object
service_type: Name of service type
Returns: List of VMKernel which belongs to that service type
"""
vmks_list = []
query = None
try:
query = host_system.configManager.virtualNicManager.QueryNetConfig(service_type)
except vim.fault.HostConfigFault as config_fault:
self.module.fail_json(msg="Failed to get all VMKs for service type %s due to"
" host config fault : %s" % (service_type, to_native(config_fault.msg)))
except vmodl.fault.InvalidArgument as invalid_argument:
self.module.fail_json(msg="Failed to get all VMKs for service type %s due to"
" invalid arguments : %s" % (service_type, to_native(invalid_argument.msg)))
except Exception as e:
self.module.fail_json(msg="Failed to get all VMKs for service type %s due to"
"%s" % (service_type, to_native(e)))
        # QueryNetConfig() may return None (e.g. for hosts in a not-responding state)
        if query is None or not query.selectedVnic:
return vmks_list
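        # selectedVnic holds vnic *keys* (e.g. "key-vim.host.VirtualNic-vmk0");
        # they are mapped back to device names such as "vmk0" below.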
selected_vnics = [vnic for vnic in query.selectedVnic]
vnics_with_service_type = [vnic.device for vnic in query.candidateVnic if vnic.key in selected_vnics]
return vnics_with_service_type
def gather_host_vmk_info(self):
hosts_info = {}
for host in self.hosts:
host_vmk_info = []
host_network_system = host.config.network
if host_network_system:
vmks_config = host.config.network.vnic
for vmk in vmks_config:
host_vmk_info.append(dict(
device=vmk.device,
key=vmk.key,
portgroup=vmk.portgroup,
ipv4_address=vmk.spec.ip.ipAddress,
ipv4_subnet_mask=vmk.spec.ip.subnetMask,
dhcp=vmk.spec.ip.dhcp,
mac=vmk.spec.mac,
mtu=vmk.spec.mtu,
stack=vmk.spec.netStackInstanceKey,
enable_vsan=vmk.device in self.service_type_vmks[host.name]['vsan'],
enable_vmotion=vmk.device in self.service_type_vmks[host.name]['vmotion'],
enable_management=vmk.device in self.service_type_vmks[host.name]['management'],
enable_ft=vmk.device in self.service_type_vmks[host.name]['faultToleranceLogging'],
)
)
hosts_info[host.name] = host_vmk_info
return hosts_info
def main():
argument_spec = vmware_argument_spec()
argument_spec.update(
cluster_name=dict(type='str', required=False),
esxi_hostname=dict(type='str', required=False),
)
module = AnsibleModule(
argument_spec=argument_spec,
required_one_of=[
['cluster_name', 'esxi_hostname'],
],
supports_check_mode=True
)
vmware_vmk_config = VmkernelInfoManager(module)
module.exit_json(changed=False, host_vmk_info=vmware_vmk_config.gather_host_vmk_info())
if __name__ == "__main__":
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,404 |
azure_rm_virtualmachine does not preserve availability zone info with generalized option
|
##### SUMMARY
When you try to generalize a VM that has availability zones defined, the zone info is not preserved and must be passed in explicitly.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
azure_rm_virtualmachine
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.0
config file = /etc/ansible/ansible.cfg
configured module search path = [ u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.12 (default, Oct 8 2019, 14:14:10) [GCC 5.4.0 20160609]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
When you try to generalize a VM that has availability zones defined, the zone info is not preserved and must be passed in explicitly.
Furthermore, zone information is not returned by azure_rm_virtualmachine_info, so it has to be gathered with azure_rm_resource_info in order to generalize the VM.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Generalize VM
azure_rm_virtualmachine:
resource_group: "{{ resourcegroup }}"
name: "{{ vm }}"
os_type: "{{ os_type | default(omit) }}"
generalized: yes
state: present
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Generalized VM
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
"msg": "You can't change the Availability Zone of a virtual machine (have: [<zone>], want: None)"
```
##### WORKAROUND
In order to be able to generalize the VM the following code should be used.
```yaml
- name: Get VM Availability Zone
azure_rm_resource_info:
resource_group: "{{ resourcegroup }}"
resource_name: "{{ vm }}"
resource_type: VirtualMachines
provider: Compute
register: result
- name: Generalize VM
azure_rm_virtualmachine:
resource_group: "{{ resourcegroup }}"
name: "{{ vm }}"
os_type: "{{ os_type | default(omit) }}"
generalized: yes
state: present
zones: "{{ result.response[0].zones }}"
```
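
A slightly more defensive variant of the workaround's last line (hypothetical;
`default(omit)` keeps the task usable for VMs that have no zones set):

```yaml
    zones: "{{ result.response[0].zones | default(omit) }}"
```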
|
https://github.com/ansible/ansible/issues/64404
|
https://github.com/ansible/ansible/pull/66032
|
cb2a45d28686c1db6f78a8e93c402342c3ec8428
|
abad4fbf5e01d1d9bb443a4612dd8677c42298c7
| 2019-11-04T18:13:18Z |
python
| 2019-12-24T08:10:47Z |
lib/ansible/modules/cloud/azure/azure_rm_virtualmachine.py
|
#!/usr/bin/python
#
# Copyright (c) 2016 Matt Davis, <[email protected]>
# Chris Houseknecht, <[email protected]>
# Copyright (c) 2018 James E. King, III (@jeking3) <[email protected]>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: azure_rm_virtualmachine
version_added: "2.1"
short_description: Manage Azure virtual machines
description:
- Manage and configure virtual machines (VMs) and associated resources on Azure.
- Requires a resource group containing at least one virtual network with at least one subnet.
- Supports images from the Azure Marketplace, which can be discovered with M(azure_rm_virtualmachineimage_facts).
- Supports custom images since Ansible 2.5.
- To use I(custom_data) on a Linux image, the image must have cloud-init enabled. If cloud-init is not enabled, I(custom_data) is ignored.
options:
resource_group:
description:
- Name of the resource group containing the VM.
required: true
name:
description:
- Name of the VM.
required: true
custom_data:
description:
- Data made available to the VM and used by C(cloud-init).
- Only used on Linux images with C(cloud-init) enabled.
- Consult U(https://docs.microsoft.com/en-us/azure/virtual-machines/linux/using-cloud-init#cloud-init-overview) for cloud-init ready images.
- To enable cloud-init on a Linux image, follow U(https://docs.microsoft.com/en-us/azure/virtual-machines/linux/cloudinit-prepare-custom-image).
version_added: "2.5"
state:
description:
- State of the VM.
- Set to C(present) to create a VM with the configuration specified by other options, or to update the configuration of an existing VM.
- Set to C(absent) to remove a VM.
- Does not affect power state. Use I(started)/I(allocated)/I(restarted) parameters to change the power state of a VM.
default: present
choices:
- absent
- present
started:
description:
- Whether the VM is started or stopped.
            - Set to C(true) with I(state=present) to start the VM.
- Set to C(false) to stop the VM.
default: true
type: bool
allocated:
description:
- Whether the VM is allocated or deallocated, only useful with I(state=present).
        default: true
type: bool
generalized:
description:
- Whether the VM is generalized or not.
- Set to C(true) with I(state=present) to generalize the VM.
- Generalizing a VM is irreversible.
type: bool
version_added: "2.8"
restarted:
description:
- Set to C(true) with I(state=present) to restart a running VM.
type: bool
location:
description:
- Valid Azure location for the VM. Defaults to location of the resource group.
short_hostname:
description:
- Name assigned internally to the host. On a Linux VM this is the name returned by the C(hostname) command.
- When creating a VM, short_hostname defaults to I(name).
vm_size:
description:
- A valid Azure VM size value. For example, C(Standard_D4).
- Choices vary depending on the subscription and location. Check your subscription for available choices.
- Required when creating a VM.
admin_username:
description:
- Admin username used to access the VM after it is created.
- Required when creating a VM.
admin_password:
description:
- Password for the admin username.
- Not required if the I(os_type=Linux) and SSH password authentication is disabled by setting I(ssh_password_enabled=false).
ssh_password_enabled:
description:
- Whether to enable or disable SSH passwords.
- When I(os_type=Linux), set to C(false) to disable SSH password authentication and require use of SSH keys.
default: true
type: bool
ssh_public_keys:
description:
- For I(os_type=Linux) provide a list of SSH keys.
- Accepts a list of dicts where each dictionary contains two keys, I(path) and I(key_data).
- Set I(path) to the default location of the authorized_keys files. For example, I(path=/home/<admin username>/.ssh/authorized_keys).
- Set I(key_data) to the actual value of the public key.
image:
description:
- The image used to build the VM.
- For custom images, the name of the image. To narrow the search to a specific resource group, a dict with the keys I(name) and I(resource_group).
- For Marketplace images, a dict with the keys I(publisher), I(offer), I(sku), and I(version).
- Set I(version=latest) to get the most recent version of a given image.
required: true
availability_set:
description:
- Name or ID of an existing availability set to add the VM to. The I(availability_set) should be in the same resource group as VM.
version_added: "2.5"
storage_account_name:
description:
- Name of a storage account that supports creation of VHD blobs.
- If not specified for a new VM, a new storage account named <vm name>01 will be created using storage type C(Standard_LRS).
aliases:
- storage_account
storage_container_name:
description:
- Name of the container to use within the storage account to store VHD blobs.
- If not specified, a default container will be created.
default: vhds
aliases:
- storage_container
storage_blob_name:
description:
- Name of the storage blob used to hold the OS disk image of the VM.
- Must end with '.vhd'.
- If not specified, defaults to the VM name + '.vhd'.
aliases:
- storage_blob
managed_disk_type:
description:
- Managed OS disk type.
- Create OS disk with managed disk if defined.
- If not defined, the OS disk will be created with virtual hard disk (VHD).
choices:
- Standard_LRS
- StandardSSD_LRS
- Premium_LRS
version_added: "2.4"
os_disk_name:
description:
- OS disk name.
version_added: "2.8"
os_disk_caching:
description:
- Type of OS disk caching.
choices:
- ReadOnly
- ReadWrite
aliases:
- disk_caching
os_disk_size_gb:
description:
- Type of OS disk size in GB.
version_added: "2.7"
os_type:
description:
- Base type of operating system.
choices:
- Windows
- Linux
default: Linux
data_disks:
description:
- Describes list of data disks.
            - Use M(azure_rm_manageddisk) to manage the specific disk.
version_added: "2.4"
suboptions:
lun:
description:
- The logical unit number for data disk.
- This value is used to identify data disks within the VM and therefore must be unique for each data disk attached to a VM.
required: true
version_added: "2.4"
disk_size_gb:
description:
- The initial disk size in GB for blank data disks.
- This value cannot be larger than C(1023) GB.
- Size can be changed only when the virtual machine is deallocated.
                    - Not used when I(managed_disk_id) is defined.
version_added: "2.4"
managed_disk_type:
description:
- Managed data disk type.
- Only used when OS disk created with managed disk.
choices:
- Standard_LRS
- StandardSSD_LRS
- Premium_LRS
version_added: "2.4"
storage_account_name:
description:
- Name of an existing storage account that supports creation of VHD blobs.
                    - If not specified for a new VM, a new storage account whose name starts with I(name) will be created using storage type C(Standard_LRS).
- Only used when OS disk created with virtual hard disk (VHD).
- Used when I(managed_disk_type) not defined.
- Cannot be updated unless I(lun) updated.
version_added: "2.4"
storage_container_name:
description:
- Name of the container to use within the storage account to store VHD blobs.
                    - If no name is specified a default container named 'vhds' will be created.
- Only used when OS disk created with virtual hard disk (VHD).
- Used when I(managed_disk_type) not defined.
- Cannot be updated unless I(lun) updated.
default: vhds
version_added: "2.4"
storage_blob_name:
description:
- Name of the storage blob used to hold the OS disk image of the VM.
- Must end with '.vhd'.
- Default to the I(name) + timestamp + I(lun) + '.vhd'.
- Only used when OS disk created with virtual hard disk (VHD).
- Used when I(managed_disk_type) not defined.
- Cannot be updated unless I(lun) updated.
version_added: "2.4"
caching:
description:
- Type of data disk caching.
choices:
- ReadOnly
- ReadWrite
default: ReadOnly
version_added: "2.4"
public_ip_allocation_method:
description:
- Allocation method for the public IP of the VM.
- Used only if a network interface is not specified.
- When set to C(Dynamic), the public IP address may change any time the VM is rebooted or power cycled.
- The C(Disabled) choice was added in Ansible 2.6.
choices:
- Dynamic
- Static
- Disabled
default: Static
aliases:
- public_ip_allocation
open_ports:
description:
- List of ports to open in the security group for the VM, when a security group and network interface are created with a VM.
- For Linux hosts, defaults to allowing inbound TCP connections to port 22.
- For Windows hosts, defaults to opening ports 3389 and 5986.
network_interface_names:
description:
- Network interface names to add to the VM.
- Can be a string of name or resource ID of the network interface.
- Can be a dict containing I(resource_group) and I(name) of the network interface.
- If a network interface name is not provided when the VM is created, a default network interface will be created.
- To create a new network interface, at least one Virtual Network with one Subnet must exist.
type: list
aliases:
- network_interfaces
virtual_network_resource_group:
description:
- The resource group to use when creating a VM with another resource group's virtual network.
version_added: "2.4"
virtual_network_name:
description:
- The virtual network to use when creating a VM.
- If not specified, a new network interface will be created and assigned to the first virtual network found in the resource group.
- Use with I(virtual_network_resource_group) to place the virtual network in another resource group.
aliases:
- virtual_network
subnet_name:
description:
- Subnet for the VM.
- Defaults to the first subnet found in the virtual network or the subnet of the I(network_interface_name), if provided.
- If the subnet is in another resource group, specify the resource group with I(virtual_network_resource_group).
aliases:
- subnet
remove_on_absent:
description:
- Associated resources to remove when removing a VM using I(state=absent).
- To remove all resources related to the VM being removed, including auto-created resources, set to C(all).
- To remove only resources that were automatically created while provisioning the VM being removed, set to C(all_autocreated).
- To remove only specific resources, set to C(network_interfaces), C(virtual_storage) or C(public_ips).
- Any other input will be ignored.
type: list
default: ['all']
plan:
description:
- Third-party billing plan for the VM.
version_added: "2.5"
type: dict
suboptions:
name:
description:
- Billing plan name.
required: true
product:
description:
- Product name.
required: true
publisher:
description:
- Publisher offering the plan.
required: true
promotion_code:
description:
- Optional promotion code.
accept_terms:
description:
- Accept terms for Marketplace images that require it.
- Only Azure service admin/account admin users can purchase images from the Marketplace.
- Only valid when a I(plan) is specified.
type: bool
default: false
version_added: "2.7"
zones:
description:
- A list of Availability Zones for your VM.
type: list
version_added: "2.8"
license_type:
description:
- On-premise license for the image or disk.
- Only used for images that contain the Windows Server operating system.
- To remove all license type settings, set to the string C(None).
version_added: "2.8"
choices:
- Windows_Server
- Windows_Client
vm_identity:
description:
- Identity for the VM.
version_added: "2.8"
choices:
- SystemAssigned
winrm:
description:
- List of Windows Remote Management configurations of the VM.
version_added: "2.8"
suboptions:
protocol:
description:
- The protocol of the winrm listener.
required: true
choices:
- http
- https
source_vault:
description:
- The relative URL of the Key Vault containing the certificate.
certificate_url:
description:
- The URL of a certificate that has been uploaded to Key Vault as a secret.
certificate_store:
description:
- The certificate store on the VM to which the certificate should be added.
- The specified certificate store is implicitly in the LocalMachine account.
boot_diagnostics:
description:
- Manage boot diagnostics settings for a VM.
- Boot diagnostics includes a serial console and remote console screenshots.
version_added: '2.9'
suboptions:
enabled:
description:
- Flag indicating if boot diagnostics are enabled.
required: true
type: bool
storage_account:
description:
- The name of an existing storage account to use for boot diagnostics.
- If not specified, uses I(storage_account_name) defined one level up.
- If storage account is not specified anywhere, and C(enabled) is C(true), a default storage account is created for boot diagnostics data.
required: false
extends_documentation_fragment:
- azure
- azure_tags
author:
- Chris Houseknecht (@chouseknecht)
- Matt Davis (@nitzmahone)
- Christopher Perrin (@cperrin88)
- James E. King III (@jeking3)
'''
EXAMPLES = '''
- name: Create VM with defaults
azure_rm_virtualmachine:
resource_group: myResourceGroup
name: testvm10
admin_username: chouseknecht
admin_password: <your password here>
image:
offer: CentOS
publisher: OpenLogic
sku: '7.1'
version: latest
- name: Create an availability set for managed disk vm
azure_rm_availabilityset:
name: avs-managed-disk
resource_group: myResourceGroup
platform_update_domain_count: 5
platform_fault_domain_count: 2
sku: Aligned
- name: Create a VM with managed disk
azure_rm_virtualmachine:
resource_group: myResourceGroup
name: vm-managed-disk
admin_username: adminUser
availability_set: avs-managed-disk
managed_disk_type: Standard_LRS
image:
offer: CoreOS
publisher: CoreOS
sku: Stable
version: latest
vm_size: Standard_D4
- name: Create a VM with existing storage account and NIC
azure_rm_virtualmachine:
resource_group: myResourceGroup
name: testvm002
vm_size: Standard_D4
storage_account: testaccount001
admin_username: adminUser
ssh_public_keys:
- path: /home/adminUser/.ssh/authorized_keys
        key_data: < insert your ssh public key here... >
network_interfaces: testvm001
image:
offer: CentOS
publisher: OpenLogic
sku: '7.1'
version: latest
- name: Create a VM with OS and multiple data managed disks
azure_rm_virtualmachine:
resource_group: myResourceGroup
name: testvm001
vm_size: Standard_D4
managed_disk_type: Standard_LRS
admin_username: adminUser
ssh_public_keys:
- path: /home/adminUser/.ssh/authorized_keys
        key_data: < insert your ssh public key here... >
image:
offer: CoreOS
publisher: CoreOS
sku: Stable
version: latest
data_disks:
- lun: 0
managed_disk_id: "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxxxxx/resourceGroups/myResourceGroup/providers/Microsoft.Compute/disks/myDisk"
- lun: 1
disk_size_gb: 128
managed_disk_type: Premium_LRS
- name: Create a VM with OS and multiple data storage accounts
azure_rm_virtualmachine:
resource_group: myResourceGroup
name: testvm001
vm_size: Standard_DS1_v2
admin_username: adminUser
ssh_password_enabled: false
ssh_public_keys:
- path: /home/adminUser/.ssh/authorized_keys
        key_data: < insert your ssh public key here... >
network_interfaces: testvm001
storage_container: osdisk
storage_blob: osdisk.vhd
boot_diagnostics:
enabled: yes
image:
offer: CoreOS
publisher: CoreOS
sku: Stable
version: latest
data_disks:
- lun: 0
disk_size_gb: 64
storage_container_name: datadisk1
storage_blob_name: datadisk1.vhd
- lun: 1
disk_size_gb: 128
storage_container_name: datadisk2
storage_blob_name: datadisk2.vhd
- name: Create a VM with a custom image
azure_rm_virtualmachine:
resource_group: myResourceGroup
name: testvm001
vm_size: Standard_DS1_v2
admin_username: adminUser
admin_password: password01
image: customimage001
- name: Create a VM with a custom image from a particular resource group
azure_rm_virtualmachine:
resource_group: myResourceGroup
name: testvm001
vm_size: Standard_DS1_v2
admin_username: adminUser
admin_password: password01
image:
name: customimage001
resource_group: myResourceGroup
- name: Create a VM with an image id
azure_rm_virtualmachine:
resource_group: myResourceGroup
name: testvm001
vm_size: Standard_DS1_v2
admin_username: adminUser
admin_password: password01
image:
id: '{{image_id}}'
- name: Create VM with specified OS disk size
azure_rm_virtualmachine:
resource_group: myResourceGroup
name: big-os-disk
admin_username: chouseknecht
admin_password: <your password here>
os_disk_size_gb: 512
image:
offer: CentOS
publisher: OpenLogic
sku: '7.1'
version: latest
- name: Create VM with OS and Plan, accepting the terms
azure_rm_virtualmachine:
resource_group: myResourceGroup
name: f5-nva
admin_username: chouseknecht
admin_password: <your password here>
image:
publisher: f5-networks
offer: f5-big-ip-best
sku: f5-bigip-virtual-edition-200m-best-hourly
version: latest
plan:
name: f5-bigip-virtual-edition-200m-best-hourly
product: f5-big-ip-best
publisher: f5-networks
- name: Power Off
azure_rm_virtualmachine:
resource_group: myResourceGroup
name: testvm002
started: no
- name: Deallocate
azure_rm_virtualmachine:
resource_group: myResourceGroup
name: testvm002
allocated: no
- name: Power On
azure_rm_virtualmachine:
resource_group: myResourceGroup
name: testvm002
- name: Restart
azure_rm_virtualmachine:
resource_group: myResourceGroup
name: testvm002
restarted: yes
- name: Create a VM with an Availability Zone
azure_rm_virtualmachine:
resource_group: myResourceGroup
name: testvm001
vm_size: Standard_DS1_v2
admin_username: adminUser
admin_password: password01
image: customimage001
zones: [1]
- name: Remove a VM and all resources that were autocreated
azure_rm_virtualmachine:
resource_group: myResourceGroup
name: testvm002
remove_on_absent: all_autocreated
state: absent
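
# An illustrative sketch of the winrm option, not from the original examples;
# the vault path and certificate URL are placeholders.
- name: Create a Windows VM with an HTTPS WinRM listener
  azure_rm_virtualmachine:
    resource_group: myResourceGroup
    name: winvm001
    vm_size: Standard_DS1_v2
    admin_username: azureuser
    admin_password: <your password here>
    os_type: Windows
    image:
      offer: WindowsServer
      publisher: MicrosoftWindowsServer
      sku: 2016-Datacenter
      version: latest
    winrm:
      - protocol: https
        source_vault: /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myResourceGroup/providers/Microsoft.KeyVault/vaults/myVault
        certificate_url: https://myVault.vault.azure.net/secrets/winrmCert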
'''
RETURN = '''
powerstate:
description:
- Indicates if the state is C(running), C(stopped), C(deallocated), C(generalized).
returned: always
type: str
sample: running
deleted_vhd_uris:
description:
- List of deleted Virtual Hard Disk URIs.
returned: 'on delete'
type: list
sample: ["https://testvm104519.blob.core.windows.net/vhds/testvm10.vhd"]
deleted_network_interfaces:
description:
- List of deleted NICs.
returned: 'on delete'
type: list
sample: ["testvm1001"]
deleted_public_ips:
description:
- List of deleted public IP address names.
returned: 'on delete'
type: list
sample: ["testvm1001"]
azure_vm:
description:
- Facts about the current state of the object. Note that facts are not part of the registered output but available directly.
returned: always
type: dict
sample: {
"properties": {
"availabilitySet": {
"id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroup/myResourceGroup/providers/Microsoft.Compute/availabilitySets/MYAVAILABILITYSET"
},
"hardwareProfile": {
"vmSize": "Standard_D1"
},
"instanceView": {
"disks": [
{
"name": "testvm10.vhd",
"statuses": [
{
"code": "ProvisioningState/succeeded",
"displayStatus": "Provisioning succeeded",
"level": "Info",
"time": "2016-03-30T07:11:16.187272Z"
}
]
}
],
"statuses": [
{
"code": "ProvisioningState/succeeded",
"displayStatus": "Provisioning succeeded",
"level": "Info",
"time": "2016-03-30T20:33:38.946916Z"
},
{
"code": "PowerState/running",
"displayStatus": "VM running",
"level": "Info"
}
],
"vmAgent": {
"extensionHandlers": [],
"statuses": [
{
"code": "ProvisioningState/succeeded",
"displayStatus": "Ready",
"level": "Info",
"message": "GuestAgent is running and accepting new configurations.",
"time": "2016-03-30T20:31:16.000Z"
}
],
"vmAgentVersion": "WALinuxAgent-2.0.16"
}
},
"networkProfile": {
"networkInterfaces": [
{
"id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroup/myResourceGroup/providers/Microsoft.Network/networkInterfaces/testvm10_NIC01",
"name": "testvm10_NIC01",
"properties": {
"dnsSettings": {
"appliedDnsServers": [],
"dnsServers": []
},
"enableIPForwarding": false,
"ipConfigurations": [
{
"etag": 'W/"041c8c2a-d5dd-4cd7-8465-9125cfbe2cf8"',
"id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroup/myResourceGroup/providers/Microsoft.Network/networkInterfaces/testvm10_NIC01/ipConfigurations/default",
"name": "default",
"properties": {
"privateIPAddress": "10.10.0.5",
"privateIPAllocationMethod": "Dynamic",
"provisioningState": "Succeeded",
"publicIPAddress": {
"id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroup/myResourceGroup/providers/Microsoft.Network/publicIPAddresses/testvm10_PIP01",
"name": "testvm10_PIP01",
"properties": {
"idleTimeoutInMinutes": 4,
"ipAddress": "13.92.246.197",
"ipConfiguration": {
"id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroup/myResourceGroup/providers/Microsoft.Network/networkInterfaces/testvm10_NIC01/ipConfigurations/default"
},
"provisioningState": "Succeeded",
"publicIPAllocationMethod": "Static",
"resourceGuid": "3447d987-ca0d-4eca-818b-5dddc0625b42"
}
}
}
}
],
"macAddress": "00-0D-3A-12-AA-14",
"primary": true,
"provisioningState": "Succeeded",
"resourceGuid": "10979e12-ccf9-42ee-9f6d-ff2cc63b3844",
"virtualMachine": {
"id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroup/myResourceGroup/providers/Microsoft.Compute/virtualMachines/testvm10"
}
}
}
]
},
"osProfile": {
"adminUsername": "chouseknecht",
"computerName": "test10",
"linuxConfiguration": {
"disablePasswordAuthentication": false
},
"secrets": []
},
"provisioningState": "Succeeded",
"storageProfile": {
"dataDisks": [
{
"caching": "ReadWrite",
"createOption": "empty",
"diskSizeGB": 64,
"lun": 0,
"name": "datadisk1.vhd",
"vhd": {
"uri": "https://testvm10sa1.blob.core.windows.net/datadisk/datadisk1.vhd"
}
}
],
"imageReference": {
"offer": "CentOS",
"publisher": "OpenLogic",
"sku": "7.1",
"version": "7.1.20160308"
},
"osDisk": {
"caching": "ReadOnly",
"createOption": "fromImage",
"name": "testvm10.vhd",
"osType": "Linux",
"vhd": {
"uri": "https://testvm10sa1.blob.core.windows.net/vhds/testvm10.vhd"
}
}
}
},
"type": "Microsoft.Compute/virtualMachines"
}
''' # NOQA
import base64
import random
import re
try:
from msrestazure.azure_exceptions import CloudError
from msrestazure.tools import parse_resource_id
from msrest.polling import LROPoller
except ImportError:
# This is handled in azure_rm_common
pass
from ansible.module_utils.basic import to_native, to_bytes
from ansible.module_utils.azure_rm_common import AzureRMModuleBase, azure_id_to_dict, normalize_location_name, format_resource_id
AZURE_OBJECT_CLASS = 'VirtualMachine'
AZURE_ENUM_MODULES = ['azure.mgmt.compute.models']
def extract_names_from_blob_uri(blob_uri, storage_suffix):
# HACK: ditch this once python SDK supports get by URI
m = re.match(r'^https?://(?P<accountname>[^.]+)\.blob\.{0}/'
r'(?P<containername>[^/]+)/(?P<blobname>.+)$'.format(storage_suffix), blob_uri)
if not m:
raise Exception("unable to parse blob uri '%s'" % blob_uri)
extracted_names = m.groupdict()
return extracted_names
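# Illustrative parse (storage_suffix would be "core.windows.net" for public Azure):
#   "https://testvm10sa1.blob.core.windows.net/vhds/testvm10.vhd"
#   -> {'accountname': 'testvm10sa1', 'containername': 'vhds', 'blobname': 'testvm10.vhd'}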
class AzureRMVirtualMachine(AzureRMModuleBase):
def __init__(self):
self.module_arg_spec = dict(
resource_group=dict(type='str', required=True),
name=dict(type='str', required=True),
custom_data=dict(type='str'),
state=dict(choices=['present', 'absent'], default='present', type='str'),
location=dict(type='str'),
short_hostname=dict(type='str'),
vm_size=dict(type='str'),
admin_username=dict(type='str'),
admin_password=dict(type='str', no_log=True),
ssh_password_enabled=dict(type='bool', default=True),
ssh_public_keys=dict(type='list'),
image=dict(type='raw'),
availability_set=dict(type='str'),
storage_account_name=dict(type='str', aliases=['storage_account']),
storage_container_name=dict(type='str', aliases=['storage_container'], default='vhds'),
storage_blob_name=dict(type='str', aliases=['storage_blob']),
os_disk_caching=dict(type='str', aliases=['disk_caching'], choices=['ReadOnly', 'ReadWrite']),
os_disk_size_gb=dict(type='int'),
managed_disk_type=dict(type='str', choices=['Standard_LRS', 'StandardSSD_LRS', 'Premium_LRS']),
os_disk_name=dict(type='str'),
os_type=dict(type='str', choices=['Linux', 'Windows'], default='Linux'),
public_ip_allocation_method=dict(type='str', choices=['Dynamic', 'Static', 'Disabled'], default='Static',
aliases=['public_ip_allocation']),
open_ports=dict(type='list'),
network_interface_names=dict(type='list', aliases=['network_interfaces'], elements='raw'),
remove_on_absent=dict(type='list', default=['all']),
virtual_network_resource_group=dict(type='str'),
virtual_network_name=dict(type='str', aliases=['virtual_network']),
subnet_name=dict(type='str', aliases=['subnet']),
allocated=dict(type='bool', default=True),
restarted=dict(type='bool', default=False),
started=dict(type='bool', default=True),
generalized=dict(type='bool', default=False),
data_disks=dict(type='list'),
plan=dict(type='dict'),
zones=dict(type='list'),
accept_terms=dict(type='bool', default=False),
license_type=dict(type='str', choices=['Windows_Server', 'Windows_Client']),
vm_identity=dict(type='str', choices=['SystemAssigned']),
winrm=dict(type='list'),
boot_diagnostics=dict(type='dict'),
)
self.resource_group = None
self.name = None
self.custom_data = None
self.state = None
self.location = None
self.short_hostname = None
self.vm_size = None
self.admin_username = None
self.admin_password = None
self.ssh_password_enabled = None
self.ssh_public_keys = None
self.image = None
self.availability_set = None
self.storage_account_name = None
self.storage_container_name = None
self.storage_blob_name = None
self.os_type = None
self.os_disk_caching = None
self.os_disk_size_gb = None
self.managed_disk_type = None
self.os_disk_name = None
self.network_interface_names = None
self.remove_on_absent = set()
self.tags = None
self.force = None
self.public_ip_allocation_method = None
self.open_ports = None
self.virtual_network_resource_group = None
self.virtual_network_name = None
self.subnet_name = None
self.allocated = None
self.restarted = None
self.started = None
self.generalized = None
self.differences = None
self.data_disks = None
self.plan = None
self.accept_terms = None
self.zones = None
self.license_type = None
self.vm_identity = None
self.boot_diagnostics = None
self.results = dict(
changed=False,
actions=[],
powerstate_change=None,
ansible_facts=dict(azure_vm=None)
)
super(AzureRMVirtualMachine, self).__init__(derived_arg_spec=self.module_arg_spec,
supports_check_mode=True)
@property
def boot_diagnostics_present(self):
return self.boot_diagnostics is not None and 'enabled' in self.boot_diagnostics
def get_boot_diagnostics_storage_account(self, limited=False, vm_dict=None):
"""
Get the boot diagnostics storage account.
Arguments:
- limited - if true, limit the logic to the boot_diagnostics storage account
this is used if initial creation of the VM has a stanza with
boot_diagnostics disabled, so we only create a storage account
if the user specifies a storage account name inside the boot_diagnostics
schema
- vm_dict - if invoked on an update, this is the current state of the vm including
tags, like the default storage group tag '_own_sa_'.
Normal behavior:
- try the self.boot_diagnostics.storage_account field
- if not there, try the self.storage_account_name field
- if not there, use the default storage account
If limited is True:
- try the self.boot_diagnostics.storage_account field
- if not there, None
"""
bsa = None
if 'storage_account' in self.boot_diagnostics:
bsa = self.get_storage_account(self.boot_diagnostics['storage_account'])
elif limited:
return None
elif self.storage_account_name:
bsa = self.get_storage_account(self.storage_account_name)
else:
bsa = self.create_default_storage_account(vm_dict=vm_dict)
self.log("boot diagnostics storage account:")
self.log(self.serialize_obj(bsa, 'StorageAccount'), pretty_print=True)
return bsa
def exec_module(self, **kwargs):
for key in list(self.module_arg_spec.keys()) + ['tags']:
setattr(self, key, kwargs[key])
# make sure options are lower case
self.remove_on_absent = set([resource.lower() for resource in self.remove_on_absent])
# convert elements to ints
self.zones = [int(i) for i in self.zones] if self.zones else None
changed = False
powerstate_change = None
results = dict()
vm = None
network_interfaces = []
requested_storage_uri = None
requested_vhd_uri = None
data_disk_requested_vhd_uri = None
disable_ssh_password = None
vm_dict = None
image_reference = None
custom_image = False
resource_group = self.get_resource_group(self.resource_group)
if not self.location:
# Set default location
self.location = resource_group.location
self.location = normalize_location_name(self.location)
if self.state == 'present':
# Verify parameters and resolve any defaults
if self.vm_size and not self.vm_size_is_valid():
self.fail("Parameter error: vm_size {0} is not valid for your subscription and location.".format(
self.vm_size
))
if self.network_interface_names:
for nic_name in self.network_interface_names:
nic = self.parse_network_interface(nic_name)
network_interfaces.append(nic)
if self.ssh_public_keys:
msg = "Parameter error: expecting ssh_public_keys to be a list of type dict where " \
"each dict contains keys: path, key_data."
for key in self.ssh_public_keys:
if not isinstance(key, dict):
self.fail(msg)
if not key.get('path') or not key.get('key_data'):
self.fail(msg)
if self.image and isinstance(self.image, dict):
if all(key in self.image for key in ('publisher', 'offer', 'sku', 'version')):
marketplace_image = self.get_marketplace_image_version()
if self.image['version'] == 'latest':
self.image['version'] = marketplace_image.name
self.log("Using image version {0}".format(self.image['version']))
image_reference = self.compute_models.ImageReference(
publisher=self.image['publisher'],
offer=self.image['offer'],
sku=self.image['sku'],
version=self.image['version']
)
elif self.image.get('name'):
custom_image = True
image_reference = self.get_custom_image_reference(
self.image.get('name'),
self.image.get('resource_group'))
elif self.image.get('id'):
try:
image_reference = self.compute_models.ImageReference(id=self.image['id'])
except Exception as exc:
self.fail("id Error: Cannot get image from the reference id - {0}".format(self.image['id']))
else:
self.fail("parameter error: expecting image to contain [publisher, offer, sku, version], [name, resource_group] or [id]")
elif self.image and isinstance(self.image, str):
custom_image = True
image_reference = self.get_custom_image_reference(self.image)
elif self.image:
self.fail("parameter error: expecting image to be a string or dict not {0}".format(type(self.image).__name__))
if self.plan:
if not self.plan.get('name') or not self.plan.get('product') or not self.plan.get('publisher'):
self.fail("parameter error: plan must include name, product, and publisher")
if not self.storage_blob_name and not self.managed_disk_type:
self.storage_blob_name = self.name + '.vhd'
elif self.managed_disk_type:
self.storage_blob_name = self.name
if self.storage_account_name and not self.managed_disk_type:
properties = self.get_storage_account(self.storage_account_name)
requested_storage_uri = properties.primary_endpoints.blob
requested_vhd_uri = '{0}{1}/{2}'.format(requested_storage_uri,
self.storage_container_name,
self.storage_blob_name)
disable_ssh_password = not self.ssh_password_enabled
try:
self.log("Fetching virtual machine {0}".format(self.name))
vm = self.compute_client.virtual_machines.get(self.resource_group, self.name, expand='instanceview')
self.check_provisioning_state(vm, self.state)
vm_dict = self.serialize_vm(vm)
if self.state == 'present':
differences = []
current_nics = []
results = vm_dict
# Try to determine if the VM needs to be updated
if self.network_interface_names:
for nic in vm_dict['properties']['networkProfile']['networkInterfaces']:
current_nics.append(nic['id'])
if set(current_nics) != set(network_interfaces):
self.log('CHANGED: virtual machine {0} - network interfaces are different.'.format(self.name))
differences.append('Network Interfaces')
updated_nics = [dict(id=id, primary=(i == 0))
for i, id in enumerate(network_interfaces)]
vm_dict['properties']['networkProfile']['networkInterfaces'] = updated_nics
changed = True
if self.os_disk_caching and \
self.os_disk_caching != vm_dict['properties']['storageProfile']['osDisk']['caching']:
self.log('CHANGED: virtual machine {0} - OS disk caching'.format(self.name))
differences.append('OS Disk caching')
changed = True
vm_dict['properties']['storageProfile']['osDisk']['caching'] = self.os_disk_caching
if self.os_disk_name and \
self.os_disk_name != vm_dict['properties']['storageProfile']['osDisk']['name']:
self.log('CHANGED: virtual machine {0} - OS disk name'.format(self.name))
differences.append('OS Disk name')
changed = True
vm_dict['properties']['storageProfile']['osDisk']['name'] = self.os_disk_name
if self.os_disk_size_gb and \
self.os_disk_size_gb != vm_dict['properties']['storageProfile']['osDisk'].get('diskSizeGB'):
self.log('CHANGED: virtual machine {0} - OS disk size '.format(self.name))
differences.append('OS Disk size')
changed = True
vm_dict['properties']['storageProfile']['osDisk']['diskSizeGB'] = self.os_disk_size_gb
if self.vm_size and \
self.vm_size != vm_dict['properties']['hardwareProfile']['vmSize']:
self.log('CHANGED: virtual machine {0} - size '.format(self.name))
differences.append('VM size')
changed = True
vm_dict['properties']['hardwareProfile']['vmSize'] = self.vm_size
update_tags, vm_dict['tags'] = self.update_tags(vm_dict.get('tags', dict()))
if update_tags:
differences.append('Tags')
changed = True
if self.short_hostname and self.short_hostname != vm_dict['properties']['osProfile']['computerName']:
self.log('CHANGED: virtual machine {0} - short hostname'.format(self.name))
differences.append('Short Hostname')
changed = True
vm_dict['properties']['osProfile']['computerName'] = self.short_hostname
if self.started and vm_dict['powerstate'] not in ['starting', 'running'] and self.allocated:
self.log("CHANGED: virtual machine {0} not running and requested state 'running'".format(self.name))
changed = True
powerstate_change = 'poweron'
elif self.state == 'present' and vm_dict['powerstate'] == 'running' and self.restarted:
self.log("CHANGED: virtual machine {0} {1} and requested state 'restarted'"
.format(self.name, vm_dict['powerstate']))
changed = True
powerstate_change = 'restarted'
elif self.state == 'present' and not self.allocated and vm_dict['powerstate'] not in ['deallocated', 'deallocating']:
self.log("CHANGED: virtual machine {0} {1} and requested state 'deallocated'"
.format(self.name, vm_dict['powerstate']))
changed = True
powerstate_change = 'deallocated'
elif not self.started and vm_dict['powerstate'] == 'running':
self.log("CHANGED: virtual machine {0} running and requested state 'stopped'".format(self.name))
changed = True
powerstate_change = 'poweroff'
elif self.generalized and vm_dict['powerstate'] != 'generalized':
self.log("CHANGED: virtual machine {0} requested to be 'generalized'".format(self.name))
changed = True
powerstate_change = 'generalized'
vm_dict['zones'] = [int(i) for i in vm_dict['zones']] if 'zones' in vm_dict and vm_dict['zones'] else None
if self.zones != vm_dict['zones']:
self.log("CHANGED: virtual machine {0} zones".format(self.name))
differences.append('Zones')
changed = True
if self.license_type is not None and vm_dict['properties'].get('licenseType') != self.license_type:
differences.append('License Type')
changed = True
# Defaults for boot diagnostics
if 'diagnosticsProfile' not in vm_dict['properties']:
vm_dict['properties']['diagnosticsProfile'] = {}
if 'bootDiagnostics' not in vm_dict['properties']['diagnosticsProfile']:
vm_dict['properties']['diagnosticsProfile']['bootDiagnostics'] = {
'enabled': False,
'storageUri': None
}
if self.boot_diagnostics_present:
current_boot_diagnostics = vm_dict['properties']['diagnosticsProfile']['bootDiagnostics']
boot_diagnostics_changed = False
if self.boot_diagnostics['enabled'] != current_boot_diagnostics['enabled']:
current_boot_diagnostics['enabled'] = self.boot_diagnostics['enabled']
boot_diagnostics_changed = True
boot_diagnostics_storage_account = self.get_boot_diagnostics_storage_account(
limited=not self.boot_diagnostics['enabled'], vm_dict=vm_dict)
boot_diagnostics_blob = boot_diagnostics_storage_account.primary_endpoints.blob if boot_diagnostics_storage_account else None
if current_boot_diagnostics['storageUri'] != boot_diagnostics_blob:
current_boot_diagnostics['storageUri'] = boot_diagnostics_blob
boot_diagnostics_changed = True
if boot_diagnostics_changed:
differences.append('Boot Diagnostics')
changed = True
# Adding boot diagnostics can create a default storage account after initial creation
# this means we might also need to update the _own_sa_ tag
own_sa = (self.tags or {}).get('_own_sa_', None)
cur_sa = vm_dict.get('tags', {}).get('_own_sa_', None)
if own_sa and own_sa != cur_sa:
if 'Tags' not in differences:
differences.append('Tags')
if 'tags' not in vm_dict:
vm_dict['tags'] = {}
vm_dict['tags']['_own_sa_'] = own_sa
changed = True
self.differences = differences
elif self.state == 'absent':
self.log("CHANGED: virtual machine {0} exists and requested state is 'absent'".format(self.name))
results = dict()
changed = True
except CloudError:
self.log('Virtual machine {0} does not exist'.format(self.name))
if self.state == 'present':
self.log("CHANGED: virtual machine {0} does not exist but state is 'present'.".format(self.name))
changed = True
self.results['changed'] = changed
self.results['ansible_facts']['azure_vm'] = results
self.results['powerstate_change'] = powerstate_change
if self.check_mode:
return self.results
if changed:
if self.state == 'present':
if not vm:
# Create the VM
self.log("Create virtual machine {0}".format(self.name))
self.results['actions'].append('Created VM {0}'.format(self.name))
if self.os_type == 'Linux':
if disable_ssh_password and not self.ssh_public_keys:
self.fail("Parameter error: ssh_public_keys required when disabling SSH password.")
if not image_reference:
self.fail("Parameter error: an image is required when creating a virtual machine.")
availability_set_resource = None
if self.availability_set:
parsed_availability_set = parse_resource_id(self.availability_set)
availability_set = self.get_availability_set(parsed_availability_set.get('resource_group', self.resource_group),
parsed_availability_set.get('name'))
availability_set_resource = self.compute_models.SubResource(id=availability_set.id)
if self.zones:
self.fail("Parameter error: you can't use Availability Set and Availability Zones at the same time")
# Get defaults
if not self.network_interface_names:
default_nic = self.create_default_nic()
self.log("network interface:")
self.log(self.serialize_obj(default_nic, 'NetworkInterface'), pretty_print=True)
network_interfaces = [default_nic.id]
# os disk
if not self.storage_account_name and not self.managed_disk_type:
storage_account = self.create_default_storage_account()
self.log("os disk storage account:")
self.log(self.serialize_obj(storage_account, 'StorageAccount'), pretty_print=True)
requested_storage_uri = 'https://{0}.blob.{1}/'.format(
storage_account.name,
self._cloud_environment.suffixes.storage_endpoint)
requested_vhd_uri = '{0}{1}/{2}'.format(
requested_storage_uri,
self.storage_container_name,
self.storage_blob_name)
# disk caching
if not self.os_disk_caching:
self.os_disk_caching = 'ReadOnly'
if not self.short_hostname:
self.short_hostname = self.name
nics = [self.compute_models.NetworkInterfaceReference(id=id, primary=(i == 0))
for i, id in enumerate(network_interfaces)]
# os disk
if self.managed_disk_type:
vhd = None
managed_disk = self.compute_models.ManagedDiskParameters(storage_account_type=self.managed_disk_type)
elif custom_image:
vhd = None
managed_disk = None
else:
vhd = self.compute_models.VirtualHardDisk(uri=requested_vhd_uri)
managed_disk = None
plan = None
if self.plan:
plan = self.compute_models.Plan(name=self.plan.get('name'), product=self.plan.get('product'),
publisher=self.plan.get('publisher'),
promotion_code=self.plan.get('promotion_code'))
# do this before creating vm_resource as it can modify tags
if self.boot_diagnostics_present and self.boot_diagnostics['enabled']:
boot_diag_storage_account = self.get_boot_diagnostics_storage_account()
os_profile = None
if self.admin_username:
os_profile = self.compute_models.OSProfile(
admin_username=self.admin_username,
computer_name=self.short_hostname,
)
vm_resource = self.compute_models.VirtualMachine(
location=self.location,
tags=self.tags,
os_profile=os_profile,
hardware_profile=self.compute_models.HardwareProfile(
vm_size=self.vm_size
),
storage_profile=self.compute_models.StorageProfile(
os_disk=self.compute_models.OSDisk(
name=self.os_disk_name if self.os_disk_name else self.storage_blob_name,
vhd=vhd,
managed_disk=managed_disk,
create_option=self.compute_models.DiskCreateOptionTypes.from_image,
caching=self.os_disk_caching,
disk_size_gb=self.os_disk_size_gb
),
image_reference=image_reference,
),
network_profile=self.compute_models.NetworkProfile(
network_interfaces=nics
),
availability_set=availability_set_resource,
plan=plan,
zones=self.zones,
)
if self.license_type is not None:
vm_resource.license_type = self.license_type
if self.vm_identity:
vm_resource.identity = self.compute_models.VirtualMachineIdentity(type=self.vm_identity)
if self.winrm:
winrm_listeners = list()
for winrm_listener in self.winrm:
winrm_listeners.append(self.compute_models.WinRMListener(
protocol=winrm_listener.get('protocol'),
certificate_url=winrm_listener.get('certificate_url')
))
if winrm_listener.get('source_vault'):
if not vm_resource.os_profile.secrets:
vm_resource.os_profile.secrets = list()
vm_resource.os_profile.secrets.append(self.compute_models.VaultSecretGroup(
source_vault=self.compute_models.SubResource(
id=winrm_listener.get('source_vault')
),
vault_certificates=[
self.compute_models.VaultCertificate(
certificate_url=winrm_listener.get('certificate_url'),
certificate_store=winrm_listener.get('certificate_store')
),
]
))
winrm = self.compute_models.WinRMConfiguration(
listeners=winrm_listeners
)
if not vm_resource.os_profile.windows_configuration:
vm_resource.os_profile.windows_configuration = self.compute_models.WindowsConfiguration(
win_rm=winrm
)
elif not vm_resource.os_profile.windows_configuration.win_rm:
vm_resource.os_profile.windows_configuration.win_rm = winrm
if self.boot_diagnostics_present:
vm_resource.diagnostics_profile = self.compute_models.DiagnosticsProfile(
boot_diagnostics=self.compute_models.BootDiagnostics(
enabled=self.boot_diagnostics['enabled'],
storage_uri=boot_diag_storage_account.primary_endpoints.blob))
if self.admin_password:
vm_resource.os_profile.admin_password = self.admin_password
if self.custom_data:
# Azure SDK (erroneously?) wants native string type for this
vm_resource.os_profile.custom_data = to_native(base64.b64encode(to_bytes(self.custom_data)))
if self.os_type == 'Linux' and os_profile:
vm_resource.os_profile.linux_configuration = self.compute_models.LinuxConfiguration(
disable_password_authentication=disable_ssh_password
)
if self.ssh_public_keys:
ssh_config = self.compute_models.SshConfiguration()
ssh_config.public_keys = \
[self.compute_models.SshPublicKey(path=key['path'], key_data=key['key_data']) for key in self.ssh_public_keys]
vm_resource.os_profile.linux_configuration.ssh = ssh_config
# data disk
if self.data_disks:
data_disks = []
count = 0
for data_disk in self.data_disks:
if not data_disk.get('managed_disk_type'):
if not data_disk.get('storage_blob_name'):
data_disk['storage_blob_name'] = self.name + '-data-' + str(count) + '.vhd'
count += 1
if data_disk.get('storage_account_name'):
data_disk_storage_account = self.get_storage_account(data_disk['storage_account_name'])
else:
data_disk_storage_account = self.create_default_storage_account()
self.log("data disk storage account:")
self.log(self.serialize_obj(data_disk_storage_account, 'StorageAccount'), pretty_print=True)
if not data_disk.get('storage_container_name'):
data_disk['storage_container_name'] = 'vhds'
data_disk_requested_vhd_uri = 'https://{0}.blob.{1}/{2}/{3}'.format(
data_disk_storage_account.name,
self._cloud_environment.suffixes.storage_endpoint,
data_disk['storage_container_name'],
data_disk['storage_blob_name']
)
if not data_disk.get('managed_disk_type'):
data_disk_managed_disk = None
disk_name = data_disk['storage_blob_name']
data_disk_vhd = self.compute_models.VirtualHardDisk(uri=data_disk_requested_vhd_uri)
else:
data_disk_vhd = None
data_disk_managed_disk = self.compute_models.ManagedDiskParameters(storage_account_type=data_disk['managed_disk_type'])
disk_name = self.name + "-datadisk-" + str(count)
count += 1
data_disk['caching'] = data_disk.get(
'caching', 'ReadOnly'
)
data_disks.append(self.compute_models.DataDisk(
lun=data_disk['lun'],
name=disk_name,
vhd=data_disk_vhd,
caching=data_disk['caching'],
create_option=self.compute_models.DiskCreateOptionTypes.empty,
disk_size_gb=data_disk['disk_size_gb'],
managed_disk=data_disk_managed_disk,
))
vm_resource.storage_profile.data_disks = data_disks
# Before creating VM accept terms of plan if `accept_terms` is True
if self.accept_terms is True:
if not self.plan or not all([self.plan.get('name'), self.plan.get('product'), self.plan.get('publisher')]):
self.fail("parameter error: plan must be specified and include name, product, and publisher")
try:
plan_name = self.plan.get('name')
plan_product = self.plan.get('product')
plan_publisher = self.plan.get('publisher')
term = self.marketplace_client.marketplace_agreements.get(
publisher_id=plan_publisher, offer_id=plan_product, plan_id=plan_name)
term.accepted = True
self.marketplace_client.marketplace_agreements.create(
publisher_id=plan_publisher, offer_id=plan_product, plan_id=plan_name, parameters=term)
except Exception as exc:
self.fail(("Error accepting terms for virtual machine {0} with plan {1}. " +
"Only service admin/account admin users can purchase images " +
"from the marketplace. - {2}").format(self.name, self.plan, str(exc)))
self.log("Create virtual machine with parameters:")
self.create_or_update_vm(vm_resource, 'all_autocreated' in self.remove_on_absent)
elif self.differences and len(self.differences) > 0:
# Update the VM based on detected config differences
self.log("Update virtual machine {0}".format(self.name))
self.results['actions'].append('Updated VM {0}'.format(self.name))
nics = [self.compute_models.NetworkInterfaceReference(id=interface['id'], primary=(i == 0))
for i, interface in enumerate(vm_dict['properties']['networkProfile']['networkInterfaces'])]
# os disk
if not vm_dict['properties']['storageProfile']['osDisk'].get('managedDisk'):
managed_disk = None
vhd = self.compute_models.VirtualHardDisk(uri=vm_dict['properties']['storageProfile']['osDisk'].get('vhd', {}).get('uri'))
else:
vhd = None
managed_disk = self.compute_models.ManagedDiskParameters(
storage_account_type=vm_dict['properties']['storageProfile']['osDisk']['managedDisk'].get('storageAccountType')
)
availability_set_resource = None
try:
availability_set_resource = self.compute_models.SubResource(id=vm_dict['properties']['availabilitySet'].get('id'))
except Exception:
# pass if the availability set is not set
pass
if 'imageReference' in vm_dict['properties']['storageProfile'].keys():
if 'id' in vm_dict['properties']['storageProfile']['imageReference'].keys():
image_reference = self.compute_models.ImageReference(
id=vm_dict['properties']['storageProfile']['imageReference']['id']
)
else:
image_reference = self.compute_models.ImageReference(
publisher=vm_dict['properties']['storageProfile']['imageReference'].get('publisher'),
offer=vm_dict['properties']['storageProfile']['imageReference'].get('offer'),
sku=vm_dict['properties']['storageProfile']['imageReference'].get('sku'),
version=vm_dict['properties']['storageProfile']['imageReference'].get('version')
)
else:
image_reference = None
# You can't change a vm zone
if vm_dict['zones'] != self.zones:
self.fail("You can't change the Availability Zone of a virtual machine (have: {0}, want: {1})".format(vm_dict['zones'], self.zones))
if 'osProfile' in vm_dict['properties']:
os_profile = self.compute_models.OSProfile(
admin_username=vm_dict['properties'].get('osProfile', {}).get('adminUsername'),
computer_name=vm_dict['properties'].get('osProfile', {}).get('computerName')
)
else:
os_profile = None
vm_resource = self.compute_models.VirtualMachine(
location=vm_dict['location'],
os_profile=os_profile,
hardware_profile=self.compute_models.HardwareProfile(
vm_size=vm_dict['properties']['hardwareProfile'].get('vmSize')
),
storage_profile=self.compute_models.StorageProfile(
os_disk=self.compute_models.OSDisk(
name=vm_dict['properties']['storageProfile']['osDisk'].get('name'),
vhd=vhd,
managed_disk=managed_disk,
create_option=vm_dict['properties']['storageProfile']['osDisk'].get('createOption'),
os_type=vm_dict['properties']['storageProfile']['osDisk'].get('osType'),
caching=vm_dict['properties']['storageProfile']['osDisk'].get('caching'),
disk_size_gb=vm_dict['properties']['storageProfile']['osDisk'].get('diskSizeGB')
),
image_reference=image_reference
),
availability_set=availability_set_resource,
network_profile=self.compute_models.NetworkProfile(
network_interfaces=nics
)
)
if self.license_type is not None:
vm_resource.license_type = self.license_type
if self.boot_diagnostics is not None:
vm_resource.diagnostics_profile = self.compute_models.DiagnosticsProfile(
boot_diagnostics=self.compute_models.BootDiagnostics(
enabled=vm_dict['properties']['diagnosticsProfile']['bootDiagnostics']['enabled'],
storage_uri=vm_dict['properties']['diagnosticsProfile']['bootDiagnostics']['storageUri']))
if vm_dict.get('tags'):
vm_resource.tags = vm_dict['tags']
# Add custom_data, if provided
if vm_dict['properties'].get('osProfile', {}).get('customData'):
custom_data = vm_dict['properties']['osProfile']['customData']
# Azure SDK (erroneously?) wants native string type for this
vm_resource.os_profile.custom_data = to_native(base64.b64encode(to_bytes(custom_data)))
# Add admin password, if one provided
if vm_dict['properties'].get('osProfile', {}).get('adminPassword'):
vm_resource.os_profile.admin_password = vm_dict['properties']['osProfile']['adminPassword']
# Add linux configuration, if applicable
linux_config = vm_dict['properties'].get('osProfile', {}).get('linuxConfiguration')
if linux_config:
ssh_config = linux_config.get('ssh', None)
vm_resource.os_profile.linux_configuration = self.compute_models.LinuxConfiguration(
disable_password_authentication=linux_config.get('disablePasswordAuthentication', False)
)
if ssh_config:
public_keys = ssh_config.get('publicKeys')
if public_keys:
vm_resource.os_profile.linux_configuration.ssh = self.compute_models.SshConfiguration(public_keys=[])
for key in public_keys:
vm_resource.os_profile.linux_configuration.ssh.public_keys.append(
self.compute_models.SshPublicKey(path=key['path'], key_data=key['keyData'])
)
# data disk
if vm_dict['properties']['storageProfile'].get('dataDisks'):
data_disks = []
for data_disk in vm_dict['properties']['storageProfile']['dataDisks']:
if data_disk.get('managedDisk'):
managed_disk_type = data_disk['managedDisk'].get('storageAccountType')
data_disk_managed_disk = self.compute_models.ManagedDiskParameters(storage_account_type=managed_disk_type)
data_disk_vhd = None
else:
data_disk_vhd = data_disk['vhd']['uri']
data_disk_managed_disk = None
data_disks.append(self.compute_models.DataDisk(
lun=int(data_disk['lun']),
name=data_disk.get('name'),
vhd=data_disk_vhd,
caching=data_disk.get('caching'),
create_option=data_disk.get('createOption'),
disk_size_gb=int(data_disk['diskSizeGB']),
managed_disk=data_disk_managed_disk,
))
vm_resource.storage_profile.data_disks = data_disks
self.log("Update virtual machine with parameters:")
self.create_or_update_vm(vm_resource, False)
# Make sure we leave the machine in requested power state
if (powerstate_change == 'poweron' and
self.results['ansible_facts']['azure_vm']['powerstate'] != 'running'):
# Attempt to power on the machine
self.power_on_vm()
elif (powerstate_change == 'poweroff' and
self.results['ansible_facts']['azure_vm']['powerstate'] == 'running'):
# Attempt to power off the machine
self.power_off_vm()
elif powerstate_change == 'restarted':
self.restart_vm()
elif powerstate_change == 'deallocated':
self.deallocate_vm()
elif powerstate_change == 'generalized':
self.power_off_vm()
self.generalize_vm()
self.results['ansible_facts']['azure_vm'] = self.serialize_vm(self.get_vm())
elif self.state == 'absent':
# delete the VM
self.log("Delete virtual machine {0}".format(self.name))
self.results['ansible_facts']['azure_vm'] = None
self.delete_vm(vm)
# until we sort out how we want to do this globally
del self.results['actions']
return self.results
def get_vm(self):
'''
Get the VM with expanded instanceView
:return: VirtualMachine object
'''
try:
vm = self.compute_client.virtual_machines.get(self.resource_group, self.name, expand='instanceview')
return vm
except Exception as exc:
self.fail("Error getting virtual machine {0} - {1}".format(self.name, str(exc)))
def serialize_vm(self, vm):
'''
Convert a VirtualMachine object to dict.
:param vm: VirtualMachine object
:return: dict
'''
result = self.serialize_obj(vm, AZURE_OBJECT_CLASS, enum_modules=AZURE_ENUM_MODULES)
result['id'] = vm.id
result['name'] = vm.name
result['type'] = vm.type
result['location'] = vm.location
result['tags'] = vm.tags
result['powerstate'] = dict()
if vm.instance_view:
result['powerstate'] = next((s.code.replace('PowerState/', '')
for s in vm.instance_view.statuses if s.code.startswith('PowerState')), None)
for s in vm.instance_view.statuses:
if s.code.lower() == "osstate/generalized":
result['powerstate'] = 'generalized'
# Expand network interfaces to include config properties
for interface in vm.network_profile.network_interfaces:
int_dict = azure_id_to_dict(interface.id)
nic = self.get_network_interface(int_dict['resourceGroups'], int_dict['networkInterfaces'])
for interface_dict in result['properties']['networkProfile']['networkInterfaces']:
if interface_dict['id'] == interface.id:
nic_dict = self.serialize_obj(nic, 'NetworkInterface')
interface_dict['name'] = int_dict['networkInterfaces']
interface_dict['properties'] = nic_dict['properties']
# Expand public IPs to include config properties
for interface in result['properties']['networkProfile']['networkInterfaces']:
for config in interface['properties']['ipConfigurations']:
if config['properties'].get('publicIPAddress'):
pipid_dict = azure_id_to_dict(config['properties']['publicIPAddress']['id'])
try:
pip = self.network_client.public_ip_addresses.get(pipid_dict['resourceGroups'],
pipid_dict['publicIPAddresses'])
except Exception as exc:
self.fail("Error fetching public ip {0} - {1}".format(pipid_dict['publicIPAddresses'],
str(exc)))
pip_dict = self.serialize_obj(pip, 'PublicIPAddress')
config['properties']['publicIPAddress']['name'] = pipid_dict['publicIPAddresses']
config['properties']['publicIPAddress']['properties'] = pip_dict['properties']
self.log(result, pretty_print=True)
if self.state != 'absent' and not result['powerstate']:
self.fail("Failed to determine PowerState of virtual machine {0}".format(self.name))
return result
def power_off_vm(self):
self.log("Powered off virtual machine {0}".format(self.name))
self.results['actions'].append("Powered off virtual machine {0}".format(self.name))
try:
poller = self.compute_client.virtual_machines.power_off(self.resource_group, self.name)
self.get_poller_result(poller)
except Exception as exc:
self.fail("Error powering off virtual machine {0} - {1}".format(self.name, str(exc)))
return True
def power_on_vm(self):
self.results['actions'].append("Powered on virtual machine {0}".format(self.name))
self.log("Power on virtual machine {0}".format(self.name))
try:
poller = self.compute_client.virtual_machines.start(self.resource_group, self.name)
self.get_poller_result(poller)
except Exception as exc:
self.fail("Error powering on virtual machine {0} - {1}".format(self.name, str(exc)))
return True
def restart_vm(self):
self.results['actions'].append("Restarted virtual machine {0}".format(self.name))
self.log("Restart virtual machine {0}".format(self.name))
try:
poller = self.compute_client.virtual_machines.restart(self.resource_group, self.name)
self.get_poller_result(poller)
except Exception as exc:
self.fail("Error restarting virtual machine {0} - {1}".format(self.name, str(exc)))
return True
def deallocate_vm(self):
self.results['actions'].append("Deallocated virtual machine {0}".format(self.name))
self.log("Deallocate virtual machine {0}".format(self.name))
try:
poller = self.compute_client.virtual_machines.deallocate(self.resource_group, self.name)
self.get_poller_result(poller)
except Exception as exc:
self.fail("Error deallocating virtual machine {0} - {1}".format(self.name, str(exc)))
return True
def generalize_vm(self):
self.results['actions'].append("Generalize virtual machine {0}".format(self.name))
self.log("Generalize virtual machine {0}".format(self.name))
try:
response = self.compute_client.virtual_machines.generalize(self.resource_group, self.name)
if isinstance(response, LROPoller):
self.get_poller_result(response)
except Exception as exc:
self.fail("Error generalizing virtual machine {0} - {1}".format(self.name, str(exc)))
return True
def remove_autocreated_resources(self, tags):
if tags:
sa_name = tags.get('_own_sa_')
nic_name = tags.get('_own_nic_')
pip_name = tags.get('_own_pip_')
nsg_name = tags.get('_own_nsg_')
if sa_name:
self.delete_storage_account(self.resource_group, sa_name)
if nic_name:
self.delete_nic(self.resource_group, nic_name)
if pip_name:
self.delete_pip(self.resource_group, pip_name)
if nsg_name:
self.delete_nsg(self.resource_group, nsg_name)
def delete_vm(self, vm):
vhd_uris = []
managed_disk_ids = []
nic_names = []
pip_names = []
if 'all_autocreated' not in self.remove_on_absent:
if self.remove_on_absent.intersection(set(['all', 'virtual_storage'])):
# store the attached vhd info so we can nuke it after the VM is gone
if vm.storage_profile.os_disk.managed_disk:
self.log('Storing managed disk ID for deletion')
managed_disk_ids.append(vm.storage_profile.os_disk.managed_disk.id)
elif vm.storage_profile.os_disk.vhd:
self.log('Storing VHD URI for deletion')
vhd_uris.append(vm.storage_profile.os_disk.vhd.uri)
data_disks = vm.storage_profile.data_disks
for data_disk in data_disks:
if data_disk is not None:
if data_disk.vhd:
vhd_uris.append(data_disk.vhd.uri)
elif data_disk.managed_disk:
managed_disk_ids.append(data_disk.managed_disk.id)
# FUTURE: enable diff mode, move these there...
self.log("VHD URIs to delete: {0}".format(', '.join(vhd_uris)))
self.results['deleted_vhd_uris'] = vhd_uris
self.log("Managed disk IDs to delete: {0}".format(', '.join(managed_disk_ids)))
self.results['deleted_managed_disk_ids'] = managed_disk_ids
if self.remove_on_absent.intersection(set(['all', 'network_interfaces'])):
# store the attached nic info so we can nuke them after the VM is gone
self.log('Storing NIC names for deletion.')
for interface in vm.network_profile.network_interfaces:
id_dict = azure_id_to_dict(interface.id)
nic_names.append(dict(name=id_dict['networkInterfaces'], resource_group=id_dict['resourceGroups']))
self.log('NIC names to delete {0}'.format(str(nic_names)))
self.results['deleted_network_interfaces'] = nic_names
if self.remove_on_absent.intersection(set(['all', 'public_ips'])):
# also store each nic's attached public IPs and delete after the NIC is gone
for nic_dict in nic_names:
nic = self.get_network_interface(nic_dict['resource_group'], nic_dict['name'])
for ipc in nic.ip_configurations:
if ipc.public_ip_address:
pip_dict = azure_id_to_dict(ipc.public_ip_address.id)
pip_names.append(dict(name=pip_dict['publicIPAddresses'], resource_group=pip_dict['resourceGroups']))
self.log('Public IPs to delete are {0}'.format(str(pip_names)))
self.results['deleted_public_ips'] = pip_names
self.log("Deleting virtual machine {0}".format(self.name))
self.results['actions'].append("Deleted virtual machine {0}".format(self.name))
try:
poller = self.compute_client.virtual_machines.delete(self.resource_group, self.name)
# wait for the poller to finish
self.get_poller_result(poller)
except Exception as exc:
self.fail("Error deleting virtual machine {0} - {1}".format(self.name, str(exc)))
# TODO: parallelize nic, vhd, and public ip deletions with begin_deleting
# TODO: best-effort to keep deleting other linked resources if we encounter an error
if self.remove_on_absent.intersection(set(['all', 'virtual_storage'])):
self.log('Deleting VHDs')
self.delete_vm_storage(vhd_uris)
self.log('Deleting managed disks')
self.delete_managed_disks(managed_disk_ids)
if 'all' in self.remove_on_absent or 'all_autocreated' in self.remove_on_absent:
self.remove_autocreated_resources(vm.tags)
if self.remove_on_absent.intersection(set(['all', 'network_interfaces'])):
self.log('Deleting network interfaces')
for nic_dict in nic_names:
self.delete_nic(nic_dict['resource_group'], nic_dict['name'])
if self.remove_on_absent.intersection(set(['all', 'public_ips'])):
self.log('Deleting public IPs')
for pip_dict in pip_names:
self.delete_pip(pip_dict['resource_group'], pip_dict['name'])
if 'all' in self.remove_on_absent or 'all_autocreated' in self.remove_on_absent:
self.remove_autocreated_resources(vm.tags)
return True
def get_network_interface(self, resource_group, name):
try:
nic = self.network_client.network_interfaces.get(resource_group, name)
return nic
except Exception as exc:
self.fail("Error fetching network interface {0} - {1}".format(name, str(exc)))
return True
def delete_nic(self, resource_group, name):
self.log("Deleting network interface {0}".format(name))
self.results['actions'].append("Deleted network interface {0}".format(name))
try:
poller = self.network_client.network_interfaces.delete(resource_group, name)
self.get_poller_result(poller)
except Exception as exc:
self.fail("Error deleting network interface {0} - {1}".format(name, str(exc)))
# Delete doesn't return anything. If we get this far, assume success
return True
def delete_pip(self, resource_group, name):
self.results['actions'].append("Deleted public IP {0}".format(name))
try:
poller = self.network_client.public_ip_addresses.delete(resource_group, name)
self.get_poller_result(poller)
except Exception as exc:
self.fail("Error deleting {0} - {1}".format(name, str(exc)))
# Delete doesn't return anything. If we get this far, assume success.
return True
def delete_nsg(self, resource_group, name):
self.results['actions'].append("Deleted NSG {0}".format(name))
try:
poller = self.network_client.network_security_groups.delete(resource_group, name)
self.get_poller_result(poller)
except Exception as exc:
self.fail("Error deleting {0} - {1}".format(name, str(exc)))
return True
def delete_managed_disks(self, managed_disk_ids):
for mdi in managed_disk_ids:
try:
poller = self.rm_client.resources.delete_by_id(mdi, '2017-03-30')
self.get_poller_result(poller)
except Exception as exc:
self.fail("Error deleting managed disk {0} - {1}".format(mdi, str(exc)))
return True
def delete_storage_account(self, resource_group, name):
self.log("Delete storage account {0}".format(name))
self.results['actions'].append("Deleted storage account {0}".format(name))
try:
self.storage_client.storage_accounts.delete(self.resource_group, name)
except Exception as exc:
self.fail("Error deleting storage account {0} - {1}".format(name, str(exc)))
return True
def delete_vm_storage(self, vhd_uris):
# FUTURE: figure out a cloud_env independent way to delete these
for uri in vhd_uris:
self.log("Extracting info from blob uri '{0}'".format(uri))
try:
blob_parts = extract_names_from_blob_uri(uri, self._cloud_environment.suffixes.storage_endpoint)
except Exception as exc:
self.fail("Error parsing blob URI {0}".format(str(exc)))
storage_account_name = blob_parts['accountname']
container_name = blob_parts['containername']
blob_name = blob_parts['blobname']
blob_client = self.get_blob_client(self.resource_group, storage_account_name)
self.log("Delete blob {0}:{1}".format(container_name, blob_name))
self.results['actions'].append("Deleted blob {0}:{1}".format(container_name, blob_name))
try:
blob_client.delete_blob(container_name, blob_name)
except Exception as exc:
self.fail("Error deleting blob {0}:{1} - {2}".format(container_name, blob_name, str(exc)))
return True
def get_marketplace_image_version(self):
try:
versions = self.compute_client.virtual_machine_images.list(self.location,
self.image['publisher'],
self.image['offer'],
self.image['sku'])
except Exception as exc:
self.fail("Error fetching image {0} {1} {2} - {3}".format(self.image['publisher'],
self.image['offer'],
self.image['sku'],
str(exc)))
if versions:
if self.image['version'] == 'latest':
return versions[-1]
for version in versions:
if version.name == self.image['version']:
return version
self.fail("Error could not find image {0} {1} {2} {3}".format(self.image['publisher'],
self.image['offer'],
self.image['sku'],
self.image['version']))
return None
def get_custom_image_reference(self, name, resource_group=None):
try:
if resource_group:
vm_images = self.compute_client.images.list_by_resource_group(resource_group)
else:
vm_images = self.compute_client.images.list()
except Exception as exc:
self.fail("Error fetching custom images from subscription - {0}".format(str(exc)))
for vm_image in vm_images:
if vm_image.name == name:
self.log("Using custom image id {0}".format(vm_image.id))
return self.compute_models.ImageReference(id=vm_image.id)
self.fail("Error could not find image with name {0}".format(name))
return None
def get_availability_set(self, resource_group, name):
try:
return self.compute_client.availability_sets.get(resource_group, name)
except Exception as exc:
self.fail("Error fetching availability set {0} - {1}".format(name, str(exc)))
def get_storage_account(self, name):
try:
account = self.storage_client.storage_accounts.get_properties(self.resource_group,
name)
return account
except Exception as exc:
self.fail("Error fetching storage account {0} - {1}".format(name, str(exc)))
def create_or_update_vm(self, params, remove_autocreated_on_failure):
try:
poller = self.compute_client.virtual_machines.create_or_update(self.resource_group, self.name, params)
self.get_poller_result(poller)
except Exception as exc:
if remove_autocreated_on_failure:
self.remove_autocreated_resources(params.tags)
self.fail("Error creating or updating virtual machine {0} - {1}".format(self.name, str(exc)))
def vm_size_is_valid(self):
'''
Validate self.vm_size against the list of virtual machine sizes available for the account and location.
:return: boolean
'''
try:
sizes = self.compute_client.virtual_machine_sizes.list(self.location)
except Exception as exc:
self.fail("Error retrieving available machine sizes - {0}".format(str(exc)))
for size in sizes:
if size.name == self.vm_size:
return True
return False
def create_default_storage_account(self, vm_dict=None):
'''
Create (once) a default storage account <vm name>XXXX, where XXXX is a random number.
NOTE: If <vm name>XXXX exists, use it instead of failing. Highly unlikely.
If this method is called multiple times across executions it will return the same
storage account created with the random name which is stored in a tag on the VM.
vm_dict is passed in during an update, so we can obtain the _own_sa_ tag and return
the default storage account we created in a previous invocation
:return: storage account object
'''
account = None
valid_name = False
if self.tags is None:
self.tags = {}
if self.tags.get('_own_sa_', None):
# We previously created one in the same invocation
return self.get_storage_account(self.tags['_own_sa_'])
if vm_dict and vm_dict.get('tags', {}).get('_own_sa_', None):
# We previously created one in a previous invocation
# We must be updating, like adding boot diagnostics
return self.get_storage_account(vm_dict['tags']['_own_sa_'])
# Attempt to find a valid storage account name
storage_account_name_base = re.sub('[^a-zA-Z0-9]', '', self.name[:20].lower())
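# Illustrative example (VM name is hypothetical): self.name 'My-VM.01' yields
# the base 'myvm01', so candidates look like 'myvm014821' with a random
# four-digit suffix from the loop below.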
for i in range(0, 5):
rand = random.randrange(1000, 9999)
storage_account_name = storage_account_name_base + str(rand)
if self.check_storage_account_name(storage_account_name):
valid_name = True
break
if not valid_name:
self.fail("Failed to create a unique storage account name for {0}. Try using a different VM name."
.format(self.name))
try:
account = self.storage_client.storage_accounts.get_properties(self.resource_group, storage_account_name)
except CloudError:
pass
if account:
self.log("Storage account {0} found.".format(storage_account_name))
self.check_provisioning_state(account)
return account
sku = self.storage_models.Sku(name=self.storage_models.SkuName.standard_lrs)
sku.tier = self.storage_models.SkuTier.standard
kind = self.storage_models.Kind.storage
parameters = self.storage_models.StorageAccountCreateParameters(sku=sku, kind=kind, location=self.location)
self.log("Creating storage account {0} in location {1}".format(storage_account_name, self.location))
self.results['actions'].append("Created storage account {0}".format(storage_account_name))
try:
poller = self.storage_client.storage_accounts.create(self.resource_group, storage_account_name, parameters)
self.get_poller_result(poller)
except Exception as exc:
self.fail("Failed to create storage account: {0} - {1}".format(storage_account_name, str(exc)))
self.tags['_own_sa_'] = storage_account_name
return self.get_storage_account(storage_account_name)
def check_storage_account_name(self, name):
self.log("Checking storage account name availability for {0}".format(name))
try:
response = self.storage_client.storage_accounts.check_name_availability(name)
if response.reason == 'AccountNameInvalid':
raise Exception("Invalid default storage account name: {0}".format(name))
except Exception as exc:
self.fail("Error checking storage account name availability for {0} - {1}".format(name, str(exc)))
return response.name_available
def create_default_nic(self):
'''
Create a default Network Interface <vm name>01. Requires an existing virtual network
with one subnet. If NIC <vm name>01 exists, use it. Otherwise, create one.
:return: NIC object
'''
network_interface_name = self.name + '01'
nic = None
if self.tags is None:
self.tags = {}
self.log("Create default NIC {0}".format(network_interface_name))
self.log("Check to see if NIC {0} exists".format(network_interface_name))
try:
nic = self.network_client.network_interfaces.get(self.resource_group, network_interface_name)
except CloudError:
pass
if nic:
self.log("NIC {0} found.".format(network_interface_name))
self.check_provisioning_state(nic)
return nic
self.log("NIC {0} does not exist.".format(network_interface_name))
virtual_network_resource_group = None
if self.virtual_network_resource_group:
virtual_network_resource_group = self.virtual_network_resource_group
else:
virtual_network_resource_group = self.resource_group
if self.virtual_network_name:
try:
self.network_client.virtual_networks.list(virtual_network_resource_group, self.virtual_network_name)
virtual_network_name = self.virtual_network_name
except CloudError as exc:
self.fail("Error: fetching virtual network {0} - {1}".format(self.virtual_network_name, str(exc)))
else:
# Find a virtual network
no_vnets_msg = "Error: unable to find virtual network in resource group {0}. A virtual network " \
"with at least one subnet must exist in order to create a NIC for the virtual " \
"machine.".format(virtual_network_resource_group)
virtual_network_name = None
try:
vnets = self.network_client.virtual_networks.list(virtual_network_resource_group)
except CloudError:
self.log('cloud error!')
self.fail(no_vnets_msg)
for vnet in vnets:
virtual_network_name = vnet.name
self.log('vnet name: {0}'.format(vnet.name))
break
if not virtual_network_name:
self.fail(no_vnets_msg)
if self.subnet_name:
try:
subnet = self.network_client.subnets.get(virtual_network_resource_group, virtual_network_name, self.subnet_name)
subnet_id = subnet.id
except Exception as exc:
self.fail("Error: fetching subnet {0} - {1}".format(self.subnet_name, str(exc)))
else:
no_subnets_msg = "Error: unable to find a subnet in virtual network {0}. A virtual network " \
"with at least one subnet must exist in order to create a NIC for the virtual " \
"machine.".format(virtual_network_name)
subnet_id = None
try:
subnets = self.network_client.subnets.list(virtual_network_resource_group, virtual_network_name)
except CloudError:
self.fail(no_subnets_msg)
for subnet in subnets:
subnet_id = subnet.id
self.log('subnet id: {0}'.format(subnet_id))
break
if not subnet_id:
self.fail(no_subnets_msg)
pip = None
if self.public_ip_allocation_method != 'Disabled':
self.results['actions'].append('Created default public IP {0}'.format(self.name + '01'))
sku = self.network_models.PublicIPAddressSku(name="Standard") if self.zones else None
pip_facts = self.create_default_pip(self.resource_group, self.location, self.name + '01', self.public_ip_allocation_method, sku=sku)
pip = self.network_models.PublicIPAddress(id=pip_facts.id, location=pip_facts.location, resource_guid=pip_facts.resource_guid, sku=sku)
self.tags['_own_pip_'] = self.name + '01'
self.results['actions'].append('Created default security group {0}'.format(self.name + '01'))
group = self.create_default_securitygroup(self.resource_group, self.location, self.name + '01', self.os_type,
self.open_ports)
self.tags['_own_nsg_'] = self.name + '01'
parameters = self.network_models.NetworkInterface(
location=self.location,
ip_configurations=[
self.network_models.NetworkInterfaceIPConfiguration(
private_ip_allocation_method='Dynamic',
)
]
)
parameters.ip_configurations[0].subnet = self.network_models.Subnet(id=subnet_id)
parameters.ip_configurations[0].name = 'default'
parameters.network_security_group = self.network_models.NetworkSecurityGroup(id=group.id,
location=group.location,
resource_guid=group.resource_guid)
parameters.ip_configurations[0].public_ip_address = pip
self.log("Creating NIC {0}".format(network_interface_name))
self.log(self.serialize_obj(parameters, 'NetworkInterface'), pretty_print=True)
self.results['actions'].append("Created NIC {0}".format(network_interface_name))
try:
poller = self.network_client.network_interfaces.create_or_update(self.resource_group,
network_interface_name,
parameters)
new_nic = self.get_poller_result(poller)
self.tags['_own_nic_'] = network_interface_name
except Exception as exc:
self.fail("Error creating network interface {0} - {1}".format(network_interface_name, str(exc)))
return new_nic
def parse_network_interface(self, nic):
nic = self.parse_resource_to_dict(nic)
if 'name' not in nic:
self.fail("Invalid network interface {0}".format(str(nic)))
return format_resource_id(val=nic['name'],
subscription_id=nic['subscription_id'],
resource_group=nic['resource_group'],
namespace='Microsoft.Network',
types='networkInterfaces')
def main():
AzureRMVirtualMachine()
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,297 |
env lookup plugin error with utf8 chars in the variable value
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
The env lookup plugin fails with an error when the environment variable value contains a UTF-8 character.
The problem has existed since Ansible 2.9 and occurs only with Python 2.7.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
env lookup plugin
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.1
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/admin/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
```
Ubuntu 18.04.3 LTS
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```shell
TESTENVVAR=éáúőúöüó ansible all -i localhost, -m debug -a msg="{{ lookup('env','TESTENVVAR') }}"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
```
localhost | SUCCESS => {
"msg": "éáúőúöüó"
}
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
localhost | FAILED! => {
"msg": "the field 'args' has an invalid value ({u'msg': u\"{{ lookup('env','TESTENVVAR') }}\"}), and could not be converted to an dict.The error was: 'ascii' codec can't decode byte 0xc3 in position 0: ordinal not in range(128)"
}
```
|
https://github.com/ansible/ansible/issues/65297
|
https://github.com/ansible/ansible/pull/65541
|
05e2e1806162393d76542a75c2520c7d61c2d855
|
2fa8f9cfd80daf32c7d222190edf7cfc7234582a
| 2019-11-26T22:34:12Z |
python
| 2019-12-25T11:54:38Z |
changelogs/fragments/65541-fix-utf8-issue-env-lookup.yml
| |
lib/ansible/plugins/lookup/env.py
|
# (c) 2012, Jan-Piet Mens <jpmens(at)gmail.com>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = """
lookup: env
author: Jan-Piet Mens (@jpmens) <jpmens(at)gmail.com>
version_added: "0.9"
short_description: read the value of environment variables
description:
- Allows you to query the environment variables available on the controller when you invoked Ansible.
options:
_terms:
description: Environment variable or list of them to lookup the values for
required: True
"""
EXAMPLES = """
- debug: msg="{{ lookup('env','HOME') }} is an environment variable"
"""
RETURN = """
_list:
description:
- values from the environment variables.
type: list
"""
import os
from ansible.plugins.lookup import LookupBase
class LookupModule(LookupBase):
def run(self, terms, variables, **kwargs):
ret = []
for term in terms:
var = term.split()[0]
ret.append(os.getenv(var, ''))
return ret
|
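The plugin above hands back `os.getenv()` results unchanged, which on Python 2.7 are native byte strings, so a value with UTF-8 bytes later trips the ascii codec seen in the traceback. Below is a minimal sketch of one way to address it, assuming the `to_text` helper from `ansible.module_utils._text`; the change actually merged in PR 65541 may have taken a different route.

```python
# Sketch only: decode environment values to text before returning them, so
# Python 2 byte strings containing UTF-8 bytes never reach the ascii codec.
import os

from ansible.module_utils._text import to_text
from ansible.plugins.lookup import LookupBase


class LookupModule(LookupBase):

    def run(self, terms, variables, **kwargs):
        ret = []
        for term in terms:
            var = term.split()[0]
            # surrogate_or_strict surfaces undecodable bytes instead of
            # silently corrupting the returned value
            ret.append(to_text(os.getenv(var, ''), errors='surrogate_or_strict'))
        return ret
```

Either way, the value handed back to the templar is then text on both Python 2 and Python 3.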
test/integration/targets/lookups/runme.sh
|
#!/usr/bin/env bash
set -eux
source virtualenv.sh
# Requirements have to be installed prior to running ansible-playbook
# because plugins and requirements are loaded before the task runs
pip install passlib
ANSIBLE_ROLES_PATH=../ ansible-playbook lookups.yml "$@"
ansible-playbook template_lookup_vaulted.yml --vault-password-file test_vault_pass "$@"
ansible-playbook -i template_deepcopy/hosts template_deepcopy/playbook.yml "$@"
|
test/integration/targets/lookups/tasks/main.yml
|
# test code for lookup plugins
# Copyright: (c) 2014, James Tanner <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# FILE LOOKUP
- set_fact:
output_dir: "{{ lookup('env', 'OUTPUT_DIR') }}"
- name: make a new file to read
copy: dest={{output_dir}}/foo.txt mode=0644 content="bar"
- name: load the file as a fact
set_fact:
foo: "{{ lookup('file', output_dir + '/foo.txt' ) }}"
- debug: var=foo
- name: verify file lookup
assert:
that:
- "foo == 'bar'"
# PASSWORD LOOKUP
- name: remove previous password files and directory
file: dest={{item}} state=absent
with_items:
- "{{output_dir}}/lookup/password"
- "{{output_dir}}/lookup/password_with_salt"
- "{{output_dir}}/lookup"
- name: create a password file
set_fact:
newpass: "{{ lookup('password', output_dir + '/lookup/password length=8') }}"
- name: stat the password file directory
stat: path="{{output_dir}}/lookup"
register: result
- name: assert the directory's permissions
assert:
that:
- result.stat.mode == '0700'
- name: stat the password file
stat: path="{{output_dir}}/lookup/password"
register: result
- name: assert the file's permissions
assert:
that:
- result.stat.mode == '0600'
- name: get password length
shell: wc -c {{output_dir}}/lookup/password | awk '{print $1}'
register: wc_result
- debug: var=wc_result.stdout
- name: read password
shell: cat {{output_dir}}/lookup/password
register: cat_result
- debug: var=cat_result.stdout
- name: verify password
assert:
that:
- "wc_result.stdout == '9'"
- "cat_result.stdout == newpass"
- "' salt=' not in cat_result.stdout"
- name: fetch password from an existing file
set_fact:
pass2: "{{ lookup('password', output_dir + '/lookup/password length=8') }}"
- name: read password (again)
shell: cat {{output_dir}}/lookup/password
register: cat_result2
- debug: var=cat_result2.stdout
- name: verify password (again)
assert:
that:
- "cat_result2.stdout == newpass"
- "' salt=' not in cat_result2.stdout"
- name: create a password (with salt) file
debug: msg={{ lookup('password', output_dir + '/lookup/password_with_salt encrypt=sha256_crypt') }}
- name: read password and salt
shell: cat {{output_dir}}/lookup/password_with_salt
register: cat_pass_salt
- debug: var=cat_pass_salt.stdout
- name: fetch unencrypted password
set_fact:
newpass: "{{ lookup('password', output_dir + '/lookup/password_with_salt') }}"
- debug: var=newpass
- name: verify password and salt
assert:
that:
- "cat_pass_salt.stdout != newpass"
- "cat_pass_salt.stdout.startswith(newpass)"
- "' salt=' in cat_pass_salt.stdout"
- "' salt=' not in newpass"
- name: fetch unencrypted password (using empty encrypt parameter)
set_fact:
newpass2: "{{ lookup('password', output_dir + '/lookup/password_with_salt encrypt=') }}"
- name: verify lookup password behavior
assert:
that:
- "newpass == newpass2"
- name: verify that we can generate a 1st password without writing it
set_fact:
newpass: "{{ lookup('password', '/dev/null') }}"
- name: verify that we can generate a 2nd password without writing it
set_fact:
newpass2: "{{ lookup('password', '/dev/null') }}"
- name: verify lookup password behavior with /dev/null
assert:
that:
- "newpass != newpass2"
# ENV LOOKUP
- name: get HOME environment var value
shell: "echo $HOME"
register: home_var_value
- name: use env lookup to get HOME var
set_fact:
test_val: "{{ lookup('env', 'HOME') }}"
- debug: var=home_var_value.stdout
- debug: var=test_val
- name: compare values
assert:
that:
- "test_val == home_var_value.stdout"
# PIPE LOOKUP
# https://github.com/ansible/ansible/issues/6550
- name: confirm pipe lookup works with a single positional arg
debug: msg="{{ lookup('pipe', 'ls') }}"
# LOOKUP TEMPLATING
- name: use bare interpolation
debug: msg="got {{item}}"
with_items: "{{things1}}"
register: bare_var
- name: verify that list was interpolated
assert:
that:
- "bare_var.results[0].item == 1"
- "bare_var.results[1].item == 2"
- name: use list with bare strings in it
debug: msg={{item}}
with_items:
- things2
- things1
- name: use list with undefined var in it
debug: msg={{item}}
with_items: "{{things2}}"
ignore_errors: True
# BUG #10073 nested template handling
- name: set variable that clashes
set_fact:
PATH: foobar
- name: get PATH environment var value
set_fact:
known_var_value: "{{ lookup('pipe', 'echo $PATH') }}"
- name: do the lookup for env PATH
set_fact:
test_val: "{{ lookup('env', 'PATH') }}"
- debug: var=test_val
- name: compare values
assert:
that:
- "test_val != ''"
- "test_val == known_var_value"
- name: set with_dict
shell: echo "{{ item.key + '=' + item.value }}"
with_dict: "{{ mydict }}"
# URL Lookups
- name: Test that retrieving a url works
set_fact:
web_data: "{{ lookup('url', 'https://gist.githubusercontent.com/abadger/9858c22712f62a8effff/raw/43dd47ea691c90a5fa7827892c70241913351963/test') }}"
- name: Assert that the url was retrieved
assert:
that:
- "'one' in web_data"
- name: Test that retrieving a url with invalid cert fails
set_fact:
web_data: "{{ lookup('url', 'https://{{ badssl_host }}/') }}"
ignore_errors: True
register: url_invalid_cert
- assert:
that:
- "url_invalid_cert.failed"
- "'Error validating the server' in url_invalid_cert.msg or 'Hostname mismatch' in url_invalid_cert.msg or ( url_invalid_cert.msg is search('hostname .* doesn.t match .*'))"
- name: Test that retrieving a url with invalid cert with validate_certs=False works
set_fact:
web_data: "{{ lookup('url', 'https://{{ badssl_host }}/', validate_certs=False) }}"
register: url_no_validate_cert
- assert:
that:
- "'{{ badssl_host_substring }}' in web_data"
- name: Test cartesian lookup
debug: var=item
with_cartesian:
- ["A", "B", "C"]
- ["1", "2", "3"]
register: product
- name: Verify cartesian lookup
assert:
that:
- product.results[0]['item'] == ["A", "1"]
- product.results[1]['item'] == ["A", "2"]
- product.results[2]['item'] == ["A", "3"]
- product.results[3]['item'] == ["B", "1"]
- product.results[4]['item'] == ["B", "2"]
- product.results[5]['item'] == ["B", "3"]
- product.results[6]['item'] == ["C", "1"]
- product.results[7]['item'] == ["C", "2"]
- product.results[8]['item'] == ["C", "3"]
# Template lookups
# ref #18526
- name: Test that we have a proper jinja search path in template lookup
set_fact:
hello_world: "{{ lookup('template', 'hello.txt') }}"
- assert:
that:
- "hello_world|trim == 'Hello world!'"
- name: Test that we have a proper jinja search path in template lookup with different variable start and end string
vars:
my_var: world
set_fact:
hello_world_string: "{{ lookup('template', 'hello_string.txt', variable_start_string='[%', variable_end_string='%]') }}"
- assert:
that:
- "hello_world_string|trim == 'Hello world!'"
# Vars lookups
- name: Test that we can give it a single value and receive a single value
set_fact:
var_host: '{{ lookup("vars", "ansible_host") }}'
- assert:
that:
- 'var_host == ansible_host'
- name: Test that we can give a list of values to var and receive a list of values back
set_fact:
var_host_info: '{{ query("vars", "ansible_host", "ansible_connection") }}'
- assert:
that:
- 'var_host_info[0] == ansible_host'
- 'var_host_info[1] == ansible_connection'
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,297 |
env lookup plugin error with utf8 chars in the variable value
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
The env lookup plugin fails with an error when the environment variable value contains a UTF-8 character.
The problem has existed since Ansible 2.9 and occurs only with Python 2.7
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
env lookup plugin
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.1
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/admin/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
```
Ubuntu 18.04.3 LTS
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```shell
TESTENVVAR=éáúőúöüó ansible all -i localhost, -m debug -a msg="{{ lookup('env','TESTENVVAR') }}"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
```
localhost | SUCCESS => {
"msg": "éáúőúöüó"
}
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
localhost | FAILED! => {
"msg": "the field 'args' has an invalid value ({u'msg': u\"{{ lookup('env','TESTENVVAR') }}\"}), and could not be converted to an dict.The error was: 'ascii' codec can't decode byte 0xc3 in position 0: ordinal not in range(128)"
}
```
|
https://github.com/ansible/ansible/issues/65297
|
https://github.com/ansible/ansible/pull/65541
|
05e2e1806162393d76542a75c2520c7d61c2d855
|
2fa8f9cfd80daf32c7d222190edf7cfc7234582a
| 2019-11-26T22:34:12Z |
python
| 2019-12-25T11:54:38Z |
test/units/plugins/lookup/test_env.py
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,981 |
zabbix_host module not idempotent with host_groups
|
##### SUMMARY
Using Zabbix 4.0 Server, when a list of `host_groups` is provided to the zabbix_host module, it is not idempotent; the behavior only appears when using host groups that already exist on the Zabbix server (e.g. "Linux Servers"). If I create my own host groups on the Zabbix server, the module behaves correctly and the task does not report a change on the second run.
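The idempotency decision in the module comes down to a set comparison of group names in `check_all_properties` (see the file content below); a hedged sketch of how any mismatch between the names passed in and the names the API returns keeps the task permanently "changed" (the helper name and sample values are illustrative):
```python
# Sketch of the comparison driving the 'changed' decision; it mirrors
# set(host_groups) != set(exist_host_groups) in check_all_properties.
# The sample mismatch below is illustrative, not a confirmed root cause.
requested_groups = ['Linux Servers', 'Hypervisors']
api_returned_groups = ['Linux servers', 'Hypervisors']  # e.g. a case mismatch

def groups_differ(requested, existing):
    # Any difference in the name sets reports a change on every run.
    return set(requested) != set(existing)

print(groups_differ(requested_groups, api_returned_groups))  # True
```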
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
zabbix_host module
##### ANSIBLE VERSION
```
ansible 2.7.8
config file = /home/userx/git/somefolder/ansible-test/ansible.cfg
configured module search path = ['/home/userx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.3 (default, Apr 3 2019, 05:39:12) [GCC 8.3.0]
```
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
When the playbook is run with host groups that already exist on Zabbix (like "Linux Servers" or "Hypervisors"), the task reports a change on every run.
```yaml
- name: register zabbix host on the server
vars:
zabbix_agent_login_user: "admin"
zabbix_agent_login_password: "1234"
delegate_to: zabbix-server.example.com
zabbix_host:
server_url: "http://{{ zabbix_agent_server }}/"
login_user: "{{ zabbix_agent_login_user }}"
login_password: "{{ zabbix_agent_login_password }}"
host_name: "{{ ansible_fqdn }}"
interfaces:
- type: 1 #agent
main: 1
useip: 1
port: 10050
ip: "{{ ansible_default_ipv4['address'] }}"
dns: "{{ ansible_fqdn }}"
visible_name: "{{ ansible_fqdn }}"
status: enabled
state: present
host_groups:
- Linux Servers
- Hypervisors
link_templates:
- Template OS Linux
- Template App SSH Service
- Template Module ICMP Ping
notify: restart zabbix-agent
```
This results in
```
TASK [Gathering Facts] **********************************************************************************************************************************************************************************************
ok: [zabbix-client.example.com]
TASK [zabbix_agent : register zabbix host on the server] ************************************************************************************************************************************************************
changed: [zabbix-client.example.com -> zabbix-server.example.com]
RUNNING HANDLER [zabbix_agent : restart zabbix-agent] ***************************************************************************************************************************************************************
changed: [zabbix-client.example.com]
PLAY RECAP **********************************************************************************************************************************************************************************************************
zabbix-client.example.com : ok=3 changed=2 unreachable=0 failed=0
```
Whereas when I create my own host groups (test, test2) on the Zabbix server, the task does not report a change on the second run:
```yaml
- name: register zabbix host on the server
vars:
zabbix_agent_login_user: "admin"
zabbix_agent_login_password: "1234"
delegate_to: zabbix-server.example.com
zabbix_host:
server_url: "http://{{ zabbix_agent_server }}/"
login_user: "{{ zabbix_agent_login_user }}"
login_password: "{{ zabbix_agent_login_password }}"
host_name: "{{ ansible_fqdn }}"
interfaces:
- type: 1 #agent
main: 1
useip: 1
port: 10050
ip: "{{ ansible_default_ipv4['address'] }}"
dns: "{{ ansible_fqdn }}"
visible_name: "{{ ansible_fqdn }}"
status: enabled
state: present
host_groups:
- test
- test2
link_templates:
- Template OS Linux
- Template App SSH Service
- Template Module ICMP Ping
notify: restart zabbix-agent
```
```
TASK [Gathering Facts] **********************************************************************************************************************************************************************************************
ok: [zabbix-client.example.com]
TASK [zabbix_agent : register zabbix host on the server] ************************************************************************************************************************************************************
ok: [zabbix-client.example.com -> zabbix-server.example.com]
PLAY RECAP **********************************************************************************************************************************************************************************************************
zabbix-client.example.com : ok=2 changed=0 unreachable=0 failed=0
```
##### EXPECTED RESULTS
The module should be idempotent when adding a host to pre-existing (built-in) host groups on the Zabbix server, and should not report a change on every run.
##### ACTUAL RESULTS
When pre-existing (built-in) host groups like "Linux Servers" or "Hypervisors" are used, the task reports a change on every run.
|
https://github.com/ansible/ansible/issues/58981
|
https://github.com/ansible/ansible/pull/63860
|
2fa8f9cfd80daf32c7d222190edf7cfc7234582a
|
82c63c0ac3d910a26a3eb1038c15658254d80281
| 2019-07-11T13:34:53Z |
python
| 2019-12-25T15:30:51Z |
lib/ansible/modules/monitoring/zabbix/zabbix_host.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2013-2014, Epic Games, Inc.
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: zabbix_host
short_description: Create/update/delete Zabbix hosts
description:
- This module allows you to create, modify and delete Zabbix host entries and associated group and template data.
version_added: "2.0"
author:
- "Cove (@cove)"
- Tony Minfei Ding (!UNKNOWN)
- Harrison Gu (@harrisongu)
- Werner Dijkerman (@dj-wasabi)
- Eike Frost (@eikef)
requirements:
- "python >= 2.6"
- "zabbix-api >= 0.5.4"
options:
host_name:
description:
- Name of the host in Zabbix.
- I(host_name) is the unique identifier used and cannot be updated using this module.
required: true
visible_name:
description:
- Visible name of the host in Zabbix.
version_added: '2.3'
description:
description:
- Description of the host in Zabbix.
version_added: '2.5'
host_groups:
description:
- List of host groups the host is part of.
link_templates:
description:
- List of templates linked to the host.
inventory_mode:
description:
- Configure the inventory mode.
choices: ['automatic', 'manual', 'disabled']
version_added: '2.1'
inventory_zabbix:
description:
- Add Facts for a zabbix inventory (e.g. Tag) (see example below).
- Please review the interface documentation for more information on the supported properties
- U(https://www.zabbix.com/documentation/3.2/manual/api/reference/host/object#host_inventory)
version_added: '2.5'
status:
description:
- Monitoring status of the host.
choices: ['enabled', 'disabled']
default: 'enabled'
state:
description:
- State of the host.
- On C(present), it will create if host does not exist or update the host if the associated data is different.
- On C(absent) will remove a host if it exists.
choices: ['present', 'absent']
default: 'present'
proxy:
description:
- The name of the Zabbix proxy to be used.
interfaces:
type: list
description:
- List of interfaces to be created for the host (see example below).
- For more information, review host interface documentation at
- U(https://www.zabbix.com/documentation/4.0/manual/api/reference/hostinterface/object)
suboptions:
type:
description:
- Interface type to add
- Numerical values are also accepted for interface type
- 1 = agent
- 2 = snmp
- 3 = ipmi
- 4 = jmx
choices: ['agent', 'snmp', 'ipmi', 'jmx']
required: true
main:
type: int
description:
- Whether the interface is used as default.
- If multiple interfaces with the same type are provided, only one can be default.
- 0 (not default), 1 (default)
default: 0
choices: [0, 1]
useip:
type: int
description:
- Connect to host interface with IP address instead of DNS name.
- 0 (don't use ip), 1 (use ip)
default: 0
choices: [0, 1]
ip:
type: str
description:
- IP address used by host interface.
- Required if I(useip=1).
default: ''
dns:
type: str
description:
- DNS name of the host interface.
- Required if I(useip=0).
default: ''
port:
type: str
description:
- Port used by host interface.
- If not specified, default port for each type of interface is used
- 10050 if I(type='agent')
- 161 if I(type='snmp')
- 623 if I(type='ipmi')
- 12345 if I(type='jmx')
bulk:
type: int
description:
- Whether to use bulk SNMP requests.
- 0 (don't use bulk requests), 1 (use bulk requests)
choices: [0, 1]
default: 1
default: []
tls_connect:
description:
- Specifies what encryption to use for outgoing connections.
- Possible values, 1 (no encryption), 2 (PSK), 4 (certificate).
- Works only with >= Zabbix 3.0
default: 1
version_added: '2.5'
tls_accept:
description:
- Specifies what types of connections are allowed for incoming connections.
- The tls_accept parameter accepts values of 1 to 7
- Possible values, 1 (no encryption), 2 (PSK), 4 (certificate).
- Values can be combined, e.g. C(5) allows both unencrypted (1) and certificate-based (4) connections.
- Works only with >= Zabbix 3.0
default: 1
version_added: '2.5'
tls_psk_identity:
description:
- It is a unique name by which this specific PSK is referred to by Zabbix components
- Do not put sensitive information in the PSK identity string, it is transmitted over the network unencrypted.
- Works only with >= Zabbix 3.0
version_added: '2.5'
tls_psk:
description:
- PSK value is a hard to guess string of hexadecimal digits.
- The preshared key, at least 32 hex digits. Required if either I(tls_connect) or I(tls_accept) has PSK enabled.
- Works only with >= Zabbix 3.0
version_added: '2.5'
ca_cert:
description:
- Required certificate issuer.
- Works only with >= Zabbix 3.0
version_added: '2.5'
aliases: [ tls_issuer ]
tls_subject:
description:
- Required certificate subject.
- Works only with >= Zabbix 3.0
version_added: '2.5'
ipmi_authtype:
description:
- IPMI authentication algorithm.
- Please review the Host object documentation for more information on the supported properties
- 'https://www.zabbix.com/documentation/3.4/manual/api/reference/host/object'
- Possible values are, C(0) (none), C(1) (MD2), C(2) (MD5), C(4) (straight), C(5) (OEM), C(6) (RMCP+),
with -1 being the API default.
- Please note that the Zabbix API will treat absent settings as default when updating
any of the I(ipmi_)-options; this means that if you attempt to set any of the four
options individually, the rest will be reset to default values.
version_added: '2.5'
ipmi_privilege:
description:
- IPMI privilege level.
- Please review the Host object documentation for more information on the supported properties
- 'https://www.zabbix.com/documentation/3.4/manual/api/reference/host/object'
- Possible values are C(1) (callback), C(2) (user), C(3) (operator), C(4) (admin), C(5) (OEM), with C(2)
being the API default.
- also see the last note in the I(ipmi_authtype) documentation
version_added: '2.5'
ipmi_username:
description:
- IPMI username.
- also see the last note in the I(ipmi_authtype) documentation
version_added: '2.5'
ipmi_password:
description:
- IPMI password.
- also see the last note in the I(ipmi_authtype) documentation
version_added: '2.5'
force:
description:
- Overwrite the host configuration, even if already present.
type: bool
default: 'yes'
version_added: '2.0'
extends_documentation_fragment:
- zabbix
'''
EXAMPLES = '''
- name: Create a new host or update an existing host's info
local_action:
module: zabbix_host
server_url: http://monitor.example.com
login_user: username
login_password: password
host_name: ExampleHost
visible_name: ExampleName
description: My ExampleHost Description
host_groups:
- Example group1
- Example group2
link_templates:
- Example template1
- Example template2
status: enabled
state: present
inventory_mode: manual
inventory_zabbix:
tag: "{{ your_tag }}"
alias: "{{ your_alias }}"
notes: "Special Informations: {{ your_informations | default('None') }}"
location: "{{ your_location }}"
site_rack: "{{ your_site_rack }}"
os: "{{ your_os }}"
hardware: "{{ your_hardware }}"
ipmi_authtype: 2
ipmi_privilege: 4
ipmi_username: username
ipmi_password: password
interfaces:
- type: 1
main: 1
useip: 1
ip: 10.xx.xx.xx
dns: ""
port: "10050"
- type: 4
main: 1
useip: 1
ip: 10.xx.xx.xx
dns: ""
port: "12345"
proxy: a.zabbix.proxy
- name: Update an existing host's TLS settings
local_action:
module: zabbix_host
server_url: http://monitor.example.com
login_user: username
login_password: password
host_name: ExampleHost
visible_name: ExampleName
host_groups:
- Example group1
tls_psk_identity: test
tls_connect: 2
tls_psk: 123456789abcdef123456789abcdef12
'''
import atexit
import copy
import traceback
try:
from zabbix_api import ZabbixAPI
HAS_ZABBIX_API = True
except ImportError:
ZBX_IMP_ERR = traceback.format_exc()
HAS_ZABBIX_API = False
from distutils.version import LooseVersion
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
class Host(object):
def __init__(self, module, zbx):
self._module = module
self._zapi = zbx
self._zbx_api_version = zbx.api_version()[:5]
# check if host exists
def is_host_exist(self, host_name):
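# host.get returns a list of matching hosts; an empty list is falsy,
# so callers can use the result directly as a truth value.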
result = self._zapi.host.get({'filter': {'host': host_name}})
return result
# check if host group exists
def check_host_group_exist(self, group_names):
for group_name in group_names:
result = self._zapi.hostgroup.get({'filter': {'name': group_name}})
if not result:
self._module.fail_json(msg="Hostgroup not found: %s" % group_name)
return True
def get_template_ids(self, template_list):
template_ids = []
if template_list is None or len(template_list) == 0:
return template_ids
for template in template_list:
template_list = self._zapi.template.get({'output': 'extend', 'filter': {'host': template}})
if len(template_list) < 1:
self._module.fail_json(msg="Template not found: %s" % template)
else:
template_id = template_list[0]['templateid']
template_ids.append(template_id)
return template_ids
def add_host(self, host_name, group_ids, status, interfaces, proxy_id, visible_name, description, tls_connect,
tls_accept, tls_psk_identity, tls_psk, tls_issuer, tls_subject, ipmi_authtype, ipmi_privilege,
ipmi_username, ipmi_password):
try:
if self._module.check_mode:
self._module.exit_json(changed=True)
parameters = {'host': host_name, 'interfaces': interfaces, 'groups': group_ids, 'status': status,
'tls_connect': tls_connect, 'tls_accept': tls_accept}
if proxy_id:
parameters['proxy_hostid'] = proxy_id
if visible_name:
parameters['name'] = visible_name
if tls_psk_identity is not None:
parameters['tls_psk_identity'] = tls_psk_identity
if tls_psk is not None:
parameters['tls_psk'] = tls_psk
if tls_issuer is not None:
parameters['tls_issuer'] = tls_issuer
if tls_subject is not None:
parameters['tls_subject'] = tls_subject
if description:
parameters['description'] = description
if ipmi_authtype is not None:
parameters['ipmi_authtype'] = ipmi_authtype
if ipmi_privilege is not None:
parameters['ipmi_privilege'] = ipmi_privilege
if ipmi_username is not None:
parameters['ipmi_username'] = ipmi_username
if ipmi_password is not None:
parameters['ipmi_password'] = ipmi_password
host_list = self._zapi.host.create(parameters)
if len(host_list) >= 1:
return host_list['hostids'][0]
except Exception as e:
self._module.fail_json(msg="Failed to create host %s: %s" % (host_name, e))
def update_host(self, host_name, group_ids, status, host_id, interfaces, exist_interface_list, proxy_id,
visible_name, description, tls_connect, tls_accept, tls_psk_identity, tls_psk, tls_issuer, tls_subject, ipmi_authtype,
ipmi_privilege, ipmi_username, ipmi_password):
try:
if self._module.check_mode:
self._module.exit_json(changed=True)
parameters = {'hostid': host_id, 'groups': group_ids, 'status': status, 'tls_connect': tls_connect,
'tls_accept': tls_accept}
if proxy_id >= 0:
parameters['proxy_hostid'] = proxy_id
if visible_name:
parameters['name'] = visible_name
if tls_psk_identity:
parameters['tls_psk_identity'] = tls_psk_identity
if tls_psk:
parameters['tls_psk'] = tls_psk
if tls_issuer:
parameters['tls_issuer'] = tls_issuer
if tls_subject:
parameters['tls_subject'] = tls_subject
if description:
parameters['description'] = description
if ipmi_authtype:
parameters['ipmi_authtype'] = ipmi_authtype
if ipmi_privilege:
parameters['ipmi_privilege'] = ipmi_privilege
if ipmi_username:
parameters['ipmi_username'] = ipmi_username
if ipmi_password:
parameters['ipmi_password'] = ipmi_password
self._zapi.host.update(parameters)
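# Reconcile desired interfaces against the existing ones: an existing
# interface of the same type is updated in place, unmatched desired
# interfaces are created, and whatever is left over afterwards is
# deleted. Note that interface_list_copy aliases exist_interface_list
# rather than copying it, so a matched entry is consumed from the list
# being iterated as soon as it is paired up.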
interface_list_copy = exist_interface_list
if interfaces:
for interface in interfaces:
flag = False
interface_str = interface
for exist_interface in exist_interface_list:
interface_type = int(interface['type'])
exist_interface_type = int(exist_interface['type'])
if interface_type == exist_interface_type:
# update
interface_str['interfaceid'] = exist_interface['interfaceid']
self._zapi.hostinterface.update(interface_str)
flag = True
interface_list_copy.remove(exist_interface)
break
if not flag:
# add
interface_str['hostid'] = host_id
self._zapi.hostinterface.create(interface_str)
# remove
remove_interface_ids = []
for remove_interface in interface_list_copy:
interface_id = remove_interface['interfaceid']
remove_interface_ids.append(interface_id)
if len(remove_interface_ids) > 0:
self._zapi.hostinterface.delete(remove_interface_ids)
except Exception as e:
self._module.fail_json(msg="Failed to update host %s: %s" % (host_name, e))
def delete_host(self, host_id, host_name):
try:
if self._module.check_mode:
self._module.exit_json(changed=True)
self._zapi.host.delete([host_id])
except Exception as e:
self._module.fail_json(msg="Failed to delete host %s: %s" % (host_name, e))
# get host by host name
def get_host_by_host_name(self, host_name):
host_list = self._zapi.host.get({'output': 'extend', 'selectInventory': 'extend', 'filter': {'host': [host_name]}})
if len(host_list) < 1:
self._module.fail_json(msg="Host not found: %s" % host_name)
else:
return host_list[0]
# get proxyid by proxy name
def get_proxyid_by_proxy_name(self, proxy_name):
proxy_list = self._zapi.proxy.get({'output': 'extend', 'filter': {'host': [proxy_name]}})
if len(proxy_list) < 1:
self._module.fail_json(msg="Proxy not found: %s" % proxy_name)
else:
return int(proxy_list[0]['proxyid'])
# get group ids by group names
def get_group_ids_by_group_names(self, group_names):
group_ids = []
if self.check_host_group_exist(group_names):
group_list = self._zapi.hostgroup.get({'output': 'extend', 'filter': {'name': group_names}})
for group in group_list:
group_id = group['groupid']
group_ids.append({'groupid': group_id})
return group_ids
# get host templates by host id
def get_host_templates_by_host_id(self, host_id):
template_ids = []
template_list = self._zapi.template.get({'output': 'extend', 'hostids': host_id})
for template in template_list:
template_ids.append(template['templateid'])
return template_ids
# get host groups by host id
def get_host_groups_by_host_id(self, host_id):
exist_host_groups = []
host_groups_list = self._zapi.hostgroup.get({'output': 'extend', 'hostids': host_id})
if len(host_groups_list) >= 1:
for host_groups_name in host_groups_list:
exist_host_groups.append(host_groups_name['name'])
return exist_host_groups
# check whether the existing interfaces match the given interfaces
def check_interface_properties(self, exist_interface_list, interfaces):
interfaces_port_list = []
if interfaces is not None:
if len(interfaces) >= 1:
for interface in interfaces:
interfaces_port_list.append(str(interface['port']))
exist_interface_ports = []
if len(exist_interface_list) >= 1:
for exist_interface in exist_interface_list:
exist_interface_ports.append(str(exist_interface['port']))
if set(interfaces_port_list) != set(exist_interface_ports):
return True
for exist_interface in exist_interface_list:
exist_interface_port = str(exist_interface['port'])
for interface in interfaces:
interface_port = str(interface['port'])
if interface_port == exist_interface_port:
for key in interface.keys():
if str(exist_interface[key]) != str(interface[key]):
return True
return False
# get the status of the host
def get_host_status_by_host(self, host):
return host['status']
# check all the properties before link or clear template
def check_all_properties(self, host_id, host_groups, status, interfaces, template_ids,
exist_interfaces, host, proxy_id, visible_name, description, host_name,
inventory_mode, inventory_zabbix, tls_accept, tls_psk_identity, tls_psk,
tls_issuer, tls_subject, tls_connect, ipmi_authtype, ipmi_privilege,
ipmi_username, ipmi_password):
# get the existing host's groups
exist_host_groups = self.get_host_groups_by_host_id(host_id)
if set(host_groups) != set(exist_host_groups):
return True
# get the existing status
exist_status = self.get_host_status_by_host(host)
if int(status) != int(exist_status):
return True
# check whether the existing interfaces match the given interfaces
if self.check_interface_properties(exist_interfaces, interfaces):
return True
# get the existing templates
exist_template_ids = self.get_host_templates_by_host_id(host_id)
if set(list(template_ids)) != set(exist_template_ids):
return True
if int(host['proxy_hostid']) != int(proxy_id):
return True
# Check whether the visible_name has changed; Zabbix defaults to the technical hostname if not set.
if visible_name:
if host['name'] != visible_name:
return True
# Only compare description if it is given as a module parameter
if description:
if host['description'] != description:
return True
if inventory_mode:
if LooseVersion(self._zbx_api_version) <= LooseVersion('4.4.0'):
if host['inventory']:
if int(host['inventory']['inventory_mode']) != self.inventory_mode_numeric(inventory_mode):
return True
elif inventory_mode != 'disabled':
return True
else:
if int(host['inventory_mode']) != self.inventory_mode_numeric(inventory_mode):
return True
if inventory_zabbix:
proposed_inventory = copy.deepcopy(host['inventory'])
proposed_inventory.update(inventory_zabbix)
if proposed_inventory != host['inventory']:
return True
if tls_accept is not None and 'tls_accept' in host:
if int(host['tls_accept']) != tls_accept:
return True
if tls_psk_identity is not None and 'tls_psk_identity' in host:
if host['tls_psk_identity'] != tls_psk_identity:
return True
if tls_psk is not None and 'tls_psk' in host:
if host['tls_psk'] != tls_psk:
return True
if tls_issuer is not None and 'tls_issuer' in host:
if host['tls_issuer'] != tls_issuer:
return True
if tls_subject is not None and 'tls_subject' in host:
if host['tls_subject'] != tls_subject:
return True
if tls_connect is not None and 'tls_connect' in host:
if int(host['tls_connect']) != tls_connect:
return True
if ipmi_authtype is not None:
if int(host['ipmi_authtype']) != ipmi_authtype:
return True
if ipmi_privilege is not None:
if int(host['ipmi_privilege']) != ipmi_privilege:
return True
if ipmi_username is not None:
if host['ipmi_username'] != ipmi_username:
return True
if ipmi_password is not None:
if host['ipmi_password'] != ipmi_password:
return True
return False
# link or clear template of the host
def link_or_clear_template(self, host_id, template_id_list, tls_connect, tls_accept, tls_psk_identity, tls_psk,
tls_issuer, tls_subject, ipmi_authtype, ipmi_privilege, ipmi_username, ipmi_password):
# get host's exist template ids
exist_template_id_list = self.get_host_templates_by_host_id(host_id)
exist_template_ids = set(exist_template_id_list)
template_ids = set(template_id_list)
template_id_list = list(template_ids)
# get unlink and clear templates
templates_clear = exist_template_ids.difference(template_ids)
templates_clear_list = list(templates_clear)
request_str = {'hostid': host_id, 'templates': template_id_list, 'templates_clear': templates_clear_list,
'tls_connect': tls_connect, 'tls_accept': tls_accept, 'ipmi_authtype': ipmi_authtype,
'ipmi_privilege': ipmi_privilege, 'ipmi_username': ipmi_username, 'ipmi_password': ipmi_password}
if tls_psk_identity is not None:
request_str['tls_psk_identity'] = tls_psk_identity
if tls_psk is not None:
request_str['tls_psk'] = tls_psk
if tls_issuer is not None:
request_str['tls_issuer'] = tls_issuer
if tls_subject is not None:
request_str['tls_subject'] = tls_subject
try:
if self._module.check_mode:
self._module.exit_json(changed=True)
self._zapi.host.update(request_str)
except Exception as e:
self._module.fail_json(msg="Failed to link template to host: %s" % e)
def inventory_mode_numeric(self, inventory_mode):
if inventory_mode == "automatic":
return int(1)
elif inventory_mode == "manual":
return int(0)
elif inventory_mode == "disabled":
return int(-1)
return inventory_mode
# Update the host inventory_mode
def update_inventory_mode(self, host_id, inventory_mode):
# nothing was set, do nothing
if not inventory_mode:
return
inventory_mode = self.inventory_mode_numeric(inventory_mode)
# watch for - https://support.zabbix.com/browse/ZBX-6033
request_str = {'hostid': host_id, 'inventory_mode': inventory_mode}
try:
if self._module.check_mode:
self._module.exit_json(changed=True)
self._zapi.host.update(request_str)
except Exception as e:
self._module.fail_json(msg="Failed to set inventory_mode to host: %s" % e)
def update_inventory_zabbix(self, host_id, inventory):
if not inventory:
return
request_str = {'hostid': host_id, 'inventory': inventory}
try:
if self._module.check_mode:
self._module.exit_json(changed=True)
self._zapi.host.update(request_str)
except Exception as e:
self._module.fail_json(msg="Failed to set inventory to host: %s" % e)
def main():
module = AnsibleModule(
argument_spec=dict(
server_url=dict(type='str', required=True, aliases=['url']),
login_user=dict(type='str', required=True),
login_password=dict(type='str', required=True, no_log=True),
host_name=dict(type='str', required=True),
http_login_user=dict(type='str', required=False, default=None),
http_login_password=dict(type='str', required=False, default=None, no_log=True),
validate_certs=dict(type='bool', required=False, default=True),
host_groups=dict(type='list', required=False),
link_templates=dict(type='list', required=False),
status=dict(default="enabled", choices=['enabled', 'disabled']),
state=dict(default="present", choices=['present', 'absent']),
inventory_mode=dict(required=False, choices=['automatic', 'manual', 'disabled']),
ipmi_authtype=dict(type='int', default=None),
ipmi_privilege=dict(type='int', default=None),
ipmi_username=dict(type='str', required=False, default=None),
ipmi_password=dict(type='str', required=False, default=None, no_log=True),
tls_connect=dict(type='int', default=1),
tls_accept=dict(type='int', default=1),
tls_psk_identity=dict(type='str', required=False),
tls_psk=dict(type='str', required=False),
ca_cert=dict(type='str', required=False, aliases=['tls_issuer']),
tls_subject=dict(type='str', required=False),
inventory_zabbix=dict(required=False, type='dict'),
timeout=dict(type='int', default=10),
interfaces=dict(type='list', required=False),
force=dict(type='bool', default=True),
proxy=dict(type='str', required=False),
visible_name=dict(type='str', required=False),
description=dict(type='str', required=False)
),
supports_check_mode=True
)
if not HAS_ZABBIX_API:
module.fail_json(msg=missing_required_lib('zabbix-api', url='https://pypi.org/project/zabbix-api/'), exception=ZBX_IMP_ERR)
server_url = module.params['server_url']
login_user = module.params['login_user']
login_password = module.params['login_password']
http_login_user = module.params['http_login_user']
http_login_password = module.params['http_login_password']
validate_certs = module.params['validate_certs']
host_name = module.params['host_name']
visible_name = module.params['visible_name']
description = module.params['description']
host_groups = module.params['host_groups']
link_templates = module.params['link_templates']
inventory_mode = module.params['inventory_mode']
ipmi_authtype = module.params['ipmi_authtype']
ipmi_privilege = module.params['ipmi_privilege']
ipmi_username = module.params['ipmi_username']
ipmi_password = module.params['ipmi_password']
tls_connect = module.params['tls_connect']
tls_accept = module.params['tls_accept']
tls_psk_identity = module.params['tls_psk_identity']
tls_psk = module.params['tls_psk']
tls_issuer = module.params['ca_cert']
tls_subject = module.params['tls_subject']
inventory_zabbix = module.params['inventory_zabbix']
status = module.params['status']
state = module.params['state']
timeout = module.params['timeout']
interfaces = module.params['interfaces']
force = module.params['force']
proxy = module.params['proxy']
# convert enabled to 0; disabled to 1
status = 1 if status == "disabled" else 0
zbx = None
# login to zabbix
try:
zbx = ZabbixAPI(server_url, timeout=timeout, user=http_login_user, passwd=http_login_password,
validate_certs=validate_certs)
zbx.login(login_user, login_password)
atexit.register(zbx.logout)
except Exception as e:
module.fail_json(msg="Failed to connect to Zabbix server: %s" % e)
host = Host(module, zbx)
template_ids = []
if link_templates:
template_ids = host.get_template_ids(link_templates)
group_ids = []
if host_groups:
group_ids = host.get_group_ids_by_group_names(host_groups)
ip = ""
if interfaces:
# ensure interfaces are well-formed
for interface in interfaces:
if 'type' not in interface:
module.fail_json(msg="(interface) type needs to be specified for interface '%s'." % interface)
interfacetypes = {'agent': 1, 'snmp': 2, 'ipmi': 3, 'jmx': 4}
if interface['type'] in interfacetypes.keys():
interface['type'] = interfacetypes[interface['type']]
if interface['type'] < 1 or interface['type'] > 4:
module.fail_json(msg="Interface type can only be 1-4 for interface '%s'." % interface)
if 'useip' not in interface:
interface['useip'] = 0
if 'dns' not in interface:
if interface['useip'] == 0:
module.fail_json(msg="dns needs to be set if useip is 0 on interface '%s'." % interface)
interface['dns'] = ''
if 'ip' not in interface:
if interface['useip'] == 1:
module.fail_json(msg="ip needs to be set if useip is 1 on interface '%s'." % interface)
interface['ip'] = ''
if 'main' not in interface:
interface['main'] = 0
if 'port' in interface and not isinstance(interface['port'], str):
try:
interface['port'] = str(interface['port'])
except ValueError:
module.fail_json(msg="port should be convertable to string on interface '%s'." % interface)
if 'port' not in interface:
if interface['type'] == 1:
interface['port'] = "10050"
elif interface['type'] == 2:
interface['port'] = "161"
elif interface['type'] == 3:
interface['port'] = "623"
elif interface['type'] == 4:
interface['port'] = "12345"
if interface['type'] == 1:
ip = interface['ip']
# Use proxy specified, or set to 0
if proxy:
proxy_id = host.get_proxyid_by_proxy_name(proxy)
else:
proxy_id = 0
# check if the host exists
is_host_exist = host.is_host_exist(host_name)
if is_host_exist:
# get host id by host name
zabbix_host_obj = host.get_host_by_host_name(host_name)
host_id = zabbix_host_obj['hostid']
# If proxy is not specified as a module parameter, use the existing setting
if proxy is None:
proxy_id = int(zabbix_host_obj['proxy_hostid'])
if state == "absent":
# remove host
host.delete_host(host_id, host_name)
module.exit_json(changed=True, result="Successfully delete host %s" % host_name)
else:
if not host_groups:
# if host_groups have not been specified when updating an existing host, just
# get the group_ids from the existing host without updating them.
host_groups = host.get_host_groups_by_host_id(host_id)
group_ids = host.get_group_ids_by_group_names(host_groups)
# get existing host's interfaces
exist_interfaces = host._zapi.hostinterface.get({'output': 'extend', 'hostids': host_id})
# if no interfaces were specified with the module, start with an empty list
if not interfaces:
interfaces = []
# When force=no is specified, append existing interfaces to interfaces to update. When
# no interfaces have been specified, copy existing interfaces as specified from the API.
# Do the same with templates and host groups.
if not force or not interfaces:
for interface in copy.deepcopy(exist_interfaces):
# remove values not used during hostinterface.add/update calls
for key in tuple(interface.keys()):
if key in ['interfaceid', 'hostid', 'bulk']:
interface.pop(key, None)
for index in interface.keys():
if index in ['useip', 'main', 'type']:
interface[index] = int(interface[index])
if interface not in interfaces:
interfaces.append(interface)
if not force or link_templates is None:
template_ids = list(set(template_ids + host.get_host_templates_by_host_id(host_id)))
if not force:
for group_id in host.get_group_ids_by_group_names(host.get_host_groups_by_host_id(host_id)):
if group_id not in group_ids:
group_ids.append(group_id)
# update host
if host.check_all_properties(host_id, host_groups, status, interfaces, template_ids,
exist_interfaces, zabbix_host_obj, proxy_id, visible_name,
description, host_name, inventory_mode, inventory_zabbix,
tls_accept, tls_psk_identity, tls_psk, tls_issuer, tls_subject, tls_connect,
ipmi_authtype, ipmi_privilege, ipmi_username, ipmi_password):
host.update_host(host_name, group_ids, status, host_id,
interfaces, exist_interfaces, proxy_id, visible_name, description, tls_connect, tls_accept,
tls_psk_identity, tls_psk, tls_issuer, tls_subject, ipmi_authtype, ipmi_privilege, ipmi_username, ipmi_password)
host.link_or_clear_template(host_id, template_ids, tls_connect, tls_accept, tls_psk_identity,
tls_psk, tls_issuer, tls_subject, ipmi_authtype, ipmi_privilege,
ipmi_username, ipmi_password)
host.update_inventory_mode(host_id, inventory_mode)
host.update_inventory_zabbix(host_id, inventory_zabbix)
module.exit_json(changed=True,
result="Successfully update host %s (%s) and linked with template '%s'"
% (host_name, ip, link_templates))
else:
module.exit_json(changed=False)
else:
if state == "absent":
# the host is already deleted.
module.exit_json(changed=False)
if not group_ids:
module.fail_json(msg="Specify at least one group for creating host '%s'." % host_name)
if not interfaces:
module.fail_json(msg="Specify at least one interface for creating host '%s'." % host_name)
# create host
host_id = host.add_host(host_name, group_ids, status, interfaces, proxy_id, visible_name, description, tls_connect,
tls_accept, tls_psk_identity, tls_psk, tls_issuer, tls_subject, ipmi_authtype, ipmi_privilege,
ipmi_username, ipmi_password)
host.link_or_clear_template(host_id, template_ids, tls_connect, tls_accept, tls_psk_identity,
tls_psk, tls_issuer, tls_subject, ipmi_authtype, ipmi_privilege, ipmi_username, ipmi_password)
host.update_inventory_mode(host_id, inventory_mode)
host.update_inventory_zabbix(host_id, inventory_zabbix)
module.exit_json(changed=True, result="Successfully added host %s (%s) and linked with template '%s'" % (
host_name, ip, link_templates))
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,352 |
Some functions in the Homebrew module ignore the check_mode option (with destructive changes)
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
- In the `homebrew` module, the following functions ignore the check_mode option (see the sketch after this list):
- Even if check_mode is enabled, Homebrew and formulae are actually updated.
- update_homebrew
- upgrade_all
- I'm working on a PR.
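A minimal sketch of the guard these functions would need, following the usual AnsibleModule check-mode convention (function name, messages, and structure are illustrative, not the actual homebrew.py implementation):
```python
# Illustrative check_mode guard for an update-style operation;
# not the actual homebrew.py code.
def update_homebrew(module):
    if module.check_mode:
        # Report the would-be change without actually running 'brew update'.
        module.exit_json(changed=True, msg='Homebrew would be updated.')
    rc, out, err = module.run_command(['brew', 'update'])
    if rc != 0:
        module.fail_json(msg='Failed to update Homebrew: %s' % err)
    module.exit_json(changed=True, msg='Homebrew updated successfully.')
```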
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
Homebrew module
`ansible/modules/packaging/os/homebrew.py`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
$ ansible --version
ansible 2.9.1
config file = None
configured module search path = ['/Users/Saori_/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.5 (default, Nov 1 2019, 02:16:32) [Clang 11.0.0 (clang-1100.0.33.8)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
$ ansible-config dump --only-changed
$
# no changed
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
```
$ sw_vers
ProductName: Mac OS X
ProductVersion: 10.14.6
BuildVersion: 18G87
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
`$ ansible-playbook -C playbook.yml -vvv `
```yaml
- hosts: localhost
tasks:
- name: Update homebrew
homebrew:
update_homebrew: yes
check_mode: true
- name: Upgrade homebrew
homebrew:
upgrade_all: yes
check_mode: true
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Nothing should actually be updated or upgraded while check mode is enabled.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
brew config(before playbook run)
```paste below
HOMEBREW_VERSION: 2.1.16
ORIGIN: https://github.com/Homebrew/brew
HEAD: 3aa7624284c43180a3f3a71aeaa9263092868e12
Last commit: 3 weeks ago
Core tap ORIGIN: https://github.com/Homebrew/homebrew-core
Core tap HEAD: 7e82bc5693da5d96d5f1ceb9e03265d3f7c5c303
Core tap last commit: 2 weeks ago
HOMEBREW_PREFIX: /usr/local
CPU: quad-core 64-bit broadwell
Homebrew Ruby: 2.6.3 => /usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.6.3/bin/ruby
Clang: 10.0 build 1001
Git: 2.24.0 => /usr/local/bin/git
Curl: 7.54.0 => /usr/bin/curl
Java: 11.0.4
macOS: 10.14.6-x86_64
CLT: 10.3.0.0.1.1562985497
Xcode: 10.3
CLT headers: 10.3.0.0.1.1562985497
```
brew config(after playbook run)
```
HOMEBREW_VERSION: 2.2.0
ORIGIN: https://github.com/Homebrew/brew
HEAD: 7d7de295dfbc5e581106e2b1f674496b5e25a773
Last commit: 28 hours ago
Core tap ORIGIN: https://github.com/Homebrew/homebrew-core
Core tap HEAD: 808d309b0b6d22a55ea0b27f8923d264119ab3b7
Core tap last commit: 7 hours ago
HOMEBREW_PREFIX: /usr/local
CPU: quad-core 64-bit broadwell
Homebrew Ruby: 2.6.3 => /usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.6.3/bin/ruby
Clang: 10.0 build 1001
Git: 2.24.0 => /usr/local/bin/git
Curl: 7.54.0 => /usr/bin/curl
Java: 11.0.4
macOS: 10.14.6-x86_64
CLT: 10.3.0.0.1.1562985497
Xcode: 10.3
CLT headers: 10.3.0.0.1.1562985497
```
brew info Formula(ex. tflint) (before playbook run)
```
tflint: stable 0.13.1 (bottled), HEAD
Linter for Terraform files
https://github.com/wata727/tflint
/usr/local/Cellar/tflint/0.12.1 (6 files, 51.3MB) *
Poured from bottle on 2019-11-09 at 15:10:31
From: https://github.com/Homebrew/homebrew-core/blob/master/Formula/tflint.rb
==> Dependencies
Build: go
==> Options
--HEAD
Install HEAD version
==> Analytics
install: 1,935 (30 days), 6,475 (90 days), 9,307 (365 days)
install-on-request: 1,934 (30 days), 6,469 (90 days), 9,295 (365 days)
build-error: 0 (30 days)
```
brew info Formula(ex. tflint) (after playbook run)
```
tflint: stable 0.13.1 (bottled), HEAD
Linter for Terraform files
https://github.com/wata727/tflint
/usr/local/Cellar/tflint/0.13.1 (6 files, 51.9MB) *
Poured from bottle on 2019-11-28 at 23:13:47
From: https://github.com/Homebrew/homebrew-core/blob/master/Formula/tflint.rb
==> Dependencies
Build: go
==> Options
--HEAD
Install HEAD version
==> Analytics
install: 1,935 (30 days), 6,475 (90 days), 9,307 (365 days)
install-on-request: 1,934 (30 days), 6,469 (90 days), 9,295 (365 days)
build-error: 0 (30 days)
```
Playbook Log(Update)
```
TASK [Update homebrew] ************************************************************************
task path: /Users/Saori_/Works/ansible-contribute-test/test.yml:4
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: Saori_
<127.0.0.1> EXEC /bin/sh -c 'echo ~Saori_ && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/Saori_/.ansible/tmp/ansible-tmp-1574949662.366273-222861329502964 `" && echo ansible-tmp-1574949662.366273-222861329502964="` echo /Users/Saori_/.ansible/tmp/ansible-tmp-1574949662.366273-222861329502964 `" ) && sleep 0'
Using module file /usr/local/Cellar/ansible/2.9.0/libexec/lib/python3.7/site-packages/ansible/modules/packaging/os/homebrew.py
<127.0.0.1> PUT /Users/Saori_/.ansible/tmp/ansible-local-44973hpyrkb_6/tmpisalekvv TO /Users/Saori_/.ansible/tmp/ansible-tmp-1574949662.366273-222861329502964/AnsiballZ_homebrew.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/Saori_/.ansible/tmp/ansible-tmp-1574949662.366273-222861329502964/ /Users/Saori_/.ansible/tmp/ansible-tmp-1574949662.366273-222861329502964/AnsiballZ_homebrew.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/local/Cellar/ansible/2.9.0/libexec/bin/python3.7 /Users/Saori_/.ansible/tmp/ansible-tmp-1574949662.366273-222861329502964/AnsiballZ_homebrew.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/Saori_/.ansible/tmp/ansible-tmp-1574949662.366273-222861329502964/ > /dev/null 2>&1 && sleep 0'
changed: [localhost] => {
"changed": true,
"invocation": {
"module_args": {
"install_options": [],
"name": null,
"path": "/usr/local/bin",
"state": "present",
"update_homebrew": true,
"upgrade_all": false
}
},
"msg": "Homebrew updated successfully."
}
META: ran handlers
META: ran handlers
```
Playbook Log(Upgrade)
```
TASK [Upgrade homebrew] *****************************************************************************************************************
task path: /Users/Saori_/Works/ansible-contribute-test/test.yml:8
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: Saori_
<127.0.0.1> EXEC /bin/sh -c 'echo ~Saori_ && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/Saori_/.ansible/tmp/ansible-tmp-1574950346.5928059-177043868421960 `" && echo ansible-tmp-1574950346.5928059-177043868421960="` echo /Users/Saori_/.ansible/tmp/ansible-tmp-1574950346.5928059-177043868421960 `" ) && sleep 0'
Using module file /usr/local/Cellar/ansible/2.9.0/libexec/lib/python3.7/site-packages/ansible/modules/packaging/os/homebrew.py
<127.0.0.1> PUT /Users/Saori_/.ansible/tmp/ansible-local-57920gqeg5sin/tmp1t0wyev0 TO /Users/Saori_/.ansible/tmp/ansible-tmp-1574950346.5928059-177043868421960/AnsiballZ_homebrew.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/Saori_/.ansible/tmp/ansible-tmp-1574950346.5928059-177043868421960/ /Users/Saori_/.ansible/tmp/ansible-tmp-1574950346.5928059-177043868421960/AnsiballZ_homebrew.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/local/Cellar/ansible/2.9.0/libexec/bin/python3.7 /Users/Saori_/.ansible/tmp/ansible-tmp-1574950346.5928059-177043868421960/AnsiballZ_homebrew.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/Saori_/.ansible/tmp/ansible-tmp-1574950346.5928059-177043868421960/ > /dev/null 2>&1 && sleep 0'
changed: [localhost] => {
"changed": true,
"invocation": {
"module_args": {
"install_options": [],
"name": null,
"path": "/usr/local/bin",
"state": "present",
"update_homebrew": false,
"upgrade_all": true
}
},
"msg": "Homebrew upgraded."
}
META: ran handlers
META: ran handlers
```
|
https://github.com/ansible/ansible/issues/65352
|
https://github.com/ansible/ansible/pull/65387
|
4d1a57453e2bd1134d8bf075e21705d60b9f58ab
|
c35a7b88d4dc04b86b2462b2be231b6a39649714
| 2019-11-28T14:43:23Z |
python
| 2019-12-27T05:15:34Z |
changelogs/fragments/65387-homebrew_check_mode_option.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,352 |
Some functions in the Homebrew module ignore the check_mode option (with destructive changes)
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
- In the `homebrew` module, the following functions ignore the check_mode option.
- Even if check_mode is enabled, Homebrew and formulae are actually updated.
- update_homebrew
- upgrade_all
- I'm working on a PR.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
Homebrew module
`ansible/modules/packaging/os/homebrew.py`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
$ ansible --version
ansible 2.9.1
config file = None
configured module search path = ['/Users/Saori_/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.5 (default, Nov 1 2019, 02:16:32) [Clang 11.0.0 (clang-1100.0.33.8)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
$ ansible-config dump --only-changed
$
# no changed
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
```
$ sw_vers
ProductName: Mac OS X
ProductVersion: 10.14.6
BuildVersion: 18G87
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
`$ ansible-playbook -C playbook.yml -vvv `
```yaml
- hosts: localhost
tasks:
- name: Update homebrew
homebrew:
update_homebrew: yes
check_mode: true
- name: Upgrade homebrew
homebrew:
upgrade_all: yes
check_mode: true
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Homebrew wouldn't actually be updated or upgraded.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
brew config(before playbook run)
```paste below
HOMEBREW_VERSION: 2.1.16
ORIGIN: https://github.com/Homebrew/brew
HEAD: 3aa7624284c43180a3f3a71aeaa9263092868e12
Last commit: 3 weeks ago
Core tap ORIGIN: https://github.com/Homebrew/homebrew-core
Core tap HEAD: 7e82bc5693da5d96d5f1ceb9e03265d3f7c5c303
Core tap last commit: 2 weeks ago
HOMEBREW_PREFIX: /usr/local
CPU: quad-core 64-bit broadwell
Homebrew Ruby: 2.6.3 => /usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.6.3/bin/ruby
Clang: 10.0 build 1001
Git: 2.24.0 => /usr/local/bin/git
Curl: 7.54.0 => /usr/bin/curl
Java: 11.0.4
macOS: 10.14.6-x86_64
CLT: 10.3.0.0.1.1562985497
Xcode: 10.3
CLT headers: 10.3.0.0.1.1562985497
```
brew config(after playbook run)
```
HOMEBREW_VERSION: 2.2.0
ORIGIN: https://github.com/Homebrew/brew
HEAD: 7d7de295dfbc5e581106e2b1f674496b5e25a773
Last commit: 28 hours ago
Core tap ORIGIN: https://github.com/Homebrew/homebrew-core
Core tap HEAD: 808d309b0b6d22a55ea0b27f8923d264119ab3b7
Core tap last commit: 7 hours ago
HOMEBREW_PREFIX: /usr/local
CPU: quad-core 64-bit broadwell
Homebrew Ruby: 2.6.3 => /usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.6.3/bin/ruby
Clang: 10.0 build 1001
Git: 2.24.0 => /usr/local/bin/git
Curl: 7.54.0 => /usr/bin/curl
Java: 11.0.4
macOS: 10.14.6-x86_64
CLT: 10.3.0.0.1.1562985497
Xcode: 10.3
CLT headers: 10.3.0.0.1.1562985497
```
brew info Formula(ex. tflint) (before playbook run)
```
tflint: stable 0.13.1 (bottled), HEAD
Linter for Terraform files
https://github.com/wata727/tflint
/usr/local/Cellar/tflint/0.12.1 (6 files, 51.3MB) *
Poured from bottle on 2019-11-09 at 15:10:31
From: https://github.com/Homebrew/homebrew-core/blob/master/Formula/tflint.rb
==> Dependencies
Build: go
==> Options
--HEAD
Install HEAD version
==> Analytics
install: 1,935 (30 days), 6,475 (90 days), 9,307 (365 days)
install-on-request: 1,934 (30 days), 6,469 (90 days), 9,295 (365 days)
build-error: 0 (30 days)
```
brew info Formula(ex. tflint) (after playbook run)
```
tflint: stable 0.13.1 (bottled), HEAD
Linter for Terraform files
https://github.com/wata727/tflint
/usr/local/Cellar/tflint/0.13.1 (6 files, 51.9MB) *
Poured from bottle on 2019-11-28 at 23:13:47
From: https://github.com/Homebrew/homebrew-core/blob/master/Formula/tflint.rb
==> Dependencies
Build: go
==> Options
--HEAD
Install HEAD version
==> Analytics
install: 1,935 (30 days), 6,475 (90 days), 9,307 (365 days)
install-on-request: 1,934 (30 days), 6,469 (90 days), 9,295 (365 days)
build-error: 0 (30 days)
```
Playbook Log(Update)
```
TASK [Update homebrew] ************************************************************************
task path: /Users/Saori_/Works/ansible-contribute-test/test.yml:4
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: Saori_
<127.0.0.1> EXEC /bin/sh -c 'echo ~Saori_ && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/Saori_/.ansible/tmp/ansible-tmp-1574949662.366273-222861329502964 `" && echo ansible-tmp-1574949662.366273-222861329502964="` echo /Users/Saori_/.ansible/tmp/ansible-tmp-1574949662.366273-222861329502964 `" ) && sleep 0'
Using module file /usr/local/Cellar/ansible/2.9.0/libexec/lib/python3.7/site-packages/ansible/modules/packaging/os/homebrew.py
<127.0.0.1> PUT /Users/Saori_/.ansible/tmp/ansible-local-44973hpyrkb_6/tmpisalekvv TO /Users/Saori_/.ansible/tmp/ansible-tmp-1574949662.366273-222861329502964/AnsiballZ_homebrew.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/Saori_/.ansible/tmp/ansible-tmp-1574949662.366273-222861329502964/ /Users/Saori_/.ansible/tmp/ansible-tmp-1574949662.366273-222861329502964/AnsiballZ_homebrew.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/local/Cellar/ansible/2.9.0/libexec/bin/python3.7 /Users/Saori_/.ansible/tmp/ansible-tmp-1574949662.366273-222861329502964/AnsiballZ_homebrew.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/Saori_/.ansible/tmp/ansible-tmp-1574949662.366273-222861329502964/ > /dev/null 2>&1 && sleep 0'
changed: [localhost] => {
"changed": true,
"invocation": {
"module_args": {
"install_options": [],
"name": null,
"path": "/usr/local/bin",
"state": "present",
"update_homebrew": true,
"upgrade_all": false
}
},
"msg": "Homebrew updated successfully."
}
META: ran handlers
META: ran handlers
```
Playbook Log(Upgrade)
```
TASK [Upgrade homebrew] *****************************************************************************************************************
task path: /Users/Saori_/Works/ansible-contribute-test/test.yml:8
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: Saori_
<127.0.0.1> EXEC /bin/sh -c 'echo ~Saori_ && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/Saori_/.ansible/tmp/ansible-tmp-1574950346.5928059-177043868421960 `" && echo ansible-tmp-1574950346.5928059-177043868421960="` echo /Users/Saori_/.ansible/tmp/ansible-tmp-1574950346.5928059-177043868421960 `" ) && sleep 0'
Using module file /usr/local/Cellar/ansible/2.9.0/libexec/lib/python3.7/site-packages/ansible/modules/packaging/os/homebrew.py
<127.0.0.1> PUT /Users/Saori_/.ansible/tmp/ansible-local-57920gqeg5sin/tmp1t0wyev0 TO /Users/Saori_/.ansible/tmp/ansible-tmp-1574950346.5928059-177043868421960/AnsiballZ_homebrew.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/Saori_/.ansible/tmp/ansible-tmp-1574950346.5928059-177043868421960/ /Users/Saori_/.ansible/tmp/ansible-tmp-1574950346.5928059-177043868421960/AnsiballZ_homebrew.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/local/Cellar/ansible/2.9.0/libexec/bin/python3.7 /Users/Saori_/.ansible/tmp/ansible-tmp-1574950346.5928059-177043868421960/AnsiballZ_homebrew.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/Saori_/.ansible/tmp/ansible-tmp-1574950346.5928059-177043868421960/ > /dev/null 2>&1 && sleep 0'
changed: [localhost] => {
"changed": true,
"invocation": {
"module_args": {
"install_options": [],
"name": null,
"path": "/usr/local/bin",
"state": "present",
"update_homebrew": false,
"upgrade_all": true
}
},
"msg": "Homebrew upgraded."
}
META: ran handlers
META: ran handlers
```
|
https://github.com/ansible/ansible/issues/65352
|
https://github.com/ansible/ansible/pull/65387
|
4d1a57453e2bd1134d8bf075e21705d60b9f58ab
|
c35a7b88d4dc04b86b2462b2be231b6a39649714
| 2019-11-28T14:43:23Z |
python
| 2019-12-27T05:15:34Z |
lib/ansible/modules/packaging/os/homebrew.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2013, Andrew Dunham <[email protected]>
# (c) 2013, Daniel Jaouen <[email protected]>
# (c) 2015, Indrajit Raychaudhuri <[email protected]>
#
# Based on macports (Jimmy Tang <[email protected]>)
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: homebrew
author:
- "Indrajit Raychaudhuri (@indrajitr)"
- "Daniel Jaouen (@danieljaouen)"
- "Andrew Dunham (@andrew-d)"
requirements:
- "python >= 2.6"
- homebrew must already be installed on the target system
short_description: Package manager for Homebrew
description:
- Manages Homebrew packages
version_added: "1.1"
options:
name:
description:
- list of names of packages to install/remove
aliases: ['pkg', 'package', 'formula']
type: list
elements: str
path:
description:
- "A ':' separated list of paths to search for 'brew' executable.
Since a package (I(formula) in homebrew parlance) location is prefixed relative to the actual path of I(brew) command,
providing an alternative I(brew) path enables managing different set of packages in an alternative location in the system."
default: '/usr/local/bin'
state:
description:
- state of the package
choices: [ 'head', 'latest', 'present', 'absent', 'linked', 'unlinked' ]
default: present
update_homebrew:
description:
- update homebrew itself first
type: bool
default: 'no'
aliases: ['update-brew']
upgrade_all:
description:
- upgrade all homebrew packages
type: bool
default: 'no'
aliases: ['upgrade']
install_options:
description:
- options flags to install a package
aliases: ['options']
version_added: "1.4"
notes:
- When used with a `loop:`, each package is processed individually;
it is much more efficient to pass the list directly to the `name` option.
'''
EXAMPLES = '''
# Install formula foo with 'brew' in default path (C(/usr/local/bin))
- homebrew:
name: foo
state: present
# Install formula foo with 'brew' in alternate path C(/my/other/location/bin)
- homebrew:
name: foo
path: /my/other/location/bin
state: present
# Update homebrew first and install formula foo with 'brew' in default path
- homebrew:
name: foo
state: present
update_homebrew: yes
# Update homebrew first and upgrade formula foo to latest available with 'brew' in default path
- homebrew:
name: foo
state: latest
update_homebrew: yes
# Update homebrew and upgrade all packages
- homebrew:
update_homebrew: yes
upgrade_all: yes
# Miscellaneous other examples
- homebrew:
name: foo
state: head
- homebrew:
name: foo
state: linked
- homebrew:
name: foo
state: absent
- homebrew:
name: foo,bar
state: absent
- homebrew:
name: foo
state: present
install_options: with-baz,enable-debug
'''
import os.path
import re
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.six import iteritems, string_types
# exceptions -------------------------------------------------------------- {{{
class HomebrewException(Exception):
pass
# /exceptions ------------------------------------------------------------- }}}
# utils ------------------------------------------------------------------- {{{
def _create_regex_group(s):
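# Turn a block of "one allowed character per line, '#' starts a comment"
# text into a compiled negated character class: [^<allowed chars>].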
lines = (line.strip() for line in s.split('\n') if line.strip())
chars = filter(None, (line.split('#')[0].strip() for line in lines))
group = r'[^' + r''.join(chars) + r']'
return re.compile(group)
# /utils ------------------------------------------------------------------ }}}
class Homebrew(object):
'''A class to manage Homebrew packages.'''
# class regexes ------------------------------------------------ {{{
VALID_PATH_CHARS = r'''
\w # alphanumeric characters (i.e., [a-zA-Z0-9_])
\s # spaces
: # colons
{sep} # the OS-specific path separator
. # dots
- # dashes
'''.format(sep=os.path.sep)
VALID_BREW_PATH_CHARS = r'''
\w # alphanumeric characters (i.e., [a-zA-Z0-9_])
\s # spaces
{sep} # the OS-specific path separator
. # dots
- # dashes
'''.format(sep=os.path.sep)
VALID_PACKAGE_CHARS = r'''
\w # alphanumeric characters (i.e., [a-zA-Z0-9_])
. # dots
/ # slash (for taps)
\+ # plusses
- # dashes
: # colons (for URLs)
@ # at-sign
'''
INVALID_PATH_REGEX = _create_regex_group(VALID_PATH_CHARS)
INVALID_BREW_PATH_REGEX = _create_regex_group(VALID_BREW_PATH_CHARS)
INVALID_PACKAGE_REGEX = _create_regex_group(VALID_PACKAGE_CHARS)
# /class regexes ----------------------------------------------- }}}
# class validations -------------------------------------------- {{{
@classmethod
def valid_path(cls, path):
'''
`path` must be one of:
- list of paths
- a string containing only:
- alphanumeric characters
- dashes
- dots
- spaces
- colons
- os.path.sep
'''
if isinstance(path, string_types):
return not cls.INVALID_PATH_REGEX.search(path)
try:
iter(path)
except TypeError:
return False
else:
paths = path
return all(cls.valid_brew_path(path_) for path_ in paths)
@classmethod
def valid_brew_path(cls, brew_path):
'''
`brew_path` must be one of:
- None
- a string containing only:
- alphanumeric characters
- dashes
- dots
- spaces
- os.path.sep
'''
if brew_path is None:
return True
return (
isinstance(brew_path, string_types)
and not cls.INVALID_BREW_PATH_REGEX.search(brew_path)
)
@classmethod
def valid_package(cls, package):
'''A valid package is either None or alphanumeric.'''
if package is None:
return True
return (
isinstance(package, string_types)
and not cls.INVALID_PACKAGE_REGEX.search(package)
)
@classmethod
def valid_state(cls, state):
'''
A valid state is one of:
- None
- installed
- upgraded
- head
- linked
- unlinked
- absent
'''
if state is None:
return True
else:
return (
isinstance(state, string_types)
and state.lower() in (
'installed',
'upgraded',
'head',
'linked',
'unlinked',
'absent',
)
)
@classmethod
def valid_module(cls, module):
'''A valid module is an instance of AnsibleModule.'''
return isinstance(module, AnsibleModule)
# /class validations ------------------------------------------- }}}
# class properties --------------------------------------------- {{{
@property
def module(self):
return self._module
@module.setter
def module(self, module):
if not self.valid_module(module):
self._module = None
self.failed = True
self.message = 'Invalid module: {0}.'.format(module)
raise HomebrewException(self.message)
else:
self._module = module
return module
@property
def path(self):
return self._path
@path.setter
def path(self, path):
if not self.valid_path(path):
self._path = []
self.failed = True
self.message = 'Invalid path: {0}.'.format(path)
raise HomebrewException(self.message)
else:
if isinstance(path, string_types):
self._path = path.split(':')
else:
self._path = path
return path
@property
def brew_path(self):
return self._brew_path
@brew_path.setter
def brew_path(self, brew_path):
if not self.valid_brew_path(brew_path):
self._brew_path = None
self.failed = True
self.message = 'Invalid brew_path: {0}.'.format(brew_path)
raise HomebrewException(self.message)
else:
self._brew_path = brew_path
return brew_path
@property
def params(self):
return self._params
@params.setter
def params(self, params):
self._params = self.module.params
return self._params
@property
def current_package(self):
return self._current_package
@current_package.setter
def current_package(self, package):
if not self.valid_package(package):
self._current_package = None
self.failed = True
self.message = 'Invalid package: {0}.'.format(package)
raise HomebrewException(self.message)
else:
self._current_package = package
return package
# /class properties -------------------------------------------- }}}
def __init__(self, module, path, packages=None, state=None,
update_homebrew=False, upgrade_all=False,
install_options=None):
if not install_options:
install_options = list()
self._setup_status_vars()
self._setup_instance_vars(module=module, path=path, packages=packages,
state=state, update_homebrew=update_homebrew,
upgrade_all=upgrade_all,
install_options=install_options, )
self._prep()
# prep --------------------------------------------------------- {{{
def _setup_status_vars(self):
self.failed = False
self.changed = False
self.changed_count = 0
self.unchanged_count = 0
self.message = ''
def _setup_instance_vars(self, **kwargs):
for key, val in iteritems(kwargs):
setattr(self, key, val)
def _prep(self):
self._prep_brew_path()
def _prep_brew_path(self):
if not self.module:
self.brew_path = None
self.failed = True
self.message = 'AnsibleModule not set.'
raise HomebrewException(self.message)
self.brew_path = self.module.get_bin_path(
'brew',
required=True,
opt_dirs=self.path,
)
if not self.brew_path:
self.brew_path = None
self.failed = True
self.message = 'Unable to locate homebrew executable.'
raise HomebrewException('Unable to locate homebrew executable.')
return self.brew_path
def _status(self):
return (self.failed, self.changed, self.message)
# /prep -------------------------------------------------------- }}}
def run(self):
try:
self._run()
except HomebrewException:
pass
if not self.failed and (self.changed_count + self.unchanged_count > 1):
self.message = "Changed: %d, Unchanged: %d" % (
self.changed_count,
self.unchanged_count,
)
(failed, changed, message) = self._status()
return (failed, changed, message)
# checks ------------------------------------------------------- {{{
def _current_package_is_installed(self):
if not self.valid_package(self.current_package):
self.failed = True
self.message = 'Invalid package: {0}.'.format(self.current_package)
raise HomebrewException(self.message)
cmd = [
"{brew_path}".format(brew_path=self.brew_path),
"info",
self.current_package,
]
rc, out, err = self.module.run_command(cmd)
for line in out.split('\n'):
if (
re.search(r'Built from source', line)
or re.search(r'Poured from bottle', line)
):
return True
return False
def _current_package_is_outdated(self):
if not self.valid_package(self.current_package):
return False
rc, out, err = self.module.run_command([
self.brew_path,
'outdated',
self.current_package,
])
return rc != 0
def _current_package_is_installed_from_head(self):
if not Homebrew.valid_package(self.current_package):
return False
elif not self._current_package_is_installed():
return False
rc, out, err = self.module.run_command([
self.brew_path,
'info',
self.current_package,
])
try:
version_info = [line for line in out.split('\n') if line][0]
except IndexError:
return False
return version_info.split(' ')[-1] == 'HEAD'
# /checks ------------------------------------------------------ }}}
# commands ----------------------------------------------------- {{{
def _run(self):
if self.update_homebrew:
self._update_homebrew()
if self.upgrade_all:
self._upgrade_all()
if self.packages:
if self.state == 'installed':
return self._install_packages()
elif self.state == 'upgraded':
return self._upgrade_packages()
elif self.state == 'head':
return self._install_packages()
elif self.state == 'linked':
return self._link_packages()
elif self.state == 'unlinked':
return self._unlink_packages()
elif self.state == 'absent':
return self._uninstall_packages()
# updated -------------------------------- {{{
def _update_homebrew(self):
rc, out, err = self.module.run_command([
self.brew_path,
'update',
])
if rc == 0:
if out and isinstance(out, string_types):
already_updated = any(
re.search(r'Already up-to-date.', s.strip(), re.IGNORECASE)
for s in out.split('\n')
if s
)
if not already_updated:
self.changed = True
self.message = 'Homebrew updated successfully.'
else:
self.message = 'Homebrew already up-to-date.'
return True
else:
self.failed = True
self.message = err.strip()
raise HomebrewException(self.message)
# /updated ------------------------------- }}}
# _upgrade_all --------------------------- {{{
def _upgrade_all(self):
rc, out, err = self.module.run_command([
self.brew_path,
'upgrade',
])
if rc == 0:
if not out:
self.message = 'Homebrew packages already upgraded.'
else:
self.changed = True
self.message = 'Homebrew upgraded.'
return True
else:
self.failed = True
self.message = err.strip()
raise HomebrewException(self.message)
# /_upgrade_all -------------------------- }}}
# installed ------------------------------ {{{
def _install_current_package(self):
if not self.valid_package(self.current_package):
self.failed = True
self.message = 'Invalid package: {0}.'.format(self.current_package)
raise HomebrewException(self.message)
if self._current_package_is_installed():
self.unchanged_count += 1
self.message = 'Package already installed: {0}'.format(
self.current_package,
)
return True
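# Check mode: report the would-be change, then use HomebrewException as an
# early exit; run() catches it and `failed` stays False.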
if self.module.check_mode:
self.changed = True
self.message = 'Package would be installed: {0}'.format(
self.current_package
)
raise HomebrewException(self.message)
if self.state == 'head':
head = '--HEAD'
else:
head = None
opts = (
[self.brew_path, 'install']
+ self.install_options
+ [self.current_package, head]
)
cmd = [opt for opt in opts if opt]
rc, out, err = self.module.run_command(cmd)
if self._current_package_is_installed():
self.changed_count += 1
self.changed = True
self.message = 'Package installed: {0}'.format(self.current_package)
return True
else:
self.failed = True
self.message = err.strip()
raise HomebrewException(self.message)
def _install_packages(self):
for package in self.packages:
self.current_package = package
self._install_current_package()
return True
# /installed ----------------------------- }}}
# upgraded ------------------------------- {{{
def _upgrade_current_package(self):
command = 'upgrade'
if not self.valid_package(self.current_package):
self.failed = True
self.message = 'Invalid package: {0}.'.format(self.current_package)
raise HomebrewException(self.message)
if not self._current_package_is_installed():
command = 'install'
if self._current_package_is_installed() and not self._current_package_is_outdated():
self.message = 'Package is already upgraded: {0}'.format(
self.current_package,
)
self.unchanged_count += 1
return True
if self.module.check_mode:
self.changed = True
self.message = 'Package would be upgraded: {0}'.format(
self.current_package
)
raise HomebrewException(self.message)
opts = (
[self.brew_path, command]
+ self.install_options
+ [self.current_package]
)
cmd = [opt for opt in opts if opt]
rc, out, err = self.module.run_command(cmd)
if self._current_package_is_installed() and not self._current_package_is_outdated():
self.changed_count += 1
self.changed = True
self.message = 'Package upgraded: {0}'.format(self.current_package)
return True
else:
self.failed = True
self.message = err.strip()
raise HomebrewException(self.message)
def _upgrade_all_packages(self):
opts = (
[self.brew_path, 'upgrade']
+ self.install_options
)
cmd = [opt for opt in opts if opt]
rc, out, err = self.module.run_command(cmd)
if rc == 0:
self.changed = True
self.message = 'All packages upgraded.'
return True
else:
self.failed = True
self.message = err.strip()
raise HomebrewException(self.message)
def _upgrade_packages(self):
if not self.packages:
self._upgrade_all_packages()
else:
for package in self.packages:
self.current_package = package
self._upgrade_current_package()
return True
# /upgraded ------------------------------ }}}
# uninstalled ---------------------------- {{{
def _uninstall_current_package(self):
if not self.valid_package(self.current_package):
self.failed = True
self.message = 'Invalid package: {0}.'.format(self.current_package)
raise HomebrewException(self.message)
if not self._current_package_is_installed():
self.unchanged_count += 1
self.message = 'Package already uninstalled: {0}'.format(
self.current_package,
)
return True
if self.module.check_mode:
self.changed = True
self.message = 'Package would be uninstalled: {0}'.format(
self.current_package
)
raise HomebrewException(self.message)
opts = (
[self.brew_path, 'uninstall', '--force']
+ self.install_options
+ [self.current_package]
)
cmd = [opt for opt in opts if opt]
rc, out, err = self.module.run_command(cmd)
if not self._current_package_is_installed():
self.changed_count += 1
self.changed = True
self.message = 'Package uninstalled: {0}'.format(self.current_package)
return True
else:
self.failed = True
self.message = err.strip()
raise HomebrewException(self.message)
def _uninstall_packages(self):
for package in self.packages:
self.current_package = package
self._uninstall_current_package()
return True
# /uninstalled ----------------------------- }}}
# linked --------------------------------- {{{
def _link_current_package(self):
if not self.valid_package(self.current_package):
self.failed = True
self.message = 'Invalid package: {0}.'.format(self.current_package)
raise HomebrewException(self.message)
if not self._current_package_is_installed():
self.failed = True
self.message = 'Package not installed: {0}.'.format(self.current_package)
raise HomebrewException(self.message)
if self.module.check_mode:
self.changed = True
self.message = 'Package would be linked: {0}'.format(
self.current_package
)
raise HomebrewException(self.message)
opts = (
[self.brew_path, 'link']
+ self.install_options
+ [self.current_package]
)
cmd = [opt for opt in opts if opt]
rc, out, err = self.module.run_command(cmd)
if rc == 0:
self.changed_count += 1
self.changed = True
self.message = 'Package linked: {0}'.format(self.current_package)
return True
else:
self.failed = True
self.message = 'Package could not be linked: {0}.'.format(self.current_package)
raise HomebrewException(self.message)
def _link_packages(self):
for package in self.packages:
self.current_package = package
self._link_current_package()
return True
# /linked -------------------------------- }}}
# unlinked ------------------------------- {{{
def _unlink_current_package(self):
if not self.valid_package(self.current_package):
self.failed = True
self.message = 'Invalid package: {0}.'.format(self.current_package)
raise HomebrewException(self.message)
if not self._current_package_is_installed():
self.failed = True
self.message = 'Package not installed: {0}.'.format(self.current_package)
raise HomebrewException(self.message)
if self.module.check_mode:
self.changed = True
self.message = 'Package would be unlinked: {0}'.format(
self.current_package
)
raise HomebrewException(self.message)
opts = (
[self.brew_path, 'unlink']
+ self.install_options
+ [self.current_package]
)
cmd = [opt for opt in opts if opt]
rc, out, err = self.module.run_command(cmd)
if rc == 0:
self.changed_count += 1
self.changed = True
self.message = 'Package unlinked: {0}'.format(self.current_package)
return True
else:
self.failed = True
self.message = 'Package could not be unlinked: {0}.'.format(self.current_package)
raise HomebrewException(self.message)
def _unlink_packages(self):
for package in self.packages:
self.current_package = package
self._unlink_current_package()
return True
# /unlinked ------------------------------ }}}
# /commands ---------------------------------------------------- }}}
def main():
module = AnsibleModule(
argument_spec=dict(
name=dict(
aliases=["pkg", "package", "formula"],
required=False,
type='list',
elements='str',
),
path=dict(
default="/usr/local/bin",
required=False,
type='path',
),
state=dict(
default="present",
choices=[
"present", "installed",
"latest", "upgraded", "head",
"linked", "unlinked",
"absent", "removed", "uninstalled",
],
),
update_homebrew=dict(
default=False,
aliases=["update-brew"],
type='bool',
),
upgrade_all=dict(
default=False,
aliases=["upgrade"],
type='bool',
),
install_options=dict(
default=None,
aliases=['options'],
type='list',
)
),
supports_check_mode=True,
)
module.run_command_environ_update = dict(LANG='C', LC_ALL='C', LC_MESSAGES='C', LC_CTYPE='C')
p = module.params
if p['name']:
packages = p['name']
else:
packages = None
path = p['path']
if path:
path = path.split(':')
state = p['state']
if state in ('present', 'installed'):
state = 'installed'
if state in ('head', ):
state = 'head'
if state in ('latest', 'upgraded'):
state = 'upgraded'
if state == 'linked':
state = 'linked'
if state == 'unlinked':
state = 'unlinked'
if state in ('absent', 'removed', 'uninstalled'):
state = 'absent'
update_homebrew = p['update_homebrew']
upgrade_all = p['upgrade_all']
p['install_options'] = p['install_options'] or []
install_options = ['--{0}'.format(install_option)
for install_option in p['install_options']]
brew = Homebrew(module=module, path=path, packages=packages,
state=state, update_homebrew=update_homebrew,
upgrade_all=upgrade_all, install_options=install_options)
(failed, changed, message) = brew.run()
if failed:
module.fail_json(msg=message)
else:
module.exit_json(changed=changed, msg=message)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,955 |
Optional strip on ios_banner
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
The ios_banner currently strips any text passed to it, making certain banner formats impossible to replicate properly.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ios_banner
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
Add a boolean switch to the module params (`strip`, perhaps) that would conditionally strip the text passed in the param. It could be optional and default to `True`. A hypothetical sketch follows the example below.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
ios_banner:
strip: no
banner: login
present: yes
text: "{{ lookup('template', 'banner.j2') }}"
```
with the `banner.j2` looking like
```
WARNING
This is a private system if you
do not belong here, bugger off
or something.
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
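A hypothetical sketch of how the option could be wired in, assuming the reporter's proposed `strip` name (illustrative, not necessarily what the linked PR merged); note the unconditional `.strip()` in `map_obj_to_commands()` would need the same guard:

```python
argument_spec = dict(
    banner=dict(required=True, choices=['login', 'motd', 'exec', 'incoming', 'slip-ppp']),
    text=dict(),
    strip=dict(type='bool', default=True),  # proposed switch; True keeps today's behavior
    state=dict(default='present', choices=['present', 'absent'])
)

def map_params_to_obj(module):
    text = module.params['text']
    if text and module.params['strip']:  # only strip when asked to
        text = str(text).strip()
    return {
        'banner': module.params['banner'],
        'text': text,
        'state': module.params['state'],
    }
```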
|
https://github.com/ansible/ansible/issues/61955
|
https://github.com/ansible/ansible/pull/62573
|
da07b98b7a433493728ddb7ac7efbd20b8988776
|
52f3ce8a808f943561803bd664e695fed1841fe8
| 2019-09-06T21:51:14Z |
python
| 2019-12-27T07:13:09Z |
lib/ansible/modules/network/ios/ios_banner.py
|
#!/usr/bin/python
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'network'}
DOCUMENTATION = """
---
module: ios_banner
version_added: "2.3"
author: "Ricardo Carrillo Cruz (@rcarrillocruz)"
short_description: Manage multiline banners on Cisco IOS devices
description:
- This will configure both login and motd banners on remote devices
running Cisco IOS. It allows playbooks to add or remove
banner text from the active running configuration.
extends_documentation_fragment: ios
notes:
- Tested against IOS 15.6
options:
banner:
description:
- Specifies which banner should be configured on the remote device.
In Ansible 2.4 and earlier only I(login) and I(motd) were supported.
required: true
choices: ['login', 'motd', 'exec', 'incoming', 'slip-ppp']
text:
description:
- The banner text that should be
present in the remote device running configuration. This argument
accepts a multiline string, with no empty lines. Requires I(state=present).
state:
description:
- Specifies whether or not the configuration is
present in the current devices active running configuration.
default: present
choices: ['present', 'absent']
"""
EXAMPLES = """
- name: configure the login banner
ios_banner:
banner: login
text: |
this is my login banner
that contains a multiline
string
state: present
- name: remove the motd banner
ios_banner:
banner: motd
state: absent
- name: Configure banner from file
ios_banner:
banner: motd
text: "{{ lookup('file', './config_partial/raw_banner.cfg') }}"
state: present
"""
RETURN = """
commands:
description: The list of configuration mode commands to send to the device
returned: always
type: list
sample:
- banner login
- this is my login banner
- that contains a multiline
- string
"""
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.connection import exec_command
from ansible.module_utils.network.ios.ios import load_config
from ansible.module_utils.network.ios.ios import ios_argument_spec
import re
def map_obj_to_commands(updates, module):
commands = list()
want, have = updates
state = module.params['state']
if state == 'absent' and 'text' in have.keys() and have['text']:
commands.append('no banner %s' % module.params['banner'])
elif state == 'present':
if want['text'] and (want['text'] != have.get('text')):
banner_cmd = 'banner %s' % module.params['banner']
banner_cmd += ' @\n'
banner_cmd += want['text'].strip()
banner_cmd += '\n@'
commands.append(banner_cmd)
return commands
def map_config_to_obj(module):
rc, out, err = exec_command(module, 'show banner %s' % module.params['banner'])
if rc == 0:
output = out
else:
rc, out, err = exec_command(module,
'show running-config | begin banner %s'
% module.params['banner'])
if out:
output = re.search(r'\^C(.*?)\^C', out, re.S).group(1).strip()
else:
output = None
obj = {'banner': module.params['banner'], 'state': 'absent'}
if output:
obj['text'] = output
obj['state'] = 'present'
return obj
def map_params_to_obj(module):
text = module.params['text']
if text:
text = str(text).strip()
return {
'banner': module.params['banner'],
'text': text,
'state': module.params['state']
}
def main():
""" main entry point for module execution
"""
argument_spec = dict(
banner=dict(required=True, choices=['login', 'motd', 'exec', 'incoming', 'slip-ppp']),
text=dict(),
state=dict(default='present', choices=['present', 'absent'])
)
argument_spec.update(ios_argument_spec)
required_if = [('state', 'present', ('text',))]
module = AnsibleModule(argument_spec=argument_spec,
required_if=required_if,
supports_check_mode=True)
warnings = list()
result = {'changed': False}
if warnings:
result['warnings'] = warnings
want = map_params_to_obj(module)
have = map_config_to_obj(module)
commands = map_obj_to_commands((want, have), module)
result['commands'] = commands
if commands:
if not module.check_mode:
load_config(module, commands)
result['changed'] = True
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,955 |
Optional strip on ios_banner
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
The ios_banner currently strips any text passed to it, making certain banner formats impossible to replicate properly.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ios_banner
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
Add a boolean switch to the module params (`strip`, perhaps) that would conditionally strip the text passed in the param. It could be optional and default to `True`. A hypothetical test sketch follows the example below.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
ios_banner:
strip: no
banner: login
present: yes
text: "{{ lookup('template', 'banner.j2') }}"
```
with the `banner.j2` looking like
```
WARNING
This is a private system if you
do not belong here, bugger off
or something.
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
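This record carries the module's unit tests; a hypothetical test for the proposed behavior, assuming a boolean `strip` option is added (name and semantics are the reporter's suggestion, not merged API):

```python
def test_ios_banner_create_no_strip(self):
    # With strip disabled, leading/trailing newlines should survive into the command.
    set_module_args(dict(banner='login', text='\ntest\nbanner\nstring\n', strip=False))
    commands = ['banner login @\n\ntest\nbanner\nstring\n\n@']
    self.execute_module(changed=True, commands=commands)
```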
|
https://github.com/ansible/ansible/issues/61955
|
https://github.com/ansible/ansible/pull/62573
|
da07b98b7a433493728ddb7ac7efbd20b8988776
|
52f3ce8a808f943561803bd664e695fed1841fe8
| 2019-09-06T21:51:14Z |
python
| 2019-12-27T07:13:09Z |
test/units/modules/network/ios/test_ios_banner.py
|
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from units.compat.mock import patch
from ansible.modules.network.ios import ios_banner
from units.modules.utils import set_module_args
from .ios_module import TestIosModule, load_fixture
class TestIosBannerModule(TestIosModule):
module = ios_banner
def setUp(self):
super(TestIosBannerModule, self).setUp()
self.mock_exec_command = patch('ansible.modules.network.ios.ios_banner.exec_command')
self.exec_command = self.mock_exec_command.start()
self.mock_load_config = patch('ansible.modules.network.ios.ios_banner.load_config')
self.load_config = self.mock_load_config.start()
def tearDown(self):
super(TestIosBannerModule, self).tearDown()
self.mock_exec_command.stop()
self.mock_load_config.stop()
def load_fixtures(self, commands=None):
self.exec_command.return_value = (0, load_fixture('ios_banner_show_banner.txt').strip(), None)
self.load_config.return_value = dict(diff=None, session='session')
def test_ios_banner_create(self):
for banner_type in ('login', 'motd', 'exec', 'incoming', 'slip-ppp'):
set_module_args(dict(banner=banner_type, text='test\nbanner\nstring'))
commands = ['banner {0} @\ntest\nbanner\nstring\n@'.format(banner_type)]
self.execute_module(changed=True, commands=commands)
def test_ios_banner_remove(self):
set_module_args(dict(banner='login', state='absent'))
commands = ['no banner login']
self.execute_module(changed=True, commands=commands)
def test_ios_banner_nochange(self):
banner_text = load_fixture('ios_banner_show_banner.txt').strip()
set_module_args(dict(banner='login', text=banner_text))
self.execute_module()
class TestIosBannerIos12Module(TestIosBannerModule):
def load_fixtures(self, commands):
show_banner_return_value = (1, '', None)
show_running_config_return_value = \
(0, load_fixture('ios_banner_show_running_config_ios12.txt').strip(), None)
self.exec_command.side_effect = [show_banner_return_value,
show_running_config_return_value]
def test_ios_banner_nochange(self):
banner_text = load_fixture('ios_banner_show_banner.txt').strip()
set_module_args(dict(banner='exec', text=banner_text))
self.execute_module()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,515 |
ios_vlans throws an error
|
##### SUMMARY
The play
```
- name: Delete all VLANs
ios_vlans:
state: deleted
```
Throws an error when executed.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
module: ios_vlans
##### ANSIBLE VERSION
```
$ ansible --version
ansible 2.9.0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/misch/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.8 (default, Oct 7 2019, 17:58:22) [GCC 8.2.1 20180905 (Red Hat 8.2.1-3)]
```
##### CONFIGURATION
```
---
- hosts: sccs-2ug-07
tasks:
- name: Delete all VLANs
ios_vlans:
state: deleted
```
##### OS / ENVIRONMENT
CentOS 8, Cisco C9200L
##### STEPS TO REPRODUCE
```
$ ansible-playbook ios_vlans.yml
PLAY [sccs-2ug-07] ******************************************************************************************************
TASK [Gathering Facts] **************************************************************************************************
[WARNING]: Ignoring timeout(10) for ios_facts
ok: [sccs-2ug-07]
TASK [Delete all VLANs] *************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: 'filter' object is not subscriptable
fatal: [sccs-2ug-07]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/home/misch/.ansible/tmp/ansible-local-112400ljdojbw/ansible-tmp-1573048754.5352461-196207205443014/AnsiballZ_ios_vlans.py\", line 102, in <module>\n _ansiballz_main()\n File \"/home/misch/.ansible/tmp/ansible-local-112400ljdojbw/ansible-tmp-1573048754.5352461-196207205443014/AnsiballZ_ios_vlans.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/home/misch/.ansible/tmp/ansible-local-112400ljdojbw/ansible-tmp-1573048754.5352461-196207205443014/AnsiballZ_ios_vlans.py\", line 40, in invoke_module\n runpy.run_module(mod_name='ansible.modules.network.ios.ios_vlans', init_globals=None, run_name='__main__', alter_sys=False)\n File \"/usr/lib64/python3.6/runpy.py\", line 208, in run_module\n return _run_code(code, {}, init_globals, run_name, mod_spec)\n File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_ios_vlans_payload_8jji30h2/ansible_ios_vlans_payload.zip/ansible/modules/network/ios/ios_vlans.py\", line 464, in <module>\n File \"/tmp/ansible_ios_vlans_payload_8jji30h2/ansible_ios_vlans_payload.zip/ansible/modules/network/ios/ios_vlans.py\", line 459, in main\n File \"/tmp/ansible_ios_vlans_payload_8jji30h2/ansible_ios_vlans_payload.zip/ansible/module_utils/network/ios/config/vlans/vlans.py\", line 63, in execute_module\n File \"/tmp/ansible_ios_vlans_payload_8jji30h2/ansible_ios_vlans_payload.zip/ansible/module_utils/network/ios/config/vlans/vlans.py\", line 47, in get_interfaces_facts\n File \"/tmp/ansible_ios_vlans_payload_8jji30h2/ansible_ios_vlans_payload.zip/ansible/module_utils/network/ios/facts/facts.py\", line 68, in get_facts\n File \"/tmp/ansible_ios_vlans_payload_8jji30h2/ansible_ios_vlans_payload.zip/ansible/module_utils/network/common/facts/facts.py\", line 105, in get_network_resources_facts\n File \"/tmp/ansible_ios_vlans_payload_8jji30h2/ansible_ios_vlans_payload.zip/ansible/module_utils/network/ios/facts/vlans/vlans.py\", line 69, in populate_facts\n File \"/tmp/ansible_ios_vlans_payload_8jji30h2/ansible_ios_vlans_payload.zip/ansible/module_utils/network/ios/facts/vlans/vlans.py\", line 115, in render_config\nTypeError: 'filter' object is not subscriptable\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
PLAY RECAP **************************************************************************************************************
sccs-2ug-07 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
##### EXPECTED RESULTS
Should delete all VLANs on the Cisco switch.
##### ACTUAL RESULTS
See error message above.
For comparison: A
```
- name: Create VLANs
ios_vlan:
vlan_id: 100
name: VLAN_100
state: present
```
works flawlessly.
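The traceback points at `render_config()` in `vlans.py` (shown later in this record): under Python 3, `filter()` returns a lazy iterator, which supports membership tests but not indexing, so `conf[0]` raises the TypeError. A minimal sketch of the kind of change that restores Python 3 compatibility (illustrative; the merged PR may differ):

```python
# Python 3: materialize the filter iterator before indexing into it.
conf = list(filter(None, conf.split(' ')))
config['vlan_id'] = int(conf[0])
config['name'] = conf[1]
```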
|
https://github.com/ansible/ansible/issues/64515
|
https://github.com/ansible/ansible/pull/64633
|
b733178bac980fba0af5dda9ff1218ca7e2c2676
|
c5a2ba5217dbb75998f61d86b83700b912f29054
| 2019-11-06T14:02:10Z |
python
| 2019-12-27T15:19:29Z |
lib/ansible/module_utils/network/ios/facts/vlans/vlans.py
|
#
# -*- coding: utf-8 -*-
# Copyright 2019 Red Hat
# GNU General Public License v3.0+
# (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""
The ios vlans fact class
It is in this file the configuration is collected from the device
for a given resource, parsed, and the facts tree is populated
based on the configuration.
"""
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from copy import deepcopy
from ansible.module_utils.network.common import utils
from ansible.module_utils.network.ios.argspec.vlans.vlans import VlansArgs
class VlansFacts(object):
""" The ios vlans fact class
"""
def __init__(self, module, subspec='config', options='options'):
self._module = module
self.argument_spec = VlansArgs.argument_spec
spec = deepcopy(self.argument_spec)
if subspec:
if options:
facts_argument_spec = spec[subspec][options]
else:
facts_argument_spec = spec[subspec]
else:
facts_argument_spec = spec
self.generated_spec = utils.generate_dict(facts_argument_spec)
def populate_facts(self, connection, ansible_facts, data=None):
""" Populate the facts for vlans
:param connection: the device connection
:param ansible_facts: Facts dictionary
:param data: previously collected conf
:rtype: dictionary
:returns: facts
"""
if connection:
pass
objs = []
mtu_objs = []
remote_objs = []
final_objs = []
if not data:
data = connection.get('show vlan')
# operate on a collection of resource x
config = data.split('\n')
# Get individual vlan configs separately
vlan_info = ''
for conf in config:
if 'Name' in conf:
vlan_info = 'Name'
elif 'Type' in conf:
vlan_info = 'Type'
elif 'Remote' in conf:
vlan_info = 'Remote'
if conf and ' ' not in filter(None, conf.split('-')):
obj = self.render_config(self.generated_spec, conf, vlan_info)
if 'mtu' in obj:
mtu_objs.append(obj)
elif 'remote_span' in obj:
remote_objs = obj
elif obj:
objs.append(obj)
# Appending MTU value to the retrieved dictionary
for o, m in zip(objs, mtu_objs):
o.update(m)
final_objs.append(o)
# Appending Remote Span value to related VLAN:
if remote_objs:
if remote_objs.get('remote_span'):
for each in remote_objs.get('remote_span'):
for every in final_objs:
if each == every.get('vlan_id'):
every.update({'remote_span': True})
break
facts = {}
if final_objs:
facts['vlans'] = []
params = utils.validate_config(self.argument_spec, {'config': objs})
for cfg in params['config']:
facts['vlans'].append(utils.remove_empties(cfg))
ansible_facts['ansible_network_resources'].update(facts)
return ansible_facts
def render_config(self, spec, conf, vlan_info):
"""
Render config as dictionary structure and delete keys
from spec for null values
:param spec: The facts tree, generated from the argspec
:param conf: The configuration
:rtype: dictionary
:returns: The generated config
"""
config = deepcopy(spec)
if vlan_info == 'Name' and 'Name' not in conf:
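# NOTE: on Python 3, filter() returns an iterator, so the conf[0] indexing
# below raises TypeError, which is the bug this issue reports.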
conf = filter(None, conf.split(' '))
config['vlan_id'] = int(conf[0])
config['name'] = conf[1]
if len(conf[2].split('/')) > 1:
if conf[2].split('/')[0] == 'sus':
config['state'] = 'suspend'
elif conf[2].split('/')[0] == 'act':
config['state'] = 'active'
config['shutdown'] = 'enabled'
else:
if conf[2] == 'suspended':
config['state'] = 'suspend'
elif conf[2] == 'active':
config['state'] = 'active'
config['shutdown'] = 'disabled'
elif vlan_info == 'Type' and 'Type' not in conf:
conf = filter(None, conf.split(' '))
config['mtu'] = int(conf[3])
elif vlan_info == 'Remote':
if len(conf.split(',')) > 1 or conf.isdigit():
remote_span_vlan = []
if len(conf.split(',')) > 1:
remote_span_vlan = conf.split(',')
else:
remote_span_vlan.append(conf)
remote_span = []
for each in remote_span_vlan:
remote_span.append(int(each))
config['remote_span'] = remote_span
return utils.remove_empties(config)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,815 |
docker_network with multiple subnets always changes
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When using `docker_network` to create a network with multiple subnets, the task will delete and re-create the network even if it already exists with the correct subnets. Ansible fails to recognize that the existing subnets match, probably because of the way the lists of subnets are compared in Python.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
docker_network
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.2
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/gunix/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.0 (default, Oct 23 2019, 18:51:26) [GCC 9.2.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible/ansible.log
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = True
INTERPRETER_PYTHON(/etc/ansible/ansible.cfg) = /usr/bin/python3
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Both systems are running ArchLinux.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: "deploy network namespace that can hold all IPs"
docker_network:
name: "macvlan1"
driver: "macvlan"
internal: false
driver_options:
parent: "{{ ansible_default_ipv4.alias }}"
ipam_config: "{{ macvlan_subnets }}"
```
also vars:
```
macvlan_subnets:
- gateway: 10.162.208.1
subnet: 10.162.208.0/24
- gateway: 10.162.223.1
subnet: 10.162.223.0/24
- gateway: 10.162.210.1
subnet: 10.162.210.0/24
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I was expecting to run the play 10 times and get Changed only on the first run and OK on the other 9 runs.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The docker network ALWAYS changes, even if the subnets are correct on the server, causing all docker containers on the network to disconnect. This will cause downtime for all the services that run on the node.
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [gen4 : deploy network namespace that can hold all IPs] ****************************************************************
--- before
+++ after
@@ -1,19 +1,19 @@
{
- "connected.10.162.208.129": false,
- "connected.10.162.210.161": false,
- "connected.10.162.210.169": false,
- "connected.10.162.210.170": false,
- "connected.10.162.210.171": false,
- "connected.10.162.210.172": false,
- "connected.10.162.210.173": false,
- "connected.10.162.223.72": false,
- "connected.10.162.223.73": false,
- "connected.10.162.223.74": false,
- "connected.10.162.223.75": false,
- "connected.10.162.223.76": false,
+ "connected.10.162.208.129": true,
+ "connected.10.162.210.161": true,
+ "connected.10.162.210.169": true,
+ "connected.10.162.210.170": true,
+ "connected.10.162.210.171": true,
+ "connected.10.162.210.172": true,
+ "connected.10.162.210.173": true,
+ "connected.10.162.223.72": true,
+ "connected.10.162.223.73": true,
+ "connected.10.162.223.74": true,
+ "connected.10.162.223.75": true,
+ "connected.10.162.223.76": true,
"exists": true,
- "ipam_config[0].gateway": "10.162.210.1",
- "ipam_config[0].subnet": "10.162.210.0/24",
- "ipam_config[1].gateway": "10.162.210.1",
- "ipam_config[1].subnet": "10.162.210.0/24"
+ "ipam_config[0].gateway": "10.162.208.1",
+ "ipam_config[0].subnet": "10.162.208.0/24",
+ "ipam_config[1].gateway": "10.162.223.1",
+ "ipam_config[1].subnet": "10.162.223.0/24"
}
changed: [server1337.gun1x]
```
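One plausible way to make the module's comparison order-insensitive is to normalize both IPAM config lists before diffing. A minimal standalone sketch, assuming the suboption names documented below (not necessarily what the merged PR does):

```python
def ipam_config_matches(wanted, existing):
    """Order-insensitive comparison of two lists of IPAM config dicts."""
    def normalize(configs):
        # Drop unset keys, then sort by subnet so list order no longer matters.
        cleaned = [{k: v for k, v in (c or {}).items() if v is not None} for c in configs]
        return sorted(cleaned, key=lambda c: c.get('subnet') or '')
    return normalize(wanted) == normalize(existing)

# Example: same subnets in a different order compare equal.
print(ipam_config_matches(
    [{'subnet': '10.162.208.0/24'}, {'subnet': '10.162.223.0/24'}],
    [{'subnet': '10.162.223.0/24'}, {'subnet': '10.162.208.0/24'}],
))  # True
```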
|
https://github.com/ansible/ansible/issues/65815
|
https://github.com/ansible/ansible/pull/65839
|
30cfa92e90b3778b4edb6a8828f0b5f0e1b6356f
|
17ef253ad15fa6a02e1e22f5847aba85b60e151f
| 2019-12-13T17:51:58Z |
python
| 2019-12-29T22:16:17Z |
changelogs/fragments/65839-docker_network-idempotence.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,815 |
docker_network with multiple subnets always changes
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When using `docker_network` to create a network with multiple subnets, the task will delete and re-create the network even if it already exists with the correct subnets. Ansible fails to recognize that the existing subnets match, probably because of the way the lists of subnets are compared in Python.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
docker_network
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.2
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/gunix/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.0 (default, Oct 23 2019, 18:51:26) [GCC 9.2.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible/ansible.log
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = True
INTERPRETER_PYTHON(/etc/ansible/ansible.cfg) = /usr/bin/python3
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Both systems are running ArchLinux.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: "deploy network namespace that can hold all IPs"
docker_network:
name: "macvlan1"
driver: "macvlan"
internal: false
driver_options:
parent: "{{ ansible_default_ipv4.alias }}"
ipam_config: "{{ macvlan_subnets }}"
```
also vars:
```
macvlan_subnets:
- gateway: 10.162.208.1
subnet: 10.162.208.0/24
- gateway: 10.162.223.1
subnet: 10.162.223.0/24
- gateway: 10.162.210.1
subnet: 10.162.210.0/24
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I was expecting to run the play 10 times and get Changed only on the first run and OK on the other 9 runs.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The docker network ALWAYS changes, even if the subnets are correct on the server, causing all docker containers on the network to disconnect. This will cause downtime for all the services that run on the node.
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [gen4 : deploy network namespace that can hold all IPs] ****************************************************************
--- before
+++ after
@@ -1,19 +1,19 @@
{
- "connected.10.162.208.129": false,
- "connected.10.162.210.161": false,
- "connected.10.162.210.169": false,
- "connected.10.162.210.170": false,
- "connected.10.162.210.171": false,
- "connected.10.162.210.172": false,
- "connected.10.162.210.173": false,
- "connected.10.162.223.72": false,
- "connected.10.162.223.73": false,
- "connected.10.162.223.74": false,
- "connected.10.162.223.75": false,
- "connected.10.162.223.76": false,
+ "connected.10.162.208.129": true,
+ "connected.10.162.210.161": true,
+ "connected.10.162.210.169": true,
+ "connected.10.162.210.170": true,
+ "connected.10.162.210.171": true,
+ "connected.10.162.210.172": true,
+ "connected.10.162.210.173": true,
+ "connected.10.162.223.72": true,
+ "connected.10.162.223.73": true,
+ "connected.10.162.223.74": true,
+ "connected.10.162.223.75": true,
+ "connected.10.162.223.76": true,
"exists": true,
- "ipam_config[0].gateway": "10.162.210.1",
- "ipam_config[0].subnet": "10.162.210.0/24",
- "ipam_config[1].gateway": "10.162.210.1",
- "ipam_config[1].subnet": "10.162.210.0/24"
+ "ipam_config[0].gateway": "10.162.208.1",
+ "ipam_config[0].subnet": "10.162.208.0/24",
+ "ipam_config[1].gateway": "10.162.223.1",
+ "ipam_config[1].subnet": "10.162.223.0/24"
}
changed: [server1337.gun1x]
```
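A quick demonstration of the suspected root cause: Python compares lists element-wise, so the same subnets in a different order compare unequal unless normalized first:

```python
>>> a = [{'subnet': '10.162.208.0/24'}, {'subnet': '10.162.223.0/24'}]
>>> b = [{'subnet': '10.162.223.0/24'}, {'subnet': '10.162.208.0/24'}]
>>> a == b        # element-wise, order-sensitive
False
>>> sorted(a, key=lambda c: c['subnet']) == sorted(b, key=lambda c: c['subnet'])
True
```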
|
https://github.com/ansible/ansible/issues/65815
|
https://github.com/ansible/ansible/pull/65839
|
30cfa92e90b3778b4edb6a8828f0b5f0e1b6356f
|
17ef253ad15fa6a02e1e22f5847aba85b60e151f
| 2019-12-13T17:51:58Z |
python
| 2019-12-29T22:16:17Z |
lib/ansible/modules/cloud/docker/docker_network.py
|
#!/usr/bin/python
#
# Copyright 2016 Red Hat | Ansible
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
module: docker_network
version_added: "2.2"
short_description: Manage Docker networks
description:
- Create/remove Docker networks and connect containers to them.
- Performs largely the same function as the "docker network" CLI subcommand.
options:
name:
description:
- Name of the network to operate on.
type: str
required: yes
aliases:
- network_name
connected:
description:
- List of container names or container IDs to connect to a network.
- Please note that the module only makes sure that these containers are connected to the network,
but does not care about connection options. If you rely on specific IP addresses etc., use the
M(docker_container) module to ensure your containers are correctly connected to this network.
type: list
elements: str
aliases:
- containers
driver:
description:
- Specify the type of network. Docker provides bridge and overlay drivers, but 3rd party drivers can also be used.
type: str
default: bridge
driver_options:
description:
- Dictionary of network settings. Consult docker docs for valid options and values.
type: dict
force:
description:
- With state C(absent) forces disconnecting all containers from the
network prior to deleting the network. With state C(present) will
disconnect all containers, delete the network and re-create the
network.
- This option is required if you have changed the IPAM or driver options
and want an existing network to be updated to use the new options.
type: bool
default: no
appends:
description:
- By default the connected list is canonical, meaning containers not on the list are removed from the network.
- Use I(appends) to leave existing containers connected.
type: bool
default: no
aliases:
- incremental
enable_ipv6:
description:
- Enable IPv6 networking.
type: bool
version_added: "2.8"
ipam_driver:
description:
- Specify an IPAM driver.
type: str
ipam_driver_options:
description:
- Dictionary of IPAM driver options.
type: dict
version_added: "2.8"
ipam_options:
description:
- Dictionary of IPAM options.
- Deprecated in 2.8, will be removed in 2.12. Use parameter I(ipam_config) instead. In Docker 1.10.0, IPAM
options were introduced (see L(here,https://github.com/moby/moby/pull/17316)). This module parameter addresses
the IPAM config not the newly introduced IPAM options. For the IPAM options, see the I(ipam_driver_options)
parameter.
type: dict
suboptions:
subnet:
description:
          - IP subnet in CIDR notation.
type: str
iprange:
description:
- IP address range in CIDR notation.
type: str
gateway:
description:
- IP gateway address.
type: str
aux_addresses:
description:
- Auxiliary IP addresses used by Network driver, as a mapping from hostname to IP.
type: dict
ipam_config:
description:
- List of IPAM config blocks. Consult
L(Docker docs,https://docs.docker.com/compose/compose-file/compose-file-v2/#ipam) for valid options and values.
Note that I(iprange) is spelled differently here (we use the notation from the Docker SDK for Python).
type: list
elements: dict
suboptions:
subnet:
description:
          - IP subnet in CIDR notation.
type: str
iprange:
description:
- IP address range in CIDR notation.
type: str
gateway:
description:
- IP gateway address.
type: str
aux_addresses:
description:
- Auxiliary IP addresses used by Network driver, as a mapping from hostname to IP.
type: dict
version_added: "2.8"
state:
description:
- C(absent) deletes the network. If a network has connected containers, it
cannot be deleted. Use the I(force) option to disconnect all containers
and delete the network.
- C(present) creates the network, if it does not already exist with the
specified parameters, and connects the list of containers provided via
the connected parameter. Containers not on the list will be disconnected.
An empty list will leave no containers connected to the network. Use the
I(appends) option to leave existing containers connected. Use the I(force)
options to force re-creation of the network.
type: str
default: present
choices:
- absent
- present
internal:
description:
- Restrict external access to the network.
type: bool
version_added: "2.8"
labels:
description:
- Dictionary of labels.
type: dict
version_added: "2.8"
scope:
description:
- Specify the network's scope.
type: str
choices:
- local
- global
- swarm
version_added: "2.8"
attachable:
description:
- If enabled, and the network is in the global scope, non-service containers on worker nodes will be able to connect to the network.
type: bool
version_added: "2.8"
extends_documentation_fragment:
- docker
- docker.docker_py_1_documentation
notes:
- When network options are changed, the module disconnects all containers from the network, deletes the network, and re-creates the network.
    It does not try to reconnect containers, except the ones listed in I(connected), and even for these, it does not consider specific
    connection options like fixed IP addresses or MAC addresses. If you need more control over how the containers are connected to the
    network, use the M(docker_container) module in a loop over your containers to make sure they are connected properly.
- The module does not support Docker Swarm, i.e. it will not try to disconnect or reconnect services. If services are connected to the
network, deleting the network will fail. When network options are changed, the network has to be deleted and recreated, so this will
fail as well.
author:
- "Ben Keith (@keitwb)"
- "Chris Houseknecht (@chouseknecht)"
- "Dave Bendit (@DBendit)"
requirements:
- "L(Docker SDK for Python,https://docker-py.readthedocs.io/en/stable/) >= 1.10.0 (use L(docker-py,https://pypi.org/project/docker-py/) for Python 2.6)"
- "The docker server >= 1.10.0"
'''
EXAMPLES = '''
- name: Create a network
docker_network:
name: network_one
- name: Remove all but selected list of containers
docker_network:
name: network_one
connected:
- container_a
- container_b
- container_c
- name: Remove a single container
docker_network:
name: network_one
connected: "{{ fulllist|difference(['container_a']) }}"
- name: Add a container to a network, leaving existing containers connected
docker_network:
name: network_one
connected:
- container_a
appends: yes
- name: Create a network with driver options
docker_network:
name: network_two
driver_options:
com.docker.network.bridge.name: net2
- name: Create a network with custom IPAM config
docker_network:
name: network_three
ipam_config:
- subnet: 172.3.27.0/24
gateway: 172.3.27.2
iprange: 172.3.27.0/26
aux_addresses:
host1: 172.3.27.3
host2: 172.3.27.4
- name: Create a network with labels
docker_network:
name: network_four
labels:
key1: value1
key2: value2
- name: Create a network with IPv6 IPAM config
docker_network:
name: network_ipv6_one
enable_ipv6: yes
ipam_config:
- subnet: fdd1:ac8c:0557:7ce1::/64
- name: Create a network with IPv6 and custom IPv4 IPAM config
docker_network:
name: network_ipv6_two
enable_ipv6: yes
ipam_config:
- subnet: 172.4.27.0/24
- subnet: fdd1:ac8c:0557:7ce2::/64
- name: Delete a network, disconnecting all containers
docker_network:
name: network_one
state: absent
force: yes
'''
RETURN = '''
network:
description:
- Network inspection results for the affected network.
- Note that facts are part of the registered vars since Ansible 2.8. For compatibility reasons, the facts
are also accessible directly as C(docker_network). Note that the returned fact will be removed in Ansible 2.12.
returned: success
type: dict
sample: {}
'''
import re
import traceback
from distutils.version import LooseVersion
from ansible.module_utils.docker.common import (
AnsibleDockerClient,
DockerBaseClass,
docker_version,
DifferenceTracker,
clean_dict_booleans_for_docker_api,
RequestException,
)
try:
from docker import utils
from docker.errors import DockerException
if LooseVersion(docker_version) >= LooseVersion('2.0.0'):
from docker.types import IPAMPool, IPAMConfig
except Exception:
# missing Docker SDK for Python handled in ansible.module_utils.docker.common
pass
class TaskParameters(DockerBaseClass):
def __init__(self, client):
super(TaskParameters, self).__init__()
self.client = client
self.name = None
self.connected = None
self.driver = None
self.driver_options = None
self.ipam_driver = None
self.ipam_driver_options = None
self.ipam_options = None
self.ipam_config = None
self.appends = None
self.force = None
self.internal = None
self.labels = None
self.debug = None
self.enable_ipv6 = None
self.scope = None
self.attachable = None
for key, value in client.module.params.items():
setattr(self, key, value)
def container_names_in_network(network):
return [c['Name'] for c in network['Containers'].values()] if network['Containers'] else []
CIDR_IPV4 = re.compile(r'^([0-9]{1,3}\.){3}[0-9]{1,3}/([0-9]|[1-2][0-9]|3[0-2])$')
CIDR_IPV6 = re.compile(r'^[0-9a-fA-F:]+/(12[0-8]|1[01][0-9]|[1-9]?[0-9])$')  # prefix length 0-128
def get_ip_version(cidr):
"""Gets the IP version of a CIDR string
:param cidr: Valid CIDR
:type cidr: str
:return: ``ipv4`` or ``ipv6``
:rtype: str
:raises ValueError: If ``cidr`` is not a valid CIDR
"""
if CIDR_IPV4.match(cidr):
return 'ipv4'
elif CIDR_IPV6.match(cidr):
return 'ipv6'
raise ValueError('"{0}" is not a valid CIDR'.format(cidr))
def normalize_ipam_config_key(key):
"""Normalizes IPAM config keys returned by Docker API to match Ansible keys
:param key: Docker API key
:type key: str
    :return: Ansible module key
    :rtype: str
"""
special_cases = {
'AuxiliaryAddresses': 'aux_addresses'
}
return special_cases.get(key, key.lower())
class DockerNetworkManager(object):
def __init__(self, client):
self.client = client
self.parameters = TaskParameters(client)
self.check_mode = self.client.check_mode
self.results = {
u'changed': False,
u'actions': []
}
self.diff = self.client.module._diff
self.diff_tracker = DifferenceTracker()
self.diff_result = dict()
self.existing_network = self.get_existing_network()
if not self.parameters.connected and self.existing_network:
self.parameters.connected = container_names_in_network(self.existing_network)
if (self.parameters.ipam_options['subnet'] or self.parameters.ipam_options['iprange'] or
self.parameters.ipam_options['gateway'] or self.parameters.ipam_options['aux_addresses']):
self.parameters.ipam_config = [self.parameters.ipam_options]
if self.parameters.driver_options:
self.parameters.driver_options = clean_dict_booleans_for_docker_api(self.parameters.driver_options)
state = self.parameters.state
if state == 'present':
self.present()
elif state == 'absent':
self.absent()
if self.diff or self.check_mode or self.parameters.debug:
if self.diff:
self.diff_result['before'], self.diff_result['after'] = self.diff_tracker.get_before_after()
self.results['diff'] = self.diff_result
def get_existing_network(self):
return self.client.get_network(name=self.parameters.name)
def has_different_config(self, net):
'''
Evaluates an existing network and returns a tuple containing a boolean
indicating if the configuration is different and a list of differences.
:param net: the inspection output for an existing network
:return: (bool, list)
'''
differences = DifferenceTracker()
if self.parameters.driver and self.parameters.driver != net['Driver']:
differences.add('driver',
parameter=self.parameters.driver,
active=net['Driver'])
if self.parameters.driver_options:
if not net.get('Options'):
differences.add('driver_options',
parameter=self.parameters.driver_options,
active=net.get('Options'))
else:
for key, value in self.parameters.driver_options.items():
if not (key in net['Options']) or value != net['Options'][key]:
differences.add('driver_options.%s' % key,
parameter=value,
active=net['Options'].get(key))
if self.parameters.ipam_driver:
if not net.get('IPAM') or net['IPAM']['Driver'] != self.parameters.ipam_driver:
differences.add('ipam_driver',
parameter=self.parameters.ipam_driver,
active=net.get('IPAM'))
if self.parameters.ipam_driver_options is not None:
ipam_driver_options = net['IPAM'].get('Options') or {}
if ipam_driver_options != self.parameters.ipam_driver_options:
differences.add('ipam_driver_options',
parameter=self.parameters.ipam_driver_options,
active=ipam_driver_options)
if self.parameters.ipam_config is not None and self.parameters.ipam_config:
if not net.get('IPAM') or not net['IPAM']['Config']:
differences.add('ipam_config',
parameter=self.parameters.ipam_config,
active=net.get('IPAM', {}).get('Config'))
else:
for idx, ipam_config in enumerate(self.parameters.ipam_config):
net_config = dict()
try:
ip_version = get_ip_version(ipam_config['subnet'])
for net_ipam_config in net['IPAM']['Config']:
if ip_version == get_ip_version(net_ipam_config['Subnet']):
net_config = net_ipam_config
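                                # NOTE: the match is chosen by IP version alone, so with several
                                # subnets of the same version net_config is overwritten on every
                                # iteration and ends up as the last same-version entry (issue 65815).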
except ValueError as e:
self.client.fail(str(e))
for key, value in ipam_config.items():
if value is None:
# due to recursive argument_spec, all keys are always present
# (but have default value None if not specified)
continue
camelkey = None
for net_key in net_config:
if key == normalize_ipam_config_key(net_key):
camelkey = net_key
break
if not camelkey or net_config.get(camelkey) != value:
differences.add('ipam_config[%s].%s' % (idx, key),
parameter=value,
active=net_config.get(camelkey) if camelkey else None)
if self.parameters.enable_ipv6 is not None and self.parameters.enable_ipv6 != net.get('EnableIPv6', False):
differences.add('enable_ipv6',
parameter=self.parameters.enable_ipv6,
active=net.get('EnableIPv6', False))
if self.parameters.internal is not None and self.parameters.internal != net.get('Internal', False):
differences.add('internal',
parameter=self.parameters.internal,
active=net.get('Internal'))
if self.parameters.scope is not None and self.parameters.scope != net.get('Scope'):
differences.add('scope',
parameter=self.parameters.scope,
active=net.get('Scope'))
if self.parameters.attachable is not None and self.parameters.attachable != net.get('Attachable', False):
differences.add('attachable',
parameter=self.parameters.attachable,
active=net.get('Attachable'))
if self.parameters.labels:
if not net.get('Labels'):
differences.add('labels',
parameter=self.parameters.labels,
active=net.get('Labels'))
else:
for key, value in self.parameters.labels.items():
if not (key in net['Labels']) or value != net['Labels'][key]:
differences.add('labels.%s' % key,
parameter=value,
active=net['Labels'].get(key))
return not differences.empty, differences
def create_network(self):
if not self.existing_network:
params = dict(
driver=self.parameters.driver,
options=self.parameters.driver_options,
)
ipam_pools = []
if self.parameters.ipam_config:
for ipam_pool in self.parameters.ipam_config:
if LooseVersion(docker_version) >= LooseVersion('2.0.0'):
ipam_pools.append(IPAMPool(**ipam_pool))
else:
ipam_pools.append(utils.create_ipam_pool(**ipam_pool))
if self.parameters.ipam_driver or self.parameters.ipam_driver_options or ipam_pools:
# Only add ipam parameter if a driver was specified or if IPAM parameters
                # were specified. Leaving this parameter out can significantly speed up
                # creation; on my machine creation with this option needs ~15 seconds,
                # and without it just a few seconds.
if LooseVersion(docker_version) >= LooseVersion('2.0.0'):
params['ipam'] = IPAMConfig(driver=self.parameters.ipam_driver,
pool_configs=ipam_pools,
options=self.parameters.ipam_driver_options)
else:
params['ipam'] = utils.create_ipam_config(driver=self.parameters.ipam_driver,
pool_configs=ipam_pools)
if self.parameters.enable_ipv6 is not None:
params['enable_ipv6'] = self.parameters.enable_ipv6
if self.parameters.internal is not None:
params['internal'] = self.parameters.internal
if self.parameters.scope is not None:
params['scope'] = self.parameters.scope
if self.parameters.attachable is not None:
params['attachable'] = self.parameters.attachable
if self.parameters.labels:
params['labels'] = self.parameters.labels
if not self.check_mode:
resp = self.client.create_network(self.parameters.name, **params)
self.client.report_warnings(resp, ['Warning'])
self.existing_network = self.client.get_network(network_id=resp['Id'])
self.results['actions'].append("Created network %s with driver %s" % (self.parameters.name, self.parameters.driver))
self.results['changed'] = True
def remove_network(self):
if self.existing_network:
self.disconnect_all_containers()
if not self.check_mode:
self.client.remove_network(self.parameters.name)
self.results['actions'].append("Removed network %s" % (self.parameters.name,))
self.results['changed'] = True
def is_container_connected(self, container_name):
if not self.existing_network:
return False
return container_name in container_names_in_network(self.existing_network)
def connect_containers(self):
for name in self.parameters.connected:
if not self.is_container_connected(name):
if not self.check_mode:
self.client.connect_container_to_network(name, self.parameters.name)
self.results['actions'].append("Connected container %s" % (name,))
self.results['changed'] = True
self.diff_tracker.add('connected.{0}'.format(name),
parameter=True,
active=False)
def disconnect_missing(self):
if not self.existing_network:
return
containers = self.existing_network['Containers']
if not containers:
return
for c in containers.values():
name = c['Name']
if name not in self.parameters.connected:
self.disconnect_container(name)
def disconnect_all_containers(self):
containers = self.client.get_network(name=self.parameters.name)['Containers']
if not containers:
return
for cont in containers.values():
self.disconnect_container(cont['Name'])
def disconnect_container(self, container_name):
if not self.check_mode:
self.client.disconnect_container_from_network(container_name, self.parameters.name)
self.results['actions'].append("Disconnected container %s" % (container_name,))
self.results['changed'] = True
self.diff_tracker.add('connected.{0}'.format(container_name),
parameter=False,
active=True)
def present(self):
different = False
differences = DifferenceTracker()
if self.existing_network:
different, differences = self.has_different_config(self.existing_network)
self.diff_tracker.add('exists', parameter=True, active=self.existing_network is not None)
if self.parameters.force or different:
self.remove_network()
self.existing_network = None
self.create_network()
self.connect_containers()
if not self.parameters.appends:
self.disconnect_missing()
if self.diff or self.check_mode or self.parameters.debug:
self.diff_result['differences'] = differences.get_legacy_docker_diffs()
self.diff_tracker.merge(differences)
if not self.check_mode and not self.parameters.debug:
self.results.pop('actions')
network_facts = self.get_existing_network()
self.results['ansible_facts'] = {u'docker_network': network_facts}
self.results['network'] = network_facts
def absent(self):
self.diff_tracker.add('exists', parameter=False, active=self.existing_network is not None)
self.remove_network()
def main():
argument_spec = dict(
name=dict(type='str', required=True, aliases=['network_name']),
connected=dict(type='list', default=[], elements='str', aliases=['containers']),
state=dict(type='str', default='present', choices=['present', 'absent']),
driver=dict(type='str', default='bridge'),
driver_options=dict(type='dict', default={}),
force=dict(type='bool', default=False),
appends=dict(type='bool', default=False, aliases=['incremental']),
ipam_driver=dict(type='str'),
ipam_driver_options=dict(type='dict'),
ipam_options=dict(type='dict', default={}, options=dict(
subnet=dict(type='str'),
iprange=dict(type='str'),
gateway=dict(type='str'),
aux_addresses=dict(type='dict'),
), removed_in_version='2.12'),
ipam_config=dict(type='list', elements='dict', options=dict(
subnet=dict(type='str'),
iprange=dict(type='str'),
gateway=dict(type='str'),
aux_addresses=dict(type='dict'),
)),
enable_ipv6=dict(type='bool'),
internal=dict(type='bool'),
labels=dict(type='dict', default={}),
debug=dict(type='bool', default=False),
scope=dict(type='str', choices=['local', 'global', 'swarm']),
attachable=dict(type='bool'),
)
mutually_exclusive = [
('ipam_config', 'ipam_options')
]
option_minimal_versions = dict(
scope=dict(docker_py_version='2.6.0', docker_api_version='1.30'),
attachable=dict(docker_py_version='2.0.0', docker_api_version='1.26'),
labels=dict(docker_api_version='1.23'),
ipam_driver_options=dict(docker_py_version='2.0.0'),
)
client = AnsibleDockerClient(
argument_spec=argument_spec,
mutually_exclusive=mutually_exclusive,
supports_check_mode=True,
min_docker_version='1.10.0',
min_docker_api_version='1.22',
# "The docker server >= 1.10.0"
option_minimal_versions=option_minimal_versions,
)
try:
cm = DockerNetworkManager(client)
client.module.exit_json(**cm.results)
except DockerException as e:
client.fail('An unexpected docker error occurred: {0}'.format(e), exception=traceback.format_exc())
except RequestException as e:
client.fail('An unexpected requests error occurred when docker-py tried to talk to the docker daemon: {0}'.format(e), exception=traceback.format_exc())
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,815 |
docker_network with multiple subnets always changes
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When using `docker_network` to create a network with multiple subnets, the task deletes and re-creates the network even if it already exists with the correct subnets. Ansible fails to recognize that the existing subnets are correct, probably because of the way the lists of subnets are compared in Python.
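A minimal standalone sketch of the suspected flaw (simplified for illustration, not the module source): if each requested subnet is matched against the existing IPAM entries by IP version alone, every requested IPv4 subnet ends up compared against whichever existing IPv4 entry the loop saw last, so a network with more than one IPv4 subnet always looks changed:
```python
requested = ['10.162.208.0/24', '10.162.223.0/24', '10.162.210.0/24']
existing = ['10.162.208.0/24', '10.162.223.0/24', '10.162.210.0/24']

def ip_version(cidr):
    # crude stand-in for the module's get_ip_version()
    return 'ipv6' if ':' in cidr else 'ipv4'

for subnet in requested:
    match = None
    for net_subnet in existing:
        if ip_version(subnet) == ip_version(net_subnet):
            match = net_subnet  # keeps overwriting; ends as the last IPv4 entry
    print(subnet, '->', match)  # every subnet is compared to 10.162.210.0/24
```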
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
docker_network
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.2
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/gunix/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.0 (default, Oct 23 2019, 18:51:26) [GCC 9.2.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible/ansible.log
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = True
INTERPRETER_PYTHON(/etc/ansible/ansible.cfg) = /usr/bin/python3
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Both systems are running ArchLinux.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: "deploy network namespace that can hold all IPs"
docker_network:
name: "macvlan1"
driver: "macvlan"
internal: false
driver_options:
parent: "{{ ansible_default_ipv4.alias }}"
ipam_config: "{{ macvlan_subnets }}"
```
also vars:
```
macvlan_subnets:
- gateway: 10.162.208.1
subnet: 10.162.208.0/24
- gateway: 10.162.223.1
subnet: 10.162.223.0/24
- gateway: 10.162.210.1
subnet: 10.162.210.0/24
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I was expecting to run the play 10 times and get Changed only on the first run and OK on the other 9 runs.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The docker network ALWAYS changes, even if the subnets are correct on the server, causing all docker containers on the network to disconnect. This will cause downtime for all the services that run on the node.
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [gen4 : deploy network namespace that can hold all IPs] ****************************************************************
--- before
+++ after
@@ -1,19 +1,19 @@
{
- "connected.10.162.208.129": false,
- "connected.10.162.210.161": false,
- "connected.10.162.210.169": false,
- "connected.10.162.210.170": false,
- "connected.10.162.210.171": false,
- "connected.10.162.210.172": false,
- "connected.10.162.210.173": false,
- "connected.10.162.223.72": false,
- "connected.10.162.223.73": false,
- "connected.10.162.223.74": false,
- "connected.10.162.223.75": false,
- "connected.10.162.223.76": false,
+ "connected.10.162.208.129": true,
+ "connected.10.162.210.161": true,
+ "connected.10.162.210.169": true,
+ "connected.10.162.210.170": true,
+ "connected.10.162.210.171": true,
+ "connected.10.162.210.172": true,
+ "connected.10.162.210.173": true,
+ "connected.10.162.223.72": true,
+ "connected.10.162.223.73": true,
+ "connected.10.162.223.74": true,
+ "connected.10.162.223.75": true,
+ "connected.10.162.223.76": true,
"exists": true,
- "ipam_config[0].gateway": "10.162.210.1",
- "ipam_config[0].subnet": "10.162.210.0/24",
- "ipam_config[1].gateway": "10.162.210.1",
- "ipam_config[1].subnet": "10.162.210.0/24"
+ "ipam_config[0].gateway": "10.162.208.1",
+ "ipam_config[0].subnet": "10.162.208.0/24",
+ "ipam_config[1].gateway": "10.162.223.1",
+ "ipam_config[1].subnet": "10.162.223.0/24"
}
changed: [server1337.gun1x]
```
|
https://github.com/ansible/ansible/issues/65815
|
https://github.com/ansible/ansible/pull/65839
|
30cfa92e90b3778b4edb6a8828f0b5f0e1b6356f
|
17ef253ad15fa6a02e1e22f5847aba85b60e151f
| 2019-12-13T17:51:58Z |
python
| 2019-12-29T22:16:17Z |
test/integration/targets/docker_network/tasks/tests/ipam.yml
|
---
- name: Registering network names
set_fact:
nname_ipam_0: "{{ name_prefix ~ '-network-ipam-0' }}"
nname_ipam_1: "{{ name_prefix ~ '-network-ipam-1' }}"
nname_ipam_2: "{{ name_prefix ~ '-network-ipam-2' }}"
nname_ipam_3: "{{ name_prefix ~ '-network-ipam-3' }}"
- name: Registering network names
set_fact:
dnetworks: "{{ dnetworks + [nname_ipam_0, nname_ipam_1, nname_ipam_2, nname_ipam_3] }}"
#################### network-ipam-0 deprecated ####################
- name: Create network with ipam_config and deprecated ipam_options (conflicting)
docker_network:
name: "{{ nname_ipam_0 }}"
ipam_options:
subnet: 172.3.29.0/24
ipam_config:
- subnet: 172.3.29.0/24
register: network
ignore_errors: yes
- assert:
that:
- network is failed
- "network.msg == 'parameters are mutually exclusive: ipam_config|ipam_options'"
- name: Create network with deprecated custom IPAM options
docker_network:
name: "{{ nname_ipam_0 }}"
ipam_options:
subnet: 172.3.29.0/24
gateway: 172.3.29.2
iprange: 172.3.29.0/26
aux_addresses:
host1: 172.3.29.3
host2: 172.3.29.4
register: network
- assert:
that:
- network is changed
- name: Create network with deprecated custom IPAM options (idempotence)
docker_network:
name: "{{ nname_ipam_0 }}"
ipam_options:
subnet: 172.3.29.0/24
gateway: 172.3.29.2
iprange: 172.3.29.0/26
aux_addresses:
host1: 172.3.29.3
host2: 172.3.29.4
register: network
- assert:
that:
- network is not changed
- name: Change of network created with deprecated custom IPAM options
docker_network:
name: "{{ nname_ipam_0 }}"
ipam_options:
subnet: 172.3.28.0/24
gateway: 172.3.28.2
iprange: 172.3.28.0/26
aux_addresses:
host1: 172.3.28.3
register: network
diff: yes
- assert:
that:
- network is changed
- network.diff.differences | length == 4
- '"ipam_config[0].subnet" in network.diff.differences'
- '"ipam_config[0].gateway" in network.diff.differences'
- '"ipam_config[0].iprange" in network.diff.differences'
- '"ipam_config[0].aux_addresses" in network.diff.differences'
- name: Remove gateway and iprange of network with deprecated custom IPAM options
docker_network:
name: "{{ nname_ipam_0 }}"
ipam_options:
subnet: 172.3.28.0/24
register: network
- assert:
that:
- network is not changed
- name: Cleanup network with deprecated custom IPAM options
docker_network:
name: "{{ nname_ipam_0 }}"
state: absent
#################### network-ipam-1 ####################
- name: Create network with custom IPAM config
docker_network:
name: "{{ nname_ipam_1 }}"
ipam_config:
- subnet: 172.3.27.0/24
gateway: 172.3.27.2
iprange: 172.3.27.0/26
aux_addresses:
host1: 172.3.27.3
host2: 172.3.27.4
register: network
- assert:
that:
- network is changed
- name: Create network with custom IPAM config (idempotence)
docker_network:
name: "{{ nname_ipam_1 }}"
ipam_config:
- subnet: 172.3.27.0/24
gateway: 172.3.27.2
iprange: 172.3.27.0/26
aux_addresses:
host1: 172.3.27.3
host2: 172.3.27.4
register: network
- assert:
that:
- network is not changed
- name: Change of network created with custom IPAM config
docker_network:
name: "{{ nname_ipam_1 }}"
ipam_config:
- subnet: 172.3.28.0/24
gateway: 172.3.28.2
iprange: 172.3.28.0/26
aux_addresses:
host1: 172.3.28.3
register: network
diff: yes
- assert:
that:
- network is changed
- network.diff.differences | length == 4
- '"ipam_config[0].subnet" in network.diff.differences'
- '"ipam_config[0].gateway" in network.diff.differences'
- '"ipam_config[0].iprange" in network.diff.differences'
- '"ipam_config[0].aux_addresses" in network.diff.differences'
- name: Remove gateway and iprange of network with custom IPAM config
docker_network:
name: "{{ nname_ipam_1 }}"
ipam_config:
- subnet: 172.3.28.0/24
register: network
- assert:
that:
- network is not changed
- name: Cleanup network with custom IPAM config
docker_network:
name: "{{ nname_ipam_1 }}"
state: absent
#################### network-ipam-2 ####################
- name: Create network with IPv6 IPAM config
docker_network:
name: "{{ nname_ipam_2 }}"
enable_ipv6: yes
ipam_config:
- subnet: fdd1:ac8c:0557:7ce0::/64
register: network
- assert:
that:
- network is changed
- name: Create network with IPv6 IPAM config (idempotence)
docker_network:
name: "{{ nname_ipam_2 }}"
enable_ipv6: yes
ipam_config:
- subnet: fdd1:ac8c:0557:7ce0::/64
register: network
- assert:
that:
- network is not changed
- name: Change subnet of network with IPv6 IPAM config
docker_network:
name: "{{ nname_ipam_2 }}"
enable_ipv6: yes
ipam_config:
- subnet: fdd1:ac8c:0557:7ce1::/64
register: network
diff: yes
- assert:
that:
- network is changed
- network.diff.differences | length == 1
- network.diff.differences[0] == "ipam_config[0].subnet"
- name: Change subnet of network with IPv6 IPAM config
docker_network:
name: "{{ nname_ipam_2 }}"
enable_ipv6: yes
ipam_config:
- subnet: "fdd1:ac8c:0557:7ce1::"
register: network
ignore_errors: yes
- assert:
that:
- network is failed
- "network.msg == '\"fdd1:ac8c:0557:7ce1::\" is not a valid CIDR'"
- name: Cleanup network with IPv6 IPAM config
docker_network:
name: "{{ nname_ipam_2 }}"
state: absent
#################### network-ipam-3 ####################
- name: Create network with IPv6 and custom IPv4 IPAM config
docker_network:
name: "{{ nname_ipam_3 }}"
enable_ipv6: yes
ipam_config:
- subnet: 172.4.27.0/24
- subnet: fdd1:ac8c:0557:7ce2::/64
register: network
- assert:
that:
- network is changed
- name: Change subnet order of network with IPv6 and custom IPv4 IPAM config (idempotence)
docker_network:
name: "{{ nname_ipam_3 }}"
enable_ipv6: yes
ipam_config:
- subnet: fdd1:ac8c:0557:7ce2::/64
- subnet: 172.4.27.0/24
register: network
- assert:
that:
- network is not changed
- name: Remove IPv6 from network with custom IPv4 and IPv6 IPAM config (change)
docker_network:
name: "{{ nname_ipam_3 }}"
enable_ipv6: no
ipam_config:
- subnet: 172.4.27.0/24
register: network
diff: yes
- assert:
that:
- network is changed
- network.diff.differences | length == 1
- network.diff.differences[0] == "enable_ipv6"
- name: Cleanup network with IPv6 and custom IPv4 IPAM config
docker_network:
name: "{{ nname_ipam_3 }}"
state: absent
#################### network-ipam-4 ####################
- name: Create network with IPAM driver options
docker_network:
name: "{{ nname_ipam_3 }}"
ipam_driver: default
ipam_driver_options:
a: b
register: network_1
ignore_errors: yes
- name: Create network with IPAM driver options (idempotence)
docker_network:
name: "{{ nname_ipam_3 }}"
ipam_driver: default
ipam_driver_options:
a: b
diff: yes
register: network_2
ignore_errors: yes
- name: Create network with IPAM driver options (change)
docker_network:
name: "{{ nname_ipam_3 }}"
ipam_driver: default
ipam_driver_options:
a: c
diff: yes
register: network_3
ignore_errors: yes
- name: Cleanup network
docker_network:
name: "{{ nname_ipam_3 }}"
state: absent
- assert:
that:
- network_1 is changed
- network_2 is not changed
- network_3 is changed
when: docker_py_version is version('2.0.0', '>=')
- assert:
that:
- network_1 is failed
- "('version is ' ~ docker_py_version ~ ' ') in network_1.msg"
- "'Minimum version required is 2.0.0 ' in network_1.msg"
when: docker_py_version is version('2.0.0', '<')
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,815 |
docker_network with multiple subnets always changes
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When using `docker_network` to create a network with multiple subnets, the task deletes and re-creates the network even if it already exists with the correct subnets. Ansible fails to recognize that the existing subnets are correct, probably because of the way the lists of subnets are compared in Python.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
docker_network
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.2
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/gunix/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.0 (default, Oct 23 2019, 18:51:26) [GCC 9.2.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible/ansible.log
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = True
INTERPRETER_PYTHON(/etc/ansible/ansible.cfg) = /usr/bin/python3
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Both systems are running ArchLinux.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: "deploy network namespace that can hold all IPs"
docker_network:
name: "macvlan1"
driver: "macvlan"
internal: false
driver_options:
parent: "{{ ansible_default_ipv4.alias }}"
ipam_config: "{{ macvlan_subnets }}"
```
also vars:
```
macvlan_subnets:
- gateway: 10.162.208.1
subnet: 10.162.208.0/24
- gateway: 10.162.223.1
subnet: 10.162.223.0/24
- gateway: 10.162.210.1
subnet: 10.162.210.0/24
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I was expecting to run the play 10 times and get Changed only on the first run and OK on the other 9 runs.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The docker network ALWAYS changes, even if the subnets are correct on the server, causing all docker containers on the network to disconnect. This will cause downtime for all the services that run on the node.
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [gen4 : deploy network namespace that can hold all IPs] ****************************************************************
--- before
+++ after
@@ -1,19 +1,19 @@
{
- "connected.10.162.208.129": false,
- "connected.10.162.210.161": false,
- "connected.10.162.210.169": false,
- "connected.10.162.210.170": false,
- "connected.10.162.210.171": false,
- "connected.10.162.210.172": false,
- "connected.10.162.210.173": false,
- "connected.10.162.223.72": false,
- "connected.10.162.223.73": false,
- "connected.10.162.223.74": false,
- "connected.10.162.223.75": false,
- "connected.10.162.223.76": false,
+ "connected.10.162.208.129": true,
+ "connected.10.162.210.161": true,
+ "connected.10.162.210.169": true,
+ "connected.10.162.210.170": true,
+ "connected.10.162.210.171": true,
+ "connected.10.162.210.172": true,
+ "connected.10.162.210.173": true,
+ "connected.10.162.223.72": true,
+ "connected.10.162.223.73": true,
+ "connected.10.162.223.74": true,
+ "connected.10.162.223.75": true,
+ "connected.10.162.223.76": true,
"exists": true,
- "ipam_config[0].gateway": "10.162.210.1",
- "ipam_config[0].subnet": "10.162.210.0/24",
- "ipam_config[1].gateway": "10.162.210.1",
- "ipam_config[1].subnet": "10.162.210.0/24"
+ "ipam_config[0].gateway": "10.162.208.1",
+ "ipam_config[0].subnet": "10.162.208.0/24",
+ "ipam_config[1].gateway": "10.162.223.1",
+ "ipam_config[1].subnet": "10.162.223.0/24"
}
changed: [server1337.gun1x]
```
|
https://github.com/ansible/ansible/issues/65815
|
https://github.com/ansible/ansible/pull/65839
|
30cfa92e90b3778b4edb6a8828f0b5f0e1b6356f
|
17ef253ad15fa6a02e1e22f5847aba85b60e151f
| 2019-12-13T17:51:58Z |
python
| 2019-12-29T22:16:17Z |
test/units/modules/cloud/docker/test_docker_network.py
|
"""Unit tests for docker_network."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import pytest
from ansible.modules.cloud.docker.docker_network import get_ip_version
@pytest.mark.parametrize("cidr,expected", [
('192.168.0.1/16', 'ipv4'),
('192.168.0.1/24', 'ipv4'),
('192.168.0.1/32', 'ipv4'),
('fdd1:ac8c:0557:7ce2::/64', 'ipv6'),
('fdd1:ac8c:0557:7ce2::/128', 'ipv6'),
])
def test_get_ip_version_positives(cidr, expected):
assert get_ip_version(cidr) == expected
@pytest.mark.parametrize("cidr", [
'192.168.0.1',
'192.168.0.1/34',
'192.168.0.1/asd',
'fdd1:ac8c:0557:7ce2::',
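    'fdd1:ac8c:0557:7ce2::/129',  # hypothetical extra case: prefix length above 128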
])
def test_get_ip_version_negatives(cidr):
with pytest.raises(ValueError) as e:
get_ip_version(cidr)
assert '"{0}" is not a valid CIDR'.format(cidr) == str(e.value)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,811 |
docker_container with auto_remove:yes fails
|
##### SUMMARY
The `docker_container` module often fails because removal of a container started with `auto_remove: yes` takes some time. I had the same problem before with shell scripts, since `docker stop` returns immediately but container removal takes some time to finish. Until removal is finished, no new container with the same name can be created.
The error message is:
Error creating container: 409 Client Error: Conflict ("Conflict. The container name "/MYCONTAINER" is already in use by container "HASH". You have to remove (or rename) that container to be able to reuse that name.")
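A workaround sketch using the Docker SDK for Python (container name and timeout values assumed): poll the daemon until the auto-removed container is actually gone before creating its replacement.
```python
import time

import docker
from docker.errors import NotFound

client = docker.from_env()

def wait_until_removed(name, timeout=30.0, poll=0.1):
    """Return True once no container with this name exists any more."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            client.containers.get(name)  # still present; removal in progress
        except NotFound:
            return True
        time.sleep(poll)
    return False

wait_until_removed('MYCONTAINER')
```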
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
docker_container
##### ANSIBLE VERSION
```
ansible 2.9.2
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/USER/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.17rc1 (default, Oct 10 2019, 10:26:01) [GCC 9.2.1 20191008]
```
##### CONFIGURATION
_no changes_
##### OS / ENVIRONMENT
Ubuntu 19.10
##### STEPS TO REPRODUCE
1. Run playbook to create container
2. Apply many changes to the file system
3. Run playbook again
```yaml
docker_container:
name: SOMETHING
image: SOMETHING
state: started
detach: yes
recreate: yes
  auto_remove: yes
```
##### EXPECTED RESULTS
Wait for removal before trying to create a new container.
##### ACTUAL RESULTS
Command fails.
```
Error creating container: 409 Client Error: Conflict ("Conflict. The container name "/MYCONTAINER" is already in use by container "HASH". You have to remove (or rename) that container to be able to reuse that name.")
```
|
https://github.com/ansible/ansible/issues/65811
|
https://github.com/ansible/ansible/pull/65854
|
17ef253ad15fa6a02e1e22f5847aba85b60e151f
|
4df5bdb11eaed6e9d23d1de0ff7f420409cc1324
| 2019-12-13T16:07:13Z |
python
| 2019-12-29T22:16:32Z |
changelogs/fragments/65854-docker_container-wait-for-removal.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,811 |
docker_container with auto_remove:yes fails
|
##### SUMMARY
The `docker_container` module often fails because removal of a container started with `auto_remove: yes` takes some time. I had the same problem before with shell scripts, since `docker stop` returns immediately but container removal takes some time to finish. Until removal is finished, no new container with the same name can be created.
The error message is:
Error creating container: 409 Client Error: Conflict ("Conflict. The container name "/MYCONTAINER" is already in use by container "HASH". You have to remove (or rename) that container to be able to reuse that name.")
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
docker_container
##### ANSIBLE VERSION
```
ansible 2.9.2
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/USER/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.17rc1 (default, Oct 10 2019, 10:26:01) [GCC 9.2.1 20191008]
```
##### CONFIGURATION
_no changes_
##### OS / ENVIRONMENT
Ubuntu 19.10
##### STEPS TO REPRODUCE
1. Run playbook to create container
2. Apply many changes to the file system
3. Run playbook again
```yaml
docker_container:
name: SOMETHING
image: SOMETHING
state: started
detach: yes
recreate: yes
  auto_remove: yes
```
##### EXPECTED RESULTS
Wait for removal before trying to create a new container.
##### ACTUAL RESULTS
Command fails.
```
Error creating container: 409 Client Error: Conflict ("Conflict. The container name "/MYCONTAINER" is already in use by container "HASH". You have to remove (or rename) that container to be able to reuse that name.")
```
|
https://github.com/ansible/ansible/issues/65811
|
https://github.com/ansible/ansible/pull/65854
|
17ef253ad15fa6a02e1e22f5847aba85b60e151f
|
4df5bdb11eaed6e9d23d1de0ff7f420409cc1324
| 2019-12-13T16:07:13Z |
python
| 2019-12-29T22:16:32Z |
lib/ansible/module_utils/docker/common.py
|
#
# Copyright 2016 Red Hat | Ansible
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import platform
import re
import sys
from datetime import timedelta
from distutils.version import LooseVersion
from ansible.module_utils.basic import AnsibleModule, env_fallback, missing_required_lib
from ansible.module_utils.common._collections_compat import Mapping, Sequence
from ansible.module_utils.six import string_types
from ansible.module_utils.six.moves.urllib.parse import urlparse
from ansible.module_utils.parsing.convert_bool import BOOLEANS_TRUE, BOOLEANS_FALSE
HAS_DOCKER_PY = True
HAS_DOCKER_PY_2 = False
HAS_DOCKER_PY_3 = False
HAS_DOCKER_ERROR = None
try:
from requests.exceptions import SSLError
from docker import __version__ as docker_version
from docker.errors import APIError, NotFound, TLSParameterError
from docker.tls import TLSConfig
from docker import auth
if LooseVersion(docker_version) >= LooseVersion('3.0.0'):
HAS_DOCKER_PY_3 = True
from docker import APIClient as Client
elif LooseVersion(docker_version) >= LooseVersion('2.0.0'):
HAS_DOCKER_PY_2 = True
from docker import APIClient as Client
else:
from docker import Client
except ImportError as exc:
HAS_DOCKER_ERROR = str(exc)
HAS_DOCKER_PY = False
# The next 2 imports ``docker.models`` and ``docker.ssladapter`` are used
# to ensure the user does not have both ``docker`` and ``docker-py`` modules
# installed, as they utilize the same namespace and are incompatible
try:
# docker (Docker SDK for Python >= 2.0.0)
import docker.models # noqa: F401
HAS_DOCKER_MODELS = True
except ImportError:
HAS_DOCKER_MODELS = False
try:
# docker-py (Docker SDK for Python < 2.0.0)
import docker.ssladapter # noqa: F401
HAS_DOCKER_SSLADAPTER = True
except ImportError:
HAS_DOCKER_SSLADAPTER = False
try:
from requests.exceptions import RequestException
except ImportError:
# Either docker-py is no longer using requests, or docker-py isn't around either,
# or docker-py's dependency requests is missing. In any case, define an exception
# class RequestException so that our code doesn't break.
class RequestException(Exception):
pass
DEFAULT_DOCKER_HOST = 'unix://var/run/docker.sock'
DEFAULT_TLS = False
DEFAULT_TLS_VERIFY = False
DEFAULT_TLS_HOSTNAME = 'localhost'
MIN_DOCKER_VERSION = "1.8.0"
DEFAULT_TIMEOUT_SECONDS = 60
DOCKER_COMMON_ARGS = dict(
docker_host=dict(type='str', default=DEFAULT_DOCKER_HOST, fallback=(env_fallback, ['DOCKER_HOST']), aliases=['docker_url']),
tls_hostname=dict(type='str', default=DEFAULT_TLS_HOSTNAME, fallback=(env_fallback, ['DOCKER_TLS_HOSTNAME'])),
api_version=dict(type='str', default='auto', fallback=(env_fallback, ['DOCKER_API_VERSION']), aliases=['docker_api_version']),
timeout=dict(type='int', default=DEFAULT_TIMEOUT_SECONDS, fallback=(env_fallback, ['DOCKER_TIMEOUT'])),
ca_cert=dict(type='path', aliases=['tls_ca_cert', 'cacert_path']),
client_cert=dict(type='path', aliases=['tls_client_cert', 'cert_path']),
client_key=dict(type='path', aliases=['tls_client_key', 'key_path']),
ssl_version=dict(type='str', fallback=(env_fallback, ['DOCKER_SSL_VERSION'])),
tls=dict(type='bool', default=DEFAULT_TLS, fallback=(env_fallback, ['DOCKER_TLS'])),
validate_certs=dict(type='bool', default=DEFAULT_TLS_VERIFY, fallback=(env_fallback, ['DOCKER_TLS_VERIFY']), aliases=['tls_verify']),
debug=dict(type='bool', default=False)
)
DOCKER_MUTUALLY_EXCLUSIVE = []
DOCKER_REQUIRED_TOGETHER = [
['client_cert', 'client_key']
]
DEFAULT_DOCKER_REGISTRY = 'https://index.docker.io/v1/'
EMAIL_REGEX = r'[^@]+@[^@]+\.[^@]+'
BYTE_SUFFIXES = ['B', 'KB', 'MB', 'GB', 'TB', 'PB']
if not HAS_DOCKER_PY:
docker_version = None
    # No Docker SDK for Python. Create a placeholder client to allow
    # instantiation of AnsibleModule and proper error handling
class Client(object): # noqa: F811
def __init__(self, **kwargs):
pass
class APIError(Exception): # noqa: F811
pass
class NotFound(Exception): # noqa: F811
pass
def is_image_name_id(name):
"""Check whether the given image name is in fact an image ID (hash)."""
if re.match('^sha256:[0-9a-fA-F]{64}$', name):
return True
return False
def is_valid_tag(tag, allow_empty=False):
"""Check whether the given string is a valid docker tag name."""
if not tag:
return allow_empty
# See here ("Extended description") for a definition what tags can be:
# https://docs.docker.com/engine/reference/commandline/tag/
return bool(re.match('^[a-zA-Z0-9_][a-zA-Z0-9_.-]{0,127}$', tag))
def sanitize_result(data):
"""Sanitize data object for return to Ansible.
When the data object contains types such as docker.types.containers.HostConfig,
Ansible will fail when these are returned via exit_json or fail_json.
HostConfig is derived from dict, but its constructor requires additional
arguments. This function sanitizes data structures by recursively converting
everything derived from dict to dict and everything derived from list (and tuple)
to a list.
"""
if isinstance(data, dict):
return dict((k, sanitize_result(v)) for k, v in data.items())
elif isinstance(data, (list, tuple)):
return [sanitize_result(v) for v in data]
else:
return data
class DockerBaseClass(object):
def __init__(self):
self.debug = False
def log(self, msg, pretty_print=False):
pass
# if self.debug:
# log_file = open('docker.log', 'a')
# if pretty_print:
# log_file.write(json.dumps(msg, sort_keys=True, indent=4, separators=(',', ': ')))
# log_file.write(u'\n')
# else:
# log_file.write(msg + u'\n')
def update_tls_hostname(result):
if result['tls_hostname'] is None:
# get default machine name from the url
parsed_url = urlparse(result['docker_host'])
if ':' in parsed_url.netloc:
result['tls_hostname'] = parsed_url.netloc[:parsed_url.netloc.rindex(':')]
else:
            # use the whole netloc when no port is present (parsed_url itself is a ParseResult, not a hostname)
            result['tls_hostname'] = parsed_url.netloc
def _get_tls_config(fail_function, **kwargs):
try:
tls_config = TLSConfig(**kwargs)
return tls_config
except TLSParameterError as exc:
fail_function("TLS config error: %s" % exc)
def get_connect_params(auth, fail_function):
if auth['tls'] or auth['tls_verify']:
auth['docker_host'] = auth['docker_host'].replace('tcp://', 'https://')
if auth['tls_verify'] and auth['cert_path'] and auth['key_path']:
# TLS with certs and host verification
if auth['cacert_path']:
tls_config = _get_tls_config(client_cert=(auth['cert_path'], auth['key_path']),
ca_cert=auth['cacert_path'],
verify=True,
assert_hostname=auth['tls_hostname'],
ssl_version=auth['ssl_version'],
fail_function=fail_function)
else:
tls_config = _get_tls_config(client_cert=(auth['cert_path'], auth['key_path']),
verify=True,
assert_hostname=auth['tls_hostname'],
ssl_version=auth['ssl_version'],
fail_function=fail_function)
return dict(base_url=auth['docker_host'],
tls=tls_config,
version=auth['api_version'],
timeout=auth['timeout'])
if auth['tls_verify'] and auth['cacert_path']:
# TLS with cacert only
tls_config = _get_tls_config(ca_cert=auth['cacert_path'],
assert_hostname=auth['tls_hostname'],
verify=True,
ssl_version=auth['ssl_version'],
fail_function=fail_function)
return dict(base_url=auth['docker_host'],
tls=tls_config,
version=auth['api_version'],
timeout=auth['timeout'])
if auth['tls_verify']:
# TLS with verify and no certs
tls_config = _get_tls_config(verify=True,
assert_hostname=auth['tls_hostname'],
ssl_version=auth['ssl_version'],
fail_function=fail_function)
return dict(base_url=auth['docker_host'],
tls=tls_config,
version=auth['api_version'],
timeout=auth['timeout'])
if auth['tls'] and auth['cert_path'] and auth['key_path']:
# TLS with certs and no host verification
tls_config = _get_tls_config(client_cert=(auth['cert_path'], auth['key_path']),
verify=False,
ssl_version=auth['ssl_version'],
fail_function=fail_function)
return dict(base_url=auth['docker_host'],
tls=tls_config,
version=auth['api_version'],
timeout=auth['timeout'])
if auth['tls']:
        # TLS with no certs and no host verification
tls_config = _get_tls_config(verify=False,
ssl_version=auth['ssl_version'],
fail_function=fail_function)
return dict(base_url=auth['docker_host'],
tls=tls_config,
version=auth['api_version'],
timeout=auth['timeout'])
# No TLS
return dict(base_url=auth['docker_host'],
version=auth['api_version'],
timeout=auth['timeout'])
DOCKERPYUPGRADE_SWITCH_TO_DOCKER = "Try `pip uninstall docker-py` followed by `pip install docker`."
DOCKERPYUPGRADE_UPGRADE_DOCKER = "Use `pip install --upgrade docker` to upgrade."
DOCKERPYUPGRADE_RECOMMEND_DOCKER = ("Use `pip install --upgrade docker-py` to upgrade. "
"Hint: if you do not need Python 2.6 support, try "
"`pip uninstall docker-py` instead, followed by `pip install docker`.")
class AnsibleDockerClient(Client):
def __init__(self, argument_spec=None, supports_check_mode=False, mutually_exclusive=None,
required_together=None, required_if=None, min_docker_version=MIN_DOCKER_VERSION,
min_docker_api_version=None, option_minimal_versions=None,
option_minimal_versions_ignore_params=None, fail_results=None):
# Modules can put information in here which will always be returned
# in case client.fail() is called.
self.fail_results = fail_results or {}
merged_arg_spec = dict()
merged_arg_spec.update(DOCKER_COMMON_ARGS)
if argument_spec:
merged_arg_spec.update(argument_spec)
self.arg_spec = merged_arg_spec
mutually_exclusive_params = []
mutually_exclusive_params += DOCKER_MUTUALLY_EXCLUSIVE
if mutually_exclusive:
mutually_exclusive_params += mutually_exclusive
required_together_params = []
required_together_params += DOCKER_REQUIRED_TOGETHER
if required_together:
required_together_params += required_together
self.module = AnsibleModule(
argument_spec=merged_arg_spec,
supports_check_mode=supports_check_mode,
mutually_exclusive=mutually_exclusive_params,
required_together=required_together_params,
required_if=required_if)
NEEDS_DOCKER_PY2 = (LooseVersion(min_docker_version) >= LooseVersion('2.0.0'))
self.docker_py_version = LooseVersion(docker_version)
if HAS_DOCKER_MODELS and HAS_DOCKER_SSLADAPTER:
self.fail("Cannot have both the docker-py and docker python modules (old and new version of Docker "
"SDK for Python) installed together as they use the same namespace and cause a corrupt "
"installation. Please uninstall both packages, and re-install only the docker-py or docker "
"python module (for %s's Python %s). It is recommended to install the docker module if no "
"support for Python 2.6 is required. Please note that simply uninstalling one of the modules "
"can leave the other module in a broken state." % (platform.node(), sys.executable))
if not HAS_DOCKER_PY:
if NEEDS_DOCKER_PY2:
msg = missing_required_lib("Docker SDK for Python: docker")
msg = msg + ", for example via `pip install docker`. The error was: %s"
else:
msg = missing_required_lib("Docker SDK for Python: docker (Python >= 2.7) or docker-py (Python 2.6)")
msg = msg + ", for example via `pip install docker` or `pip install docker-py` (Python 2.6). The error was: %s"
self.fail(msg % HAS_DOCKER_ERROR)
if self.docker_py_version < LooseVersion(min_docker_version):
msg = "Error: Docker SDK for Python version is %s (%s's Python %s). Minimum version required is %s."
if not NEEDS_DOCKER_PY2:
# The minimal required version is < 2.0 (and the current version as well).
# Advertise docker (instead of docker-py) for non-Python-2.6 users.
msg += DOCKERPYUPGRADE_RECOMMEND_DOCKER
elif docker_version < LooseVersion('2.0'):
msg += DOCKERPYUPGRADE_SWITCH_TO_DOCKER
else:
msg += DOCKERPYUPGRADE_UPGRADE_DOCKER
self.fail(msg % (docker_version, platform.node(), sys.executable, min_docker_version))
self.debug = self.module.params.get('debug')
self.check_mode = self.module.check_mode
self._connect_params = get_connect_params(self.auth_params, fail_function=self.fail)
try:
super(AnsibleDockerClient, self).__init__(**self._connect_params)
self.docker_api_version_str = self.version()['ApiVersion']
except APIError as exc:
self.fail("Docker API error: %s" % exc)
except Exception as exc:
self.fail("Error connecting: %s" % exc)
self.docker_api_version = LooseVersion(self.docker_api_version_str)
if min_docker_api_version is not None:
if self.docker_api_version < LooseVersion(min_docker_api_version):
self.fail('Docker API version is %s. Minimum version required is %s.' % (self.docker_api_version_str, min_docker_api_version))
if option_minimal_versions is not None:
self._get_minimal_versions(option_minimal_versions, option_minimal_versions_ignore_params)
def log(self, msg, pretty_print=False):
pass
# if self.debug:
# log_file = open('docker.log', 'a')
# if pretty_print:
# log_file.write(json.dumps(msg, sort_keys=True, indent=4, separators=(',', ': ')))
# log_file.write(u'\n')
# else:
# log_file.write(msg + u'\n')
def fail(self, msg, **kwargs):
self.fail_results.update(kwargs)
self.module.fail_json(msg=msg, **sanitize_result(self.fail_results))
@staticmethod
def _get_value(param_name, param_value, env_variable, default_value):
if param_value is not None:
# take module parameter value
if param_value in BOOLEANS_TRUE:
return True
if param_value in BOOLEANS_FALSE:
return False
return param_value
if env_variable is not None:
env_value = os.environ.get(env_variable)
if env_value is not None:
# take the env variable value
if param_name == 'cert_path':
return os.path.join(env_value, 'cert.pem')
if param_name == 'cacert_path':
return os.path.join(env_value, 'ca.pem')
if param_name == 'key_path':
return os.path.join(env_value, 'key.pem')
if env_value in BOOLEANS_TRUE:
return True
if env_value in BOOLEANS_FALSE:
return False
return env_value
# take the default
return default_value
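# A minimal illustration (not from the original source) of the precedence
# _get_value() implements: explicit module parameter first, then the
# environment variable, then the default. The values below are hypothetical.
#
# os.environ['DOCKER_HOST'] = 'tcp://10.0.0.5:2376'
# AnsibleDockerClient._get_value('docker_host', None, 'DOCKER_HOST', DEFAULT_DOCKER_HOST)
# # -> 'tcp://10.0.0.5:2376' (parameter is None, so the env variable wins)
# AnsibleDockerClient._get_value('docker_host', 'unix://var/run/docker.sock', 'DOCKER_HOST', DEFAULT_DOCKER_HOST)
# # -> 'unix://var/run/docker.sock' (an explicit parameter always wins)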
@property
def auth_params(self):
# Get authentication credentials.
# Precedence: module parameters-> environment variables-> defaults.
self.log('Getting credentials')
params = dict()
for key in DOCKER_COMMON_ARGS:
params[key] = self.module.params.get(key)
if self.module.params.get('use_tls'):
# support use_tls option in docker_image.py. This will be deprecated.
use_tls = self.module.params.get('use_tls')
if use_tls == 'encrypt':
params['tls'] = True
if use_tls == 'verify':
params['validate_certs'] = True
result = dict(
docker_host=self._get_value('docker_host', params['docker_host'], 'DOCKER_HOST',
DEFAULT_DOCKER_HOST),
tls_hostname=self._get_value('tls_hostname', params['tls_hostname'],
'DOCKER_TLS_HOSTNAME', DEFAULT_TLS_HOSTNAME),
api_version=self._get_value('api_version', params['api_version'], 'DOCKER_API_VERSION',
'auto'),
cacert_path=self._get_value('cacert_path', params['ca_cert'], 'DOCKER_CERT_PATH', None),
cert_path=self._get_value('cert_path', params['client_cert'], 'DOCKER_CERT_PATH', None),
key_path=self._get_value('key_path', params['client_key'], 'DOCKER_CERT_PATH', None),
ssl_version=self._get_value('ssl_version', params['ssl_version'], 'DOCKER_SSL_VERSION', None),
tls=self._get_value('tls', params['tls'], 'DOCKER_TLS', DEFAULT_TLS),
tls_verify=self._get_value('tls_verify', params['validate_certs'], 'DOCKER_TLS_VERIFY',
DEFAULT_TLS_VERIFY),
timeout=self._get_value('timeout', params['timeout'], 'DOCKER_TIMEOUT',
DEFAULT_TIMEOUT_SECONDS),
)
update_tls_hostname(result)
return result
def _handle_ssl_error(self, error):
match = re.match(r"hostname.*doesn\'t match (\'.*\')", str(error))
if match:
self.fail("You asked for verification that Docker daemons certificate's hostname matches %s. "
"The actual certificate's hostname is %s. Most likely you need to set DOCKER_TLS_HOSTNAME "
"or pass `tls_hostname` with a value of %s. You may also use TLS without verification by "
"setting the `tls` parameter to true."
% (self.auth_params['tls_hostname'], match.group(1), match.group(1)))
self.fail("SSL Exception: %s" % (error))
def _get_minimal_versions(self, option_minimal_versions, ignore_params=None):
self.option_minimal_versions = dict()
for option in self.module.argument_spec:
if ignore_params is not None:
if option in ignore_params:
continue
self.option_minimal_versions[option] = dict()
self.option_minimal_versions.update(option_minimal_versions)
for option, data in self.option_minimal_versions.items():
# Test whether option is supported, and store result
support_docker_py = True
support_docker_api = True
if 'docker_py_version' in data:
support_docker_py = self.docker_py_version >= LooseVersion(data['docker_py_version'])
if 'docker_api_version' in data:
support_docker_api = self.docker_api_version >= LooseVersion(data['docker_api_version'])
data['supported'] = support_docker_py and support_docker_api
# Fail if option is not supported but used
if not data['supported']:
# Test whether option is specified
if 'detect_usage' in data:
used = data['detect_usage'](self)
else:
used = self.module.params.get(option) is not None
if used and 'default' in self.module.argument_spec[option]:
used = self.module.params[option] != self.module.argument_spec[option]['default']
if used:
# If the option is used, compose error message.
if 'usage_msg' in data:
usg = data['usage_msg']
else:
usg = 'set %s option' % (option, )
if not support_docker_api:
msg = 'Docker API version is %s. Minimum version required is %s to %s.'
msg = msg % (self.docker_api_version_str, data['docker_api_version'], usg)
elif not support_docker_py:
msg = "Docker SDK for Python version is %s (%s's Python %s). Minimum version required is %s to %s. "
if LooseVersion(data['docker_py_version']) < LooseVersion('2.0.0'):
msg += DOCKERPYUPGRADE_RECOMMEND_DOCKER
elif self.docker_py_version < LooseVersion('2.0.0'):
msg += DOCKERPYUPGRADE_SWITCH_TO_DOCKER
else:
msg += DOCKERPYUPGRADE_UPGRADE_DOCKER
msg = msg % (docker_version, platform.node(), sys.executable, data['docker_py_version'], usg)
else:
# should not happen
msg = 'Cannot %s with your configuration.' % (usg, )
self.fail(msg)
def get_container(self, name=None):
'''
Lookup a container and return the inspection results.
'''
if name is None:
return None
search_name = name
if not name.startswith('/'):
search_name = '/' + name
result = None
try:
for container in self.containers(all=True):
self.log("testing container: %s" % (container['Names']))
if isinstance(container['Names'], list) and search_name in container['Names']:
result = container
break
if container['Id'].startswith(name):
result = container
break
if container['Id'] == name:
result = container
break
except SSLError as exc:
self._handle_ssl_error(exc)
except Exception as exc:
self.fail("Error retrieving container list: %s" % exc)
if result is not None:
try:
self.log("Inspecting container Id %s" % result['Id'])
result = self.inspect_container(container=result['Id'])
self.log("Completed container inspection")
except NotFound as dummy:
return None
except Exception as exc:
self.fail("Error inspecting container: %s" % exc)
return result
def get_network(self, name=None, network_id=None):
'''
Lookup a network and return the inspection results.
'''
if name is None and network_id is None:
return None
result = None
if network_id is None:
try:
for network in self.networks():
self.log("testing network: %s" % (network['Name']))
if name == network['Name']:
result = network
break
if network['Id'].startswith(name):
result = network
break
except SSLError as exc:
self._handle_ssl_error(exc)
except Exception as exc:
self.fail("Error retrieving network list: %s" % exc)
if result is not None:
network_id = result['Id']
if network_id is not None:
try:
self.log("Inspecting network Id %s" % network_id)
result = self.inspect_network(network_id)
self.log("Completed network inspection")
except NotFound as dummy:
return None
except Exception as exc:
self.fail("Error inspecting network: %s" % exc)
return result
def find_image(self, name, tag):
'''
Lookup an image (by name and tag) and return the inspection results.
'''
if not name:
return None
self.log("Find image %s:%s" % (name, tag))
images = self._image_lookup(name, tag)
if not images:
# In API <= 1.20, images pulled from Docker Hub can show up as 'docker.io/<name>'
registry, repo_name = auth.resolve_repository_name(name)
if registry == 'docker.io':
# If docker.io is explicitly there in name, the image
# isn't found in some cases (#41509)
self.log("Check for docker.io image: %s" % repo_name)
images = self._image_lookup(repo_name, tag)
if not images and repo_name.startswith('library/'):
# Sometimes library/xxx images are not found
lookup = repo_name[len('library/'):]
self.log("Check for docker.io image: %s" % lookup)
images = self._image_lookup(lookup, tag)
if not images:
# Last case: if docker.io wasn't there, it can be that
# the image wasn't found either (#15586)
lookup = "%s/%s" % (registry, repo_name)
self.log("Check for docker.io image: %s" % lookup)
images = self._image_lookup(lookup, tag)
if len(images) > 1:
self.fail("Registry returned more than one result for %s:%s" % (name, tag))
if len(images) == 1:
try:
inspection = self.inspect_image(images[0]['Id'])
except Exception as exc:
self.fail("Error inspecting image %s:%s - %s" % (name, tag, str(exc)))
return inspection
self.log("Image %s:%s not found." % (name, tag))
return None
def find_image_by_id(self, image_id):
'''
Lookup an image (by ID) and return the inspection results.
'''
if not image_id:
return None
self.log("Find image %s (by ID)" % image_id)
try:
inspection = self.inspect_image(image_id)
except Exception as exc:
self.fail("Error inspecting image ID %s - %s" % (image_id, str(exc)))
return inspection
def _image_lookup(self, name, tag):
'''
Including a tag in the name parameter sent to the Docker SDK for Python images method
does not work consistently. Instead, get the result set for name and manually check
if the tag exists.
'''
try:
response = self.images(name=name)
except Exception as exc:
self.fail("Error searching for image %s - %s" % (name, str(exc)))
images = response
if tag:
lookup = "%s:%s" % (name, tag)
lookup_digest = "%s@%s" % (name, tag)
images = []
for image in response:
tags = image.get('RepoTags')
digests = image.get('RepoDigests')
if (tags and lookup in tags) or (digests and lookup_digest in digests):
images = [image]
break
return images
def pull_image(self, name, tag="latest"):
'''
Pull an image
'''
self.log("Pulling image %s:%s" % (name, tag))
old_tag = self.find_image(name, tag)
try:
for line in self.pull(name, tag=tag, stream=True, decode=True):
self.log(line, pretty_print=True)
if line.get('error'):
if line.get('errorDetail'):
error_detail = line.get('errorDetail')
self.fail("Error pulling %s - code: %s message: %s" % (name,
error_detail.get('code'),
error_detail.get('message')))
else:
self.fail("Error pulling %s - %s" % (name, line.get('error')))
except Exception as exc:
self.fail("Error pulling image %s:%s - %s" % (name, tag, str(exc)))
new_tag = self.find_image(name, tag)
return new_tag, old_tag == new_tag
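# Note: the second element of the tuple returned by pull_image() is True when
# the image did not change, i.e. the freshly pulled image inspection equals the
# one that was cached locally before the pull.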
def report_warnings(self, result, warnings_key=None):
'''
Checks result of client operation for warnings, and if present, outputs them.
warnings_key should be a list of keys used to crawl the result dictionary.
For example, if warnings_key == ['a', 'b'], the function will consider
result['a']['b'] if these keys exist. If the result is a non-empty string, it
will be reported as a warning. If the result is a list, every entry will be
reported as a warning.
In most cases (if warnings are returned at all), warnings_key should be
['Warnings'] or ['Warning']. The default value (if not specified) is ['Warnings'].
'''
if warnings_key is None:
warnings_key = ['Warnings']
for key in warnings_key:
if not isinstance(result, Mapping):
return
result = result.get(key)
if isinstance(result, Sequence):
for warning in result:
self.module.warn('Docker warning: {0}'.format(warning))
elif isinstance(result, string_types) and result:
self.module.warn('Docker warning: {0}'.format(result))
def inspect_distribution(self, image, **kwargs):
'''
Get image digest by directly calling the Docker API when running Docker SDK < 4.0.0
since prior versions did not support accessing private repositories.
'''
if self.docker_py_version < LooseVersion('4.0.0'):
registry = auth.resolve_repository_name(image)[0]
header = auth.get_config_header(self, registry)
if header:
return self._result(self._get(
self._url('/distribution/{0}/json', image),
headers={'X-Registry-Auth': header}
), json=True)
return super(AnsibleDockerClient, self).inspect_distribution(image, **kwargs)
def compare_dict_allow_more_present(av, bv):
'''
Compare two dictionaries for whether every entry of the first is in the second.
'''
for key, value in av.items():
if key not in bv:
return False
if bv[key] != value:
return False
return True
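# Illustrative behavior (hypothetical inputs): every entry of the first dict
# must be present, with an equal value, in the second dict.
#
# compare_dict_allow_more_present({'a': 1}, {'a': 1, 'b': 2})  # -> True
# compare_dict_allow_more_present({'a': 1}, {'a': 2, 'b': 2})  # -> False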
def compare_generic(a, b, method, datatype):
'''
Compare values a and b as described by method and datatype.
Returns ``True`` if the values compare equal, and ``False`` if not.
``a`` is usually the module's parameter, while ``b`` is a property
of the current object. ``a`` must not be ``None`` (except for
``datatype == 'value'``).
Valid values for ``method`` are:
- ``ignore`` (always compare as equal);
- ``strict`` (only compare if really equal);
- ``allow_more_present`` (allow b to have elements which a does not have).
Valid values for ``datatype`` are:
- ``value``: for simple values (strings, numbers, ...);
- ``list``: for ``list``s or ``tuple``s where order matters;
- ``set``: for ``list``s, ``tuple``s or ``set``s where order does not
matter;
- ``set(dict)``: for ``list``s, ``tuple``s or ``sets`` where order does
not matter and which contain ``dict``s; ``allow_more_present`` is used
for the ``dict``s, and these are assumed to be dictionaries of values;
- ``dict``: for dictionaries of values.
'''
if method == 'ignore':
return True
# If a or b is None:
if a is None or b is None:
# If both are None: equality
if a == b:
return True
# Otherwise, not equal for values, and equal
# if the other is empty for set/list/dict
if datatype == 'value':
return False
# For allow_more_present, allow a to be None
if method == 'allow_more_present' and a is None:
return True
# Otherwise, the iterable object which is not None must have length 0
return len(b if a is None else a) == 0
# Do proper comparison (both objects not None)
if datatype == 'value':
return a == b
elif datatype == 'list':
if method == 'strict':
return a == b
else:
i = 0
for v in a:
while i < len(b) and b[i] != v:
i += 1
if i == len(b):
return False
i += 1
return True
elif datatype == 'dict':
if method == 'strict':
return a == b
else:
return compare_dict_allow_more_present(a, b)
elif datatype == 'set':
set_a = set(a)
set_b = set(b)
if method == 'strict':
return set_a == set_b
else:
return set_b >= set_a
elif datatype == 'set(dict)':
for av in a:
found = False
for bv in b:
if compare_dict_allow_more_present(av, bv):
found = True
break
if not found:
return False
if method == 'strict':
# If we would know that both a and b do not contain duplicates,
# we could simply compare len(a) to len(b) to finish this test.
# We can assume that b has no duplicates (as it is returned by
# docker), but we don't know for a.
for bv in b:
found = False
for av in a:
if compare_dict_allow_more_present(av, bv):
found = True
break
if not found:
return False
return True
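# A few illustrative calls (hypothetical inputs; the results follow from the
# implementation above):
#
# compare_generic('a', 'a', 'strict', 'value')                      # -> True
# compare_generic([1, 2], [1, 3, 2], 'allow_more_present', 'list')  # -> True
# #   (a must appear in b as an ordered subsequence)
# compare_generic([1, 2], [2, 1], 'strict', 'set')                  # -> True
# #   (order does not matter for sets)
# compare_generic([{'x': 1}], [{'x': 1, 'y': 2}], 'allow_more_present', 'set(dict)')  # -> True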
class DifferenceTracker(object):
def __init__(self):
self._diff = []
def add(self, name, parameter=None, active=None):
self._diff.append(dict(
name=name,
parameter=parameter,
active=active,
))
def merge(self, other_tracker):
self._diff.extend(other_tracker._diff)
@property
def empty(self):
return len(self._diff) == 0
def get_before_after(self):
'''
Return texts ``before`` and ``after``.
'''
before = dict()
after = dict()
for item in self._diff:
before[item['name']] = item['active']
after[item['name']] = item['parameter']
return before, after
def has_difference_for(self, name):
'''
Returns a boolean if a difference exists for name
'''
return any(diff for diff in self._diff if diff['name'] == name)
def get_legacy_docker_container_diffs(self):
'''
Return differences in the docker_container legacy format.
'''
result = []
for entry in self._diff:
item = dict()
item[entry['name']] = dict(
parameter=entry['parameter'],
container=entry['active'],
)
result.append(item)
return result
def get_legacy_docker_diffs(self):
'''
Return the names of the differing options (legacy docker diff format).
'''
result = [entry['name'] for entry in self._diff]
return result
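# Illustrative use of DifferenceTracker (hypothetical values):
#
# tracker = DifferenceTracker()
# tracker.add('image', parameter='nginx:1.19', active='nginx:1.18')
# tracker.has_difference_for('image')  # -> True
# tracker.get_before_after()           # -> ({'image': 'nginx:1.18'}, {'image': 'nginx:1.19'})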
def clean_dict_booleans_for_docker_api(data):
'''
Go doesn't like Python booleans 'True' or 'False', while Ansible is just
fine with them in YAML. As such, they need to be converted in cases where
we pass dictionaries to the Docker API (e.g. docker_network's
driver_options and docker_prune's filters).
'''
result = dict()
if data is not None:
for k, v in data.items():
if v is True:
v = 'true'
elif v is False:
v = 'false'
else:
v = str(v)
result[str(k)] = v
return result
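# Example (hypothetical driver options):
#
# clean_dict_booleans_for_docker_api({'com.docker.network.bridge.enable_icc': True, 'mtu': 1500})
# # -> {'com.docker.network.bridge.enable_icc': 'true', 'mtu': '1500'}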
def convert_duration_to_nanosecond(time_str):
"""
Return time duration in nanosecond.
"""
if not isinstance(time_str, str):
raise ValueError('Duration must be a string (with a unit) - %s' % time_str)
regex = re.compile(
r'^(((?P<hours>\d+)h)?'
r'((?P<minutes>\d+)m(?!s))?'
r'((?P<seconds>\d+)s)?'
r'((?P<milliseconds>\d+)ms)?'
r'((?P<microseconds>\d+)us)?)$'
)
parts = regex.match(time_str)
if not parts:
raise ValueError('Invalid time duration - %s' % time_str)
parts = parts.groupdict()
time_params = {}
for (name, value) in parts.items():
if value:
time_params[name] = int(value)
delta = timedelta(**time_params)
time_in_nanoseconds = (
delta.microseconds + (delta.seconds + delta.days * 24 * 3600) * 10 ** 6
) * 10 ** 3
return time_in_nanoseconds
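# Examples:
#
# convert_duration_to_nanosecond('1m30s')  # -> 90000000000
# convert_duration_to_nanosecond('500ms')  # -> 500000000
# convert_duration_to_nanosecond(90)       # raises ValueError (input must be a string)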
def parse_healthcheck(healthcheck):
"""
Return dictionary of healthcheck parameters and boolean if
healthcheck defined in image was requested to be disabled.
"""
if (not healthcheck) or (not healthcheck.get('test')):
return None, None
result = dict()
# All supported healthcheck parameters
options = dict(
test='test',
interval='interval',
timeout='timeout',
start_period='start_period',
retries='retries'
)
duration_options = ['interval', 'timeout', 'start_period']
for (key, value) in options.items():
if value in healthcheck:
if healthcheck.get(value) is None:
# due to recursive argument_spec, all keys are always present
# (but have default value None if not specified)
continue
if value in duration_options:
time = convert_duration_to_nanosecond(healthcheck.get(value))
if time:
result[key] = time
elif healthcheck.get(value):
result[key] = healthcheck.get(value)
if key == 'test':
if isinstance(result[key], (tuple, list)):
result[key] = [str(e) for e in result[key]]
else:
result[key] = ['CMD-SHELL', str(result[key])]
elif key == 'retries':
try:
result[key] = int(result[key])
except ValueError:
raise ValueError(
'Cannot parse number of retries for healthcheck. '
'Expected an integer, got "{0}".'.format(result[key])
)
if result['test'] == ['NONE']:
# If the user explicitly disables the healthcheck, return None
# as the healthcheck object, and set disable_healthcheck to True
return None, True
return result, False
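# Illustrative usage (the dict mirrors the module's healthcheck option):
#
# parse_healthcheck({'test': ['CMD', 'curl', '--fail', 'http://localhost'],
#                    'interval': '30s', 'timeout': '10s', 'retries': 3})
# # -> ({'test': ['CMD', 'curl', '--fail', 'http://localhost'],
# #      'interval': 30000000000, 'timeout': 10000000000, 'retries': 3}, False)
# parse_healthcheck({'test': ['NONE']})  # -> (None, True): disable the image's healthcheck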
def omit_none_from_dict(d):
"""
Return a copy of the dictionary with all keys with value None omitted.
"""
return dict((k, v) for (k, v) in d.items() if v is not None)
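# Example: omit_none_from_dict({'a': 1, 'b': None})  # -> {'a': 1}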
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,811 |
docker_container with auto_remove:yes fails
|
##### SUMMARY
The `docker_container` module often fails because the container removal triggered by `auto_remove: yes` takes some time. I had the same problem before with shell scripts, since `docker stop` returns immediately but container removal takes some time to finish. Until removal is finished, no new container with the same name can be created.
The error message is:
Error creating container: 409 Client Error: Conflict ("Conflict. The container name "/MYCONTAINER" is already in use by container "HASH". You have to remove (or rename) that container to be able to reuse that name.")
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
docker_container
##### ANSIBLE VERSION
```
ansible 2.9.2
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/USER/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.17rc1 (default, Oct 10 2019, 10:26:01) [GCC 9.2.1 20191008]
```
##### CONFIGURATION
_no changes_
##### OS / ENVIRONMENT
Ubuntu 19.10
##### STEPS TO REPRODUCE
1. Run playbook to create container
2. Apply many changes to the file system
3. Run playbook again
```yaml
docker_container:
name: SOMETHING
image: SOMETHING
state: started
detach: yes
recreate: yes
auto_remove: yes
```
##### EXPECTED RESULTS
Wait for removal before trying to create a new container.
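A minimal sketch of that behavior, assuming the Docker SDK for Python (`docker`) is available; the helper name is illustrative, not the module's actual implementation:

```python
import time

import docker
from docker.errors import NotFound


def wait_until_removed(client, container_id, timeout=60):
    """Poll until a stopped auto_remove container has actually disappeared."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            client.containers.get(container_id)
        except NotFound:
            return True  # removal finished; the name can be reused now
        time.sleep(0.1)
    return False


# client = docker.from_env()
# wait_until_removed(client, 'HASH')  # then it is safe to recreate MYCONTAINER
```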
##### ACTUAL RESULTS
Command fails.
```
Error creating container: 409 Client Error: Conflict ("Conflict. The container name "/MYCONTAINER" is already in use by container "HASH". You have to remove (or rename) that container to be able to reuse that name.")
```
|
https://github.com/ansible/ansible/issues/65811
|
https://github.com/ansible/ansible/pull/65854
|
17ef253ad15fa6a02e1e22f5847aba85b60e151f
|
4df5bdb11eaed6e9d23d1de0ff7f420409cc1324
| 2019-12-13T16:07:13Z |
python
| 2019-12-29T22:16:32Z |
lib/ansible/modules/cloud/docker/docker_container.py
|
#!/usr/bin/python
#
# Copyright 2016 Red Hat | Ansible
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: docker_container
short_description: manage docker containers
description:
- Manage the life cycle of docker containers.
- Supports check mode. Run with C(--check) and C(--diff) to view config difference and list of actions to be taken.
version_added: "2.1"
notes:
- For most config changes, the container needs to be recreated, i.e. the existing container has to be destroyed and
a new one created. This can cause unexpected data loss and downtime. You can use the I(comparisons) option to
prevent this.
- If the module needs to recreate the container, it will only use the options provided to the module to create the
new container (except I(image)). Therefore, always specify *all* options relevant to the container.
- When I(restart) is set to C(true), the module will only restart the container if no config changes are detected.
Please note that several options have default values; if the container to be restarted uses different values for
these options, it will be recreated instead. The options with default values which can cause this are I(auto_remove),
I(detach), I(init), I(interactive), I(memory), I(paused), I(privileged), I(read_only) and I(tty). This behavior
can be changed by setting I(container_default_behavior) to C(no_defaults), which will be the default value from
Ansible 2.14 on.
options:
auto_remove:
description:
- Enable auto-removal of the container on daemon side when the container's process exits.
- If I(container_default_behavior) is set to C(compatibility) (the default value), this
option has a default of C(no).
type: bool
version_added: "2.4"
blkio_weight:
description:
- Block IO (relative weight), between 10 and 1000.
type: int
capabilities:
description:
- List of capabilities to add to the container.
type: list
elements: str
cap_drop:
description:
- List of capabilities to drop from the container.
type: list
elements: str
version_added: "2.7"
cleanup:
description:
- Use with I(detach=false) to remove the container after successful execution.
type: bool
default: no
version_added: "2.2"
command:
description:
- Command to execute when the container starts. A command may be either a string or a list.
- Prior to version 2.4, strings were split on commas.
type: raw
comparisons:
description:
- Allows to specify how properties of existing containers are compared with
module options to decide whether the container should be recreated / updated
or not.
- Only options which correspond to the state of a container as handled by the
Docker daemon can be specified, as well as C(networks).
- Must be a dictionary specifying for an option one of the keys C(strict), C(ignore)
and C(allow_more_present).
- If C(strict) is specified, values are tested for equality, and changes always
result in updating or restarting. If C(ignore) is specified, changes are ignored.
- C(allow_more_present) is allowed only for lists, sets and dicts. If it is
specified for lists or sets, the container will only be updated or restarted if
the module option contains a value which is not present in the container's
options. If the option is specified for a dict, the container will only be updated
or restarted if the module option contains a key which isn't present in the
container's option, or if the value of a key present differs.
- The wildcard option C(*) can be used to set one of the default values C(strict)
or C(ignore) to *all* comparisons which are not explicitly set to other values.
- See the examples for details.
type: dict
version_added: "2.8"
container_default_behavior:
description:
- Various module options used to have default values. This causes problems with
containers which use different values for these options.
- The default value is C(compatibility), which will ensure that the default values
are used when the values are not explicitly specified by the user.
- From Ansible 2.14 on, the default value will switch to C(no_defaults). To avoid
deprecation warnings, please set I(container_default_behavior) to an explicit
value.
- This affects the I(auto_remove), I(detach), I(init), I(interactive), I(memory),
I(paused), I(privileged), I(read_only) and I(tty) options.
type: str
choices:
- compatibility
- no_defaults
version_added: "2.10"
cpu_period:
description:
- Limit CPU CFS (Completely Fair Scheduler) period.
- See I(cpus) for an easier to use alternative.
type: int
cpu_quota:
description:
- Limit CPU CFS (Completely Fair Scheduler) quota.
- See I(cpus) for an easier to use alternative.
type: int
cpus:
description:
- Specify how much of the available CPU resources a container can use.
- A value of C(1.5) means that at most one and a half CPU (core) will be used.
type: float
version_added: '2.10'
cpuset_cpus:
description:
- CPUs in which to allow execution C(1,3) or C(1-3).
type: str
cpuset_mems:
description:
- Memory nodes (MEMs) in which to allow execution C(0-3) or C(0,1).
type: str
cpu_shares:
description:
- CPU shares (relative weight).
type: int
detach:
description:
- Enable detached mode to leave the container running in background.
- If disabled, the task will reflect the status of the container run (failed if the command failed).
- If I(container_default_behavior) is set to C(compatibility) (the default value), this
option has a default of C(yes).
type: bool
devices:
description:
- List of host device bindings to add to the container.
- "Each binding is a mapping expressed in the format C(<path_on_host>:<path_in_container>:<cgroup_permissions>)."
type: list
elements: str
device_read_bps:
description:
- "List of device path and read rate (bytes per second) from device."
type: list
elements: dict
suboptions:
path:
description:
- Device path in the container.
type: str
required: yes
rate:
description:
- "Device read limit in format C(<number>[<unit>])."
- "Number is a positive integer. Unit can be one of C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- "Omitting the unit defaults to bytes."
type: str
required: yes
version_added: "2.8"
device_write_bps:
description:
- "List of device and write rate (bytes per second) to device."
type: list
elements: dict
suboptions:
path:
description:
- Device path in the container.
type: str
required: yes
rate:
description:
- "Device read limit in format C(<number>[<unit>])."
- "Number is a positive integer. Unit can be one of C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- "Omitting the unit defaults to bytes."
type: str
required: yes
version_added: "2.8"
device_read_iops:
description:
- "List of device and read rate (IO per second) from device."
type: list
elements: dict
suboptions:
path:
description:
- Device path in the container.
type: str
required: yes
rate:
description:
- "Device read limit."
- "Must be a positive integer."
type: int
required: yes
version_added: "2.8"
device_write_iops:
description:
- "List of device and write rate (IO per second) to device."
type: list
elements: dict
suboptions:
path:
description:
- Device path in the container.
type: str
required: yes
rate:
description:
- "Device read limit."
- "Must be a positive integer."
type: int
required: yes
version_added: "2.8"
dns_opts:
description:
- List of DNS options.
type: list
elements: str
dns_servers:
description:
- List of custom DNS servers.
type: list
elements: str
dns_search_domains:
description:
- List of custom DNS search domains.
type: list
elements: str
domainname:
description:
- Container domainname.
type: str
version_added: "2.5"
env:
description:
- Dictionary of key,value pairs.
- Values which might be parsed as numbers, booleans or other types by the YAML parser must be quoted (e.g. C("true")) in order to avoid data loss.
type: dict
env_file:
description:
- Path to a file, present on the target, containing environment variables I(FOO=BAR).
- If variable also present in I(env), then the I(env) value will override.
type: path
version_added: "2.2"
entrypoint:
description:
- Command that overwrites the default C(ENTRYPOINT) of the image.
type: list
elements: str
etc_hosts:
description:
- Dict of host-to-IP mappings, where each host name is a key in the dictionary.
Each host name will be added to the container's C(/etc/hosts) file.
type: dict
exposed_ports:
description:
- List of additional container ports which informs Docker that the container
listens on the specified network ports at runtime.
- If the port is already exposed using C(EXPOSE) in a Dockerfile, it does not
need to be exposed again.
type: list
elements: str
aliases:
- exposed
- expose
force_kill:
description:
- Use the kill command when stopping a running container.
type: bool
default: no
aliases:
- forcekill
groups:
description:
- List of additional group names and/or IDs that the container process will run as.
type: list
elements: str
healthcheck:
description:
- Configure a check that is run to determine whether or not containers for this service are "healthy".
- "See the docs for the L(HEALTHCHECK Dockerfile instruction,https://docs.docker.com/engine/reference/builder/#healthcheck)
for details on how healthchecks work."
- "I(interval), I(timeout) and I(start_period) are specified as durations. They accept duration as a string in a format
that look like: C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
type: dict
suboptions:
test:
description:
- Command to run to check health.
- Must be either a string or a list. If it is a list, the first item must be one of C(NONE), C(CMD) or C(CMD-SHELL).
type: raw
interval:
description:
- Time between running the check.
- The default used by the Docker daemon is C(30s).
type: str
timeout:
description:
- Maximum time to allow one check to run.
- The default used by the Docker daemon is C(30s).
type: str
retries:
description:
- Consecutive number of failures needed to report unhealthy.
- The default used by the Docker daemon is C(3).
type: int
start_period:
description:
- Start period for the container to initialize before starting health-retries countdown.
- The default used by the Docker daemon is C(0s).
type: str
version_added: "2.8"
hostname:
description:
- The container's hostname.
type: str
ignore_image:
description:
- When I(state) is C(present) or C(started), the module compares the configuration of an existing
container to requested configuration. The evaluation includes the image version. If the image
version in the registry does not match the container, the container will be recreated. You can
stop this behavior by setting I(ignore_image) to C(True).
- "*Warning:* This option is ignored if C(image: ignore) or C(*: ignore) is specified in the
I(comparisons) option."
type: bool
default: no
version_added: "2.2"
image:
description:
- Repository path and tag used to create the container. If an image is not found or pull is true, the image
will be pulled from the registry. If no tag is included, C(latest) will be used.
- Can also be an image ID. If this is the case, the image is assumed to be available locally.
The I(pull) option is ignored for this case.
type: str
init:
description:
- Run an init inside the container that forwards signals and reaps processes.
- This option requires Docker API >= 1.25.
- If I(container_default_behavior) is set to C(compatibility) (the default value), this
option has a default of C(no).
type: bool
version_added: "2.6"
interactive:
description:
- Keep stdin open after a container is launched, even if not attached.
- If I(container_default_behavior) is set to C(compatibility) (the default value), this
option has a default of C(no).
type: bool
ipc_mode:
description:
- Set the IPC mode for the container.
- Can be one of C(container:<name|id>) to reuse another container's IPC namespace or C(host) to use
the host's IPC namespace within the container.
type: str
keep_volumes:
description:
- Retain volumes associated with a removed container.
type: bool
default: yes
kill_signal:
description:
- Override default signal used to kill a running container.
type: str
kernel_memory:
description:
- "Kernel memory limit in format C(<number>[<unit>]). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte). Minimum is C(4M)."
- Omitting the unit defaults to bytes.
type: str
labels:
description:
- Dictionary of key value pairs.
type: dict
links:
description:
- List of name aliases for linked containers in the format C(container_name:alias).
- Setting this will force container to be restarted.
type: list
elements: str
log_driver:
description:
- Specify the logging driver. Docker uses C(json-file) by default.
- See L(here,https://docs.docker.com/config/containers/logging/configure/) for possible choices.
type: str
log_options:
description:
- Dictionary of options specific to the chosen I(log_driver).
- See U(https://docs.docker.com/engine/admin/logging/overview/) for details.
type: dict
aliases:
- log_opt
mac_address:
description:
- Container MAC address (e.g. 92:d0:c6:0a:29:33).
type: str
memory:
description:
- "Memory limit in format C(<number>[<unit>]). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- Omitting the unit defaults to bytes.
- If I(container_default_behavior) is set to C(compatibility) (the default value), this
option has a default of C("0").
type: str
memory_reservation:
description:
- "Memory soft limit in format C(<number>[<unit>]). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- Omitting the unit defaults to bytes.
type: str
memory_swap:
description:
- "Total memory limit (memory + swap) in format C(<number>[<unit>]).
Number is a positive integer. Unit can be C(B) (byte), C(K) (kibibyte, 1024B),
C(M) (mebibyte), C(G) (gibibyte), C(T) (tebibyte), or C(P) (pebibyte)."
- Omitting the unit defaults to bytes.
type: str
memory_swappiness:
description:
- Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100.
- If not set, the value will remain the same if the container exists and will be inherited
from the host machine if it is (re-)created.
type: int
mounts:
version_added: "2.9"
type: list
elements: dict
description:
- Specification for mounts to be added to the container. More powerful alternative to I(volumes).
suboptions:
target:
description:
- Path inside the container.
type: str
required: true
source:
description:
- Mount source (e.g. a volume name or a host path).
type: str
type:
description:
- The mount type.
- Note that C(npipe) is only supported by Docker for Windows.
type: str
choices:
- bind
- npipe
- tmpfs
- volume
default: volume
read_only:
description:
- Whether the mount should be read-only.
type: bool
consistency:
description:
- The consistency requirement for the mount.
type: str
choices:
- cached
- consistent
- default
- delegated
propagation:
description:
- Propagation mode. Only valid for the C(bind) type.
type: str
choices:
- private
- rprivate
- shared
- rshared
- slave
- rslave
no_copy:
description:
- False if the volume should be populated with the data from the target. Only valid for the C(volume) type.
- The default value is C(false).
type: bool
labels:
description:
- User-defined name and labels for the volume. Only valid for the C(volume) type.
type: dict
volume_driver:
description:
- Specify the volume driver. Only valid for the C(volume) type.
- See L(here,https://docs.docker.com/storage/volumes/#use-a-volume-driver) for details.
type: str
volume_options:
description:
- Dictionary of options specific to the chosen volume_driver. See
L(here,https://docs.docker.com/storage/volumes/#use-a-volume-driver) for details.
type: dict
tmpfs_size:
description:
- "The size for the tmpfs mount in bytes in format <number>[<unit>]."
- "Number is a positive integer. Unit can be one of C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- "Omitting the unit defaults to bytes."
type: str
tmpfs_mode:
description:
- The permission mode for the tmpfs mount.
type: str
name:
description:
- Assign a name to a new container or match an existing container.
- When identifying an existing container name may be a name or a long or short container ID.
type: str
required: yes
network_mode:
description:
- Connect the container to a network. Choices are C(bridge), C(host), C(none), C(container:<name|id>), C(<network_name>) or C(default).
- "*Note* that from Ansible 2.14 on, if I(networks_cli_compatible) is C(true) and I(networks) contains at least one network,
the default value for I(network_mode) will be the name of the first network in the I(networks) list. You can prevent this
by explicitly specifying a value for I(network_mode), like the default value C(default) which will be used by Docker if
I(network_mode) is not specified."
type: str
userns_mode:
description:
- Set the user namespace mode for the container. Currently, the only valid values are C(host) and the empty string.
type: str
version_added: "2.5"
networks:
description:
- List of networks the container belongs to.
- For examples of the data structure and usage see EXAMPLES below.
- To remove a container from one or more networks, use the I(purge_networks) option.
- Note that as opposed to C(docker run ...), M(docker_container) does not remove the default
network if I(networks) is specified. You need to explicitly use I(purge_networks) to enforce
the removal of the default network (and all other networks not explicitly mentioned in I(networks)).
Alternatively, use the I(networks_cli_compatible) option, which will be enabled by default from Ansible 2.12 on.
type: list
elements: dict
suboptions:
name:
description:
- The network's name.
type: str
required: yes
ipv4_address:
description:
- The container's IPv4 address in this network.
type: str
ipv6_address:
description:
- The container's IPv6 address in this network.
type: str
links:
description:
- A list of containers to link to.
type: list
elements: str
aliases:
description:
- List of aliases for this container in this network. These names
can be used in the network to reach this container.
type: list
elements: str
version_added: "2.2"
networks_cli_compatible:
description:
- "When networks are provided to the module via the I(networks) option, the module
behaves differently than C(docker run --network): C(docker run --network other)
will create a container with network C(other) attached, but the default network
not attached. This module with I(networks: {name: other}) will create a container
with both C(default) and C(other) attached. If I(purge_networks) is set to C(yes),
the C(default) network will be removed afterwards."
- "If I(networks_cli_compatible) is set to C(yes), this module will behave as
C(docker run --network) and will *not* add the default network if I(networks) is
specified. If I(networks) is not specified, the default network will be attached."
- "*Note* that docker CLI also sets I(network_mode) to the name of the first network
added if C(--network) is specified. For more compatibility with docker CLI, you
explicitly have to set I(network_mode) to the name of the first network you're
adding. This behavior will change for Ansible 2.14: then I(network_mode) will
automatically be set to the first network name in I(networks) if I(network_mode)
is not specified, I(networks) has at least one entry and I(networks_cli_compatible)
is C(true)."
- Current value is C(no). A new default of C(yes) will be set in Ansible 2.12.
type: bool
version_added: "2.8"
oom_killer:
description:
- Whether or not to disable OOM Killer for the container.
type: bool
oom_score_adj:
description:
- An integer value containing the score given to the container in order to tune
OOM killer preferences.
type: int
version_added: "2.2"
output_logs:
description:
- If set to true, output of the container command will be printed.
- Only effective when I(log_driver) is set to C(json-file) or C(journald).
type: bool
default: no
version_added: "2.7"
paused:
description:
- Use with the started state to pause running processes inside the container.
- If I(container_default_behavior) is set to C(compatibility) (the default value), this
option has a default of C(no).
type: bool
pid_mode:
description:
- Set the PID namespace mode for the container.
- Note that Docker SDK for Python < 2.0 only supports C(host). Newer versions of the
Docker SDK for Python (docker) allow all values supported by the Docker daemon.
type: str
pids_limit:
description:
- Set PIDs limit for the container. It accepts an integer value.
- Set C(-1) for unlimited PIDs.
type: int
version_added: "2.8"
privileged:
description:
- Give extended privileges to the container.
- If I(container_default_behavior) is set to C(compatibility) (the default value), this
option has a default of C(no).
type: bool
published_ports:
description:
- List of ports to publish from the container to the host.
- "Use docker CLI syntax: C(8000), C(9000:8000), or C(0.0.0.0:9000:8000), where 8000 is a
container port, 9000 is a host port, and 0.0.0.0 is a host interface."
- Port ranges can be used for source and destination ports. If two ranges with
different lengths are specified, the shorter range will be used.
- "Bind addresses must be either IPv4 or IPv6 addresses. Hostnames are *not* allowed. This
is different from the C(docker) command line utility. Use the L(dig lookup,../lookup/dig.html)
to resolve hostnames."
- A value of C(all) will publish all exposed container ports to random host ports, ignoring
any other mappings.
- If I(networks) parameter is provided, will inspect each network to see if there exists
a bridge network with optional parameter C(com.docker.network.bridge.host_binding_ipv4).
If such a network is found, then published ports where no host IP address is specified
will be bound to the host IP pointed to by C(com.docker.network.bridge.host_binding_ipv4).
Note that the first bridge network with a C(com.docker.network.bridge.host_binding_ipv4)
value encountered in the list of I(networks) is the one that will be used.
type: list
elements: str
aliases:
- ports
pull:
description:
- If true, always pull the latest version of an image. Otherwise, will only pull an image
when missing.
- "*Note:* images are only pulled when specified by name. If the image is specified
as an image ID (hash), it cannot be pulled."
type: bool
default: no
purge_networks:
description:
- Remove the container from ALL networks not included in the I(networks) parameter.
- Any default networks such as C(bridge), if not found in I(networks), will be removed as well.
type: bool
default: no
version_added: "2.2"
read_only:
description:
- Mount the container's root file system as read-only.
- If I(container_default_behavior) is set to C(compatibility) (the default value), this
option has a default of C(no).
type: bool
recreate:
description:
- Use with present and started states to force the re-creation of an existing container.
type: bool
default: no
restart:
description:
- Use with started state to force a matching container to be stopped and restarted.
type: bool
default: no
restart_policy:
description:
- Container restart policy.
- Place quotes around C(no) option.
type: str
choices:
- 'no'
- 'on-failure'
- 'always'
- 'unless-stopped'
restart_retries:
description:
- Use with restart policy to control maximum number of restart attempts.
type: int
runtime:
description:
- Runtime to use for the container.
type: str
version_added: "2.8"
shm_size:
description:
- "Size of C(/dev/shm) in format C(<number>[<unit>]). Number is positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- Omitting the unit defaults to bytes. If you omit the size entirely, Docker daemon uses C(64M).
type: str
security_opts:
description:
- List of security options in the form of C("label:user:User").
type: list
elements: str
state:
description:
- 'C(absent) - A container matching the specified name will be stopped and removed. Use I(force_kill) to kill the container
rather than stopping it. Use I(keep_volumes) to retain volumes associated with the removed container.'
- 'C(present) - Asserts the existence of a container matching the name and any provided configuration parameters. If no
container matches the name, a container will be created. If a container matches the name but the provided configuration
does not match, the container will be updated, if it can be. If it cannot be updated, it will be removed and re-created
with the requested config.'
- 'C(started) - Asserts that the container is first C(present), and then if the container is not running moves it to a running
state. Use I(restart) to force a matching container to be stopped and restarted.'
- 'C(stopped) - Asserts that the container is first C(present), and then if the container is running moves it to a stopped
state.'
- To control what will be taken into account when comparing configuration, see the I(comparisons) option. To prevent the
image version from being taken into account, you can also use the I(ignore_image) option.
- Use the I(recreate) option to always force re-creation of a matching container, even if it is running.
- If the container should be killed instead of stopped in case it needs to be stopped for recreation, or because I(state) is
C(stopped), please use the I(force_kill) option. Use I(keep_volumes) to retain volumes associated with a removed container.
type: str
default: started
choices:
- absent
- present
- stopped
- started
stop_signal:
description:
- Override default signal used to stop the container.
type: str
stop_timeout:
description:
- Number of seconds to wait for the container to stop before sending C(SIGKILL).
When the container is created by this module, its C(StopTimeout) configuration
will be set to this value.
- When the container is stopped, will be used as a timeout for stopping the
container. In case the container has a custom C(StopTimeout) configuration,
the behavior depends on the version of the docker daemon. New versions of
the docker daemon will always use the container's configured C(StopTimeout)
value if it has been configured.
type: int
trust_image_content:
description:
- If C(yes), skip image verification.
- The option has never been used by the module. It will be removed in Ansible 2.14.
type: bool
default: no
tmpfs:
description:
- Mount a tmpfs directory.
type: list
elements: str
version_added: 2.4
tty:
description:
- Allocate a pseudo-TTY.
- If I(container_default_behavior) is set to C(compatibility) (the default value), this
option has a default of C(no).
type: bool
ulimits:
description:
- "List of ulimit options. A ulimit is specified as C(nofile:262144:262144)."
type: list
elements: str
sysctls:
description:
- Dictionary of key,value pairs.
type: dict
version_added: 2.4
user:
description:
- Sets the username or UID used and optionally the groupname or GID for the specified command.
- "Can be of the forms C(user), C(user:group), C(uid), C(uid:gid), C(user:gid) or C(uid:group)."
type: str
uts:
description:
- Set the UTS namespace mode for the container.
type: str
volumes:
description:
- List of volumes to mount within the container.
- "Use docker CLI-style syntax: C(/host:/container[:mode])"
- "Mount modes can be a comma-separated list of various modes such as C(ro), C(rw), C(consistent),
C(delegated), C(cached), C(rprivate), C(private), C(rshared), C(shared), C(rslave), C(slave), and
C(nocopy). Note that the docker daemon might not support all modes and combinations of such modes."
- SELinux hosts can additionally use C(z) or C(Z) to use a shared or private label for the volume.
- "Note that Ansible 2.7 and earlier only supported one mode, which had to be one of C(ro), C(rw),
C(z), and C(Z)."
type: list
elements: str
volume_driver:
description:
- The container volume driver.
type: str
volumes_from:
description:
- List of container names or IDs to get volumes from.
type: list
elements: str
working_dir:
description:
- Path to the working directory.
type: str
version_added: "2.4"
extends_documentation_fragment:
- docker
- docker.docker_py_1_documentation
author:
- "Cove Schneider (@cove)"
- "Joshua Conner (@joshuaconner)"
- "Pavel Antonov (@softzilla)"
- "Thomas Steinbach (@ThomasSteinbach)"
- "Philippe Jandot (@zfil)"
- "Daan Oosterveld (@dusdanig)"
- "Chris Houseknecht (@chouseknecht)"
- "Kassian Sun (@kassiansun)"
- "Felix Fontein (@felixfontein)"
requirements:
- "L(Docker SDK for Python,https://docker-py.readthedocs.io/en/stable/) >= 1.8.0 (use L(docker-py,https://pypi.org/project/docker-py/) for Python 2.6)"
- "Docker API >= 1.20"
'''
EXAMPLES = '''
- name: Create a data container
docker_container:
name: mydata
image: busybox
volumes:
- /data
- name: Re-create a redis container
docker_container:
name: myredis
image: redis
command: redis-server --appendonly yes
state: present
recreate: yes
exposed_ports:
- 6379
volumes_from:
- mydata
- name: Restart a container
docker_container:
name: myapplication
image: someuser/appimage
state: started
restart: yes
links:
- "myredis:aliasedredis"
devices:
- "/dev/sda:/dev/xvda:rwm"
ports:
- "8080:9000"
- "127.0.0.1:8081:9001/udp"
env:
SECRET_KEY: "ssssh"
# Values which might be parsed as numbers, booleans or other types by the YAML parser need to be quoted
BOOLEAN_KEY: "yes"
- name: Container present
docker_container:
name: mycontainer
state: present
image: ubuntu:14.04
command: sleep infinity
- name: Stop a container
docker_container:
name: mycontainer
state: stopped
- name: Start 4 load-balanced containers
docker_container:
name: "container{{ item }}"
recreate: yes
image: someuser/anotherappimage
command: sleep 1d
with_sequence: count=4
- name: remove container
docker_container:
name: ohno
state: absent
- name: Syslogging output
docker_container:
name: myservice
image: busybox
log_driver: syslog
log_options:
syslog-address: tcp://my-syslog-server:514
syslog-facility: daemon
# NOTE: in Docker 1.13+ the "syslog-tag" option was renamed to "tag";
# for older docker installs, use "syslog-tag" instead
tag: myservice
- name: Create db container and connect to network
docker_container:
name: db_test
image: "postgres:latest"
networks:
- name: "{{ docker_network_name }}"
- name: Start container, connect to network and link
docker_container:
name: sleeper
image: ubuntu:14.04
networks:
- name: TestingNet
ipv4_address: "172.1.1.100"
aliases:
- sleepyzz
links:
- db_test:db
- name: TestingNet2
- name: Start a container with a command
docker_container:
name: sleepy
image: ubuntu:14.04
command: ["sleep", "infinity"]
- name: Add container to networks
docker_container:
name: sleepy
networks:
- name: TestingNet
ipv4_address: 172.1.1.18
links:
- sleeper
- name: TestingNet2
ipv4_address: 172.1.10.20
- name: Update network with aliases
docker_container:
name: sleepy
networks:
- name: TestingNet
aliases:
- sleepyz
- zzzz
- name: Remove container from one network
docker_container:
name: sleepy
networks:
- name: TestingNet2
purge_networks: yes
- name: Remove container from all networks
docker_container:
name: sleepy
purge_networks: yes
- name: Start a container and use an env file
docker_container:
name: agent
image: jenkinsci/ssh-slave
env_file: /var/tmp/jenkins/agent.env
- name: Create a container with limited capabilities
docker_container:
name: sleepy
image: ubuntu:16.04
command: sleep infinity
capabilities:
- sys_time
cap_drop:
- all
- name: Finer container restart/update control
docker_container:
name: test
image: ubuntu:18.04
env:
arg1: "true"
arg2: "whatever"
volumes:
- /tmp:/tmp
comparisons:
image: ignore # don't restart containers with older versions of the image
env: strict # we want precisely this environment
volumes: allow_more_present # if there are more volumes, that's ok, as long as `/tmp:/tmp` is there
- name: Finer container restart/update control II
docker_container:
name: test
image: ubuntu:18.04
env:
arg1: "true"
arg2: "whatever"
comparisons:
'*': ignore # by default, ignore *all* options (including image)
env: strict # except for environment variables; there, we want to be strict
- name: Start container with healthstatus
docker_container:
name: nginx-proxy
image: nginx:1.13
state: started
healthcheck:
# Check if nginx server is healthy by curl'ing the server.
# If this fails or timeouts, the healthcheck fails.
test: ["CMD", "curl", "--fail", "http://nginx.host.com"]
interval: 1m30s
timeout: 10s
retries: 3
start_period: 30s
- name: Remove healthcheck from container
docker_container:
name: nginx-proxy
image: nginx:1.13
state: started
healthcheck:
# The "NONE" check needs to be specified
test: ["NONE"]
- name: start container with block device read limit
docker_container:
name: test
image: ubuntu:18.04
state: started
device_read_bps:
# Limit read rate for /dev/sda to 20 mebibytes per second
- path: /dev/sda
rate: 20M
device_read_iops:
# Limit read rate for /dev/sdb to 300 IO per second
- path: /dev/sdb
rate: 300
'''
RETURN = '''
container:
description:
- Facts representing the current state of the container. Matches the docker inspection output.
- Note that facts are part of the registered vars since Ansible 2.8. For compatibility reasons, the facts
are also accessible directly as C(docker_container). Note that the returned fact will be removed in Ansible 2.12.
- Before 2.3 this was C(ansible_docker_container) but was renamed in 2.3 to C(docker_container) due to
conflicts with the connection plugin.
- Empty if I(state) is C(absent)
- If I(detach) is C(false), will include C(Output) attribute containing any output from container run.
returned: always
type: dict
sample: '{
"AppArmorProfile": "",
"Args": [],
"Config": {
"AttachStderr": false,
"AttachStdin": false,
"AttachStdout": false,
"Cmd": [
"/usr/bin/supervisord"
],
"Domainname": "",
"Entrypoint": null,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"ExposedPorts": {
"443/tcp": {},
"80/tcp": {}
},
"Hostname": "8e47bf643eb9",
"Image": "lnmp_nginx:v1",
"Labels": {},
"OnBuild": null,
"OpenStdin": false,
"StdinOnce": false,
"Tty": false,
"User": "",
"Volumes": {
"/tmp/lnmp/nginx-sites/logs/": {}
},
...
}'
'''
import os
import re
import shlex
import traceback
from distutils.version import LooseVersion
from ansible.module_utils.common.text.formatters import human_to_bytes
from ansible.module_utils.docker.common import (
AnsibleDockerClient,
DifferenceTracker,
DockerBaseClass,
compare_generic,
is_image_name_id,
sanitize_result,
clean_dict_booleans_for_docker_api,
omit_none_from_dict,
parse_healthcheck,
DOCKER_COMMON_ARGS,
RequestException,
)
from ansible.module_utils.six import string_types
try:
from docker import utils
from ansible.module_utils.docker.common import docker_version
if LooseVersion(docker_version) >= LooseVersion('1.10.0'):
from docker.types import Ulimit, LogConfig
from docker import types as docker_types
else:
from docker.utils.types import Ulimit, LogConfig
from docker.errors import DockerException, APIError, NotFound
except Exception:
# missing Docker SDK for Python handled in ansible.module_utils.docker.common
pass
REQUIRES_CONVERSION_TO_BYTES = [
'kernel_memory',
'memory',
'memory_reservation',
'memory_swap',
'shm_size'
]
def is_volume_permissions(mode):
for part in mode.split(','):
if part not in ('rw', 'ro', 'z', 'Z', 'consistent', 'delegated', 'cached', 'rprivate', 'private', 'rshared', 'shared', 'rslave', 'slave', 'nocopy'):
return False
return True
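# Illustrative checks (hypothetical mode strings):
#
# is_volume_permissions('rw,z')      # -> True
# is_volume_permissions('rw,bogus')  # -> False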
def parse_port_range(range_or_port, client):
'''
Parses a string containing either a single port or a range of ports.
Returns a list of integers for each port in the list.
'''
if '-' in range_or_port:
try:
start, end = [int(port) for port in range_or_port.split('-')]
except Exception:
client.fail('Invalid port range: "{0}"'.format(range_or_port))
if end < start:
client.fail('Invalid port range: "{0}"'.format(range_or_port))
return list(range(start, end + 1))
else:
try:
return [int(range_or_port)]
except Exception:
client.fail('Invalid port: "{0}"'.format(range_or_port))
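# Examples (client is any object providing fail(); hypothetical inputs):
#
# parse_port_range('8000', client)       # -> [8000]
# parse_port_range('8000-8002', client)  # -> [8000, 8001, 8002]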
def split_colon_ipv6(text, client):
'''
Split string by ':', while keeping IPv6 addresses in square brackets in one component.
'''
if '[' not in text:
return text.split(':')
start = 0
result = []
while start < len(text):
i = text.find('[', start)
if i < 0:
result.extend(text[start:].split(':'))
break
j = text.find(']', i)
if j < 0:
client.fail('Cannot find closing "]" in input "{0}" for opening "[" at index {1}!'.format(text, i + 1))
result.extend(text[start:i].split(':'))
k = text.find(':', j)
if k < 0:
result[-1] += text[i:]
start = len(text)
else:
result[-1] += text[i:k]
if k == len(text):
result.append('')
break
start = k + 1
return result
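# Illustrative examples (hypothetical inputs):
#
# split_colon_ipv6('0.0.0.0:9000:8000', client)        # -> ['0.0.0.0', '9000', '8000']
# split_colon_ipv6('[2001:db8::1]:9000:8000', client)  # -> ['[2001:db8::1]', '9000', '8000']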
class TaskParameters(DockerBaseClass):
'''
Access and parse module parameters
'''
def __init__(self, client):
super(TaskParameters, self).__init__()
self.client = client
self.auto_remove = None
self.blkio_weight = None
self.capabilities = None
self.cap_drop = None
self.cleanup = None
self.command = None
self.cpu_period = None
self.cpu_quota = None
self.cpus = None
self.cpuset_cpus = None
self.cpuset_mems = None
self.cpu_shares = None
self.detach = None
self.debug = None
self.devices = None
self.device_read_bps = None
self.device_write_bps = None
self.device_read_iops = None
self.device_write_iops = None
self.dns_servers = None
self.dns_opts = None
self.dns_search_domains = None
self.domainname = None
self.env = None
self.env_file = None
self.entrypoint = None
self.etc_hosts = None
self.exposed_ports = None
self.force_kill = None
self.groups = None
self.healthcheck = None
self.hostname = None
self.ignore_image = None
self.image = None
self.init = None
self.interactive = None
self.ipc_mode = None
self.keep_volumes = None
self.kernel_memory = None
self.kill_signal = None
self.labels = None
self.links = None
self.log_driver = None
self.output_logs = None
self.log_options = None
self.mac_address = None
self.memory = None
self.memory_reservation = None
self.memory_swap = None
self.memory_swappiness = None
self.mounts = None
self.name = None
self.network_mode = None
self.userns_mode = None
self.networks = None
self.networks_cli_compatible = None
self.oom_killer = None
self.oom_score_adj = None
self.paused = None
self.pid_mode = None
self.pids_limit = None
self.privileged = None
self.purge_networks = None
self.pull = None
self.read_only = None
self.recreate = None
self.restart = None
self.restart_retries = None
self.restart_policy = None
self.runtime = None
self.shm_size = None
self.security_opts = None
self.state = None
self.stop_signal = None
self.stop_timeout = None
self.tmpfs = None
self.trust_image_content = None
self.tty = None
self.user = None
self.uts = None
self.volumes = None
self.volume_binds = dict()
self.volumes_from = None
self.volume_driver = None
self.working_dir = None
for key, value in client.module.params.items():
setattr(self, key, value)
self.comparisons = client.comparisons
# If state is 'absent', parameters do not have to be parsed or interpreted.
# Only the container's name is needed.
if self.state == 'absent':
return
if self.cpus is not None:
self.cpus = int(round(self.cpus * 1E9))
if self.groups:
# In case integers are passed as groups, we need to convert them to
# strings as docker internally treats them as strings.
self.groups = [str(g) for g in self.groups]
for param_name in REQUIRES_CONVERSION_TO_BYTES:
if client.module.params.get(param_name):
try:
setattr(self, param_name, human_to_bytes(client.module.params.get(param_name)))
except ValueError as exc:
self.fail("Failed to convert %s to bytes: %s" % (param_name, exc))
self.publish_all_ports = False
self.published_ports = self._parse_publish_ports()
if self.published_ports in ('all', 'ALL'):
self.publish_all_ports = True
self.published_ports = None
self.ports = self._parse_exposed_ports(self.published_ports)
self.log("expose ports:")
self.log(self.ports, pretty_print=True)
self.links = self._parse_links(self.links)
if self.volumes:
self.volumes = self._expand_host_paths()
self.tmpfs = self._parse_tmpfs()
self.env = self._get_environment()
self.ulimits = self._parse_ulimits()
self.sysctls = self._parse_sysctls()
self.log_config = self._parse_log_config()
try:
self.healthcheck, self.disable_healthcheck = parse_healthcheck(self.healthcheck)
except ValueError as e:
self.fail(str(e))
self.exp_links = None
self.volume_binds = self._get_volume_binds(self.volumes)
self.pid_mode = self._replace_container_names(self.pid_mode)
self.ipc_mode = self._replace_container_names(self.ipc_mode)
self.network_mode = self._replace_container_names(self.network_mode)
self.log("volumes:")
self.log(self.volumes, pretty_print=True)
self.log("volume binds:")
self.log(self.volume_binds, pretty_print=True)
if self.networks:
for network in self.networks:
network['id'] = self._get_network_id(network['name'])
if not network['id']:
self.fail("Parameter error: network named %s could not be found. Does it exist?" % network['name'])
if network.get('links'):
network['links'] = self._parse_links(network['links'])
if self.mac_address:
# Ensure the MAC address uses colons instead of hyphens for later comparison
self.mac_address = self.mac_address.replace('-', ':')
if self.entrypoint:
# convert from list to str.
self.entrypoint = ' '.join([str(x) for x in self.entrypoint])
if self.command:
# convert from list to str
if isinstance(self.command, list):
self.command = ' '.join([str(x) for x in self.command])
self.mounts_opt, self.expected_mounts = self._process_mounts()
self._check_mount_target_collisions()
for param_name in ["device_read_bps", "device_write_bps"]:
if client.module.params.get(param_name):
self._process_rate_bps(option=param_name)
for param_name in ["device_read_iops", "device_write_iops"]:
if client.module.params.get(param_name):
self._process_rate_iops(option=param_name)
def fail(self, msg):
self.client.fail(msg)
@property
def update_parameters(self):
'''
Returns parameters used to update a container
'''
update_parameters = dict(
blkio_weight='blkio_weight',
cpu_period='cpu_period',
cpu_quota='cpu_quota',
cpu_shares='cpu_shares',
cpuset_cpus='cpuset_cpus',
cpuset_mems='cpuset_mems',
mem_limit='memory',
mem_reservation='memory_reservation',
memswap_limit='memory_swap',
kernel_memory='kernel_memory',
)
result = dict()
for key, value in update_parameters.items():
if getattr(self, value, None) is not None:
if self.client.option_minimal_versions[value]['supported']:
result[key] = getattr(self, value)
return result
@property
def create_parameters(self):
'''
Returns parameters used to create a container
'''
create_params = dict(
command='command',
domainname='domainname',
hostname='hostname',
user='user',
detach='detach',
stdin_open='interactive',
tty='tty',
ports='ports',
environment='env',
name='name',
entrypoint='entrypoint',
mac_address='mac_address',
labels='labels',
stop_signal='stop_signal',
working_dir='working_dir',
stop_timeout='stop_timeout',
healthcheck='healthcheck',
)
if self.client.docker_py_version < LooseVersion('3.0'):
            # cpu_shares and volume_driver moved to create_host_config in Docker SDK for Python 3.0+
create_params['cpu_shares'] = 'cpu_shares'
create_params['volume_driver'] = 'volume_driver'
result = dict(
host_config=self._host_config(),
volumes=self._get_mounts(),
)
for key, value in create_params.items():
if getattr(self, value, None) is not None:
if self.client.option_minimal_versions[value]['supported']:
result[key] = getattr(self, value)
if self.networks_cli_compatible and self.networks:
network = self.networks[0]
params = dict()
for para in ('ipv4_address', 'ipv6_address', 'links', 'aliases'):
if network.get(para):
params[para] = network[para]
network_config = dict()
network_config[network['name']] = self.client.create_endpoint_config(**params)
result['networking_config'] = self.client.create_networking_config(network_config)
return result
def _expand_host_paths(self):
new_vols = []
for vol in self.volumes:
if ':' in vol:
if len(vol.split(':')) == 3:
host, container, mode = vol.split(':')
if not is_volume_permissions(mode):
self.fail('Found invalid volumes mode: {0}'.format(mode))
if re.match(r'[.~]', host):
host = os.path.abspath(os.path.expanduser(host))
new_vols.append("%s:%s:%s" % (host, container, mode))
continue
elif len(vol.split(':')) == 2:
parts = vol.split(':')
if not is_volume_permissions(parts[1]) and re.match(r'[.~]', parts[0]):
host = os.path.abspath(os.path.expanduser(parts[0]))
new_vols.append("%s:%s:rw" % (host, parts[1]))
continue
new_vols.append(vol)
return new_vols
def _get_mounts(self):
'''
Return a list of container mounts.
:return:
'''
result = []
if self.volumes:
for vol in self.volumes:
if ':' in vol:
if len(vol.split(':')) == 3:
dummy, container, dummy = vol.split(':')
result.append(container)
continue
if len(vol.split(':')) == 2:
parts = vol.split(':')
if not is_volume_permissions(parts[1]):
result.append(parts[1])
continue
result.append(vol)
self.log("mounts:")
self.log(result, pretty_print=True)
return result
def _host_config(self):
'''
Returns parameters used to create a HostConfig object
'''
host_config_params = dict(
port_bindings='published_ports',
publish_all_ports='publish_all_ports',
links='links',
privileged='privileged',
dns='dns_servers',
dns_opt='dns_opts',
dns_search='dns_search_domains',
binds='volume_binds',
volumes_from='volumes_from',
network_mode='network_mode',
userns_mode='userns_mode',
cap_add='capabilities',
cap_drop='cap_drop',
extra_hosts='etc_hosts',
read_only='read_only',
ipc_mode='ipc_mode',
security_opt='security_opts',
ulimits='ulimits',
sysctls='sysctls',
log_config='log_config',
mem_limit='memory',
memswap_limit='memory_swap',
mem_swappiness='memory_swappiness',
oom_score_adj='oom_score_adj',
oom_kill_disable='oom_killer',
shm_size='shm_size',
group_add='groups',
devices='devices',
pid_mode='pid_mode',
tmpfs='tmpfs',
init='init',
uts_mode='uts',
runtime='runtime',
auto_remove='auto_remove',
device_read_bps='device_read_bps',
device_write_bps='device_write_bps',
device_read_iops='device_read_iops',
device_write_iops='device_write_iops',
pids_limit='pids_limit',
mounts='mounts',
nano_cpus='cpus',
)
if self.client.docker_py_version >= LooseVersion('1.9') and self.client.docker_api_version >= LooseVersion('1.22'):
# blkio_weight can always be updated, but can only be set on creation
# when Docker SDK for Python and Docker API are new enough
host_config_params['blkio_weight'] = 'blkio_weight'
if self.client.docker_py_version >= LooseVersion('3.0'):
            # cpu_shares and volume_driver moved to create_host_config in Docker SDK for Python 3.0+
host_config_params['cpu_shares'] = 'cpu_shares'
host_config_params['volume_driver'] = 'volume_driver'
params = dict()
for key, value in host_config_params.items():
if getattr(self, value, None) is not None:
if self.client.option_minimal_versions[value]['supported']:
params[key] = getattr(self, value)
if self.restart_policy:
params['restart_policy'] = dict(Name=self.restart_policy,
MaximumRetryCount=self.restart_retries)
if 'mounts' in params:
params['mounts'] = self.mounts_opt
return self.client.create_host_config(**params)
@property
def default_host_ip(self):
ip = '0.0.0.0'
if not self.networks:
return ip
for net in self.networks:
if net.get('name'):
try:
network = self.client.inspect_network(net['name'])
if network.get('Driver') == 'bridge' and \
network.get('Options', {}).get('com.docker.network.bridge.host_binding_ipv4'):
ip = network['Options']['com.docker.network.bridge.host_binding_ipv4']
break
except NotFound as nfe:
self.client.fail(
"Cannot inspect the network '{0}' to determine the default IP: {1}".format(net['name'], nfe),
exception=traceback.format_exc()
)
return ip
def _parse_publish_ports(self):
'''
Parse ports from docker CLI syntax
'''
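        # Illustrative sketch of the result shape (hypothetical values, assuming
        # the default host IP resolves to 0.0.0.0):
        #   ['8080:80']                -> {80: ('0.0.0.0', 8080)}
        #   ['127.0.0.1:8443:443/udp'] -> {'443/udp': ('127.0.0.1', 8443)}
        # Keys carry a '/protocol' suffix only when one was given explicitly.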
if self.published_ports is None:
return None
if 'all' in self.published_ports:
return 'all'
default_ip = self.default_host_ip
binds = {}
for port in self.published_ports:
parts = split_colon_ipv6(str(port), self.client)
container_port = parts[-1]
protocol = ''
if '/' in container_port:
container_port, protocol = parts[-1].split('/')
container_ports = parse_port_range(container_port, self.client)
p_len = len(parts)
if p_len == 1:
port_binds = len(container_ports) * [(default_ip,)]
elif p_len == 2:
port_binds = [(default_ip, port) for port in parse_port_range(parts[0], self.client)]
elif p_len == 3:
# We only allow IPv4 and IPv6 addresses for the bind address
ipaddr = parts[0]
if not re.match(r'^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$', parts[0]) and not re.match(r'^\[[0-9a-fA-F:]+\]$', ipaddr):
self.fail(('Bind addresses for published ports must be IPv4 or IPv6 addresses, not hostnames. '
'Use the dig lookup to resolve hostnames. (Found hostname: {0})').format(ipaddr))
if re.match(r'^\[[0-9a-fA-F:]+\]$', ipaddr):
ipaddr = ipaddr[1:-1]
if parts[1]:
port_binds = [(ipaddr, port) for port in parse_port_range(parts[1], self.client)]
else:
port_binds = len(container_ports) * [(ipaddr,)]
for bind, container_port in zip(port_binds, container_ports):
idx = '{0}/{1}'.format(container_port, protocol) if protocol else container_port
if idx in binds:
old_bind = binds[idx]
if isinstance(old_bind, list):
old_bind.append(bind)
else:
binds[idx] = [old_bind, bind]
else:
binds[idx] = bind
return binds
def _get_volume_binds(self, volumes):
'''
Extract host bindings, if any, from list of volume mapping strings.
:return: dictionary of bind mappings
'''
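        # Illustrative example (hypothetical paths):
        #   ['/srv/data:/data:ro'] -> {'/srv/data': {'bind': '/data', 'mode': 'ro'}}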
result = dict()
if volumes:
for vol in volumes:
host = None
if ':' in vol:
parts = vol.split(':')
if len(parts) == 3:
host, container, mode = parts
if not is_volume_permissions(mode):
self.fail('Found invalid volumes mode: {0}'.format(mode))
elif len(parts) == 2:
if not is_volume_permissions(parts[1]):
host, container, mode = (vol.split(':') + ['rw'])
if host is not None:
result[host] = dict(
bind=container,
mode=mode
)
return result
def _parse_exposed_ports(self, published_ports):
'''
Parse exposed ports from docker CLI-style ports syntax.
'''
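        # Illustrative examples (hypothetical values); entries from exposed_ports
        # keep their port as a string, while published ports merged in below are
        # appended as integers:
        #   '9000/udp' -> ('9000', 'udp')
        #   '80'       -> ('80', 'tcp')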
exposed = []
if self.exposed_ports:
for port in self.exposed_ports:
port = str(port).strip()
protocol = 'tcp'
match = re.search(r'(/.+$)', port)
if match:
protocol = match.group(1).replace('/', '')
port = re.sub(r'/.+$', '', port)
exposed.append((port, protocol))
if published_ports:
# Any published port should also be exposed
for publish_port in published_ports:
match = False
if isinstance(publish_port, string_types) and '/' in publish_port:
port, protocol = publish_port.split('/')
port = int(port)
else:
protocol = 'tcp'
port = int(publish_port)
for exposed_port in exposed:
if exposed_port[1] != protocol:
continue
if isinstance(exposed_port[0], string_types) and '-' in exposed_port[0]:
start_port, end_port = exposed_port[0].split('-')
if int(start_port) <= port <= int(end_port):
match = True
elif exposed_port[0] == port:
match = True
if not match:
exposed.append((port, protocol))
return exposed
@staticmethod
def _parse_links(links):
'''
        Turn links into a list of (name, alias) tuples
'''
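        # Illustrative example (hypothetical names):
        #   ['db:database', 'cache'] -> [('db', 'database'), ('cache', 'cache')]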
if links is None:
return None
result = []
for link in links:
parsed_link = link.split(':', 1)
if len(parsed_link) == 2:
result.append((parsed_link[0], parsed_link[1]))
else:
result.append((parsed_link[0], parsed_link[0]))
return result
def _parse_ulimits(self):
'''
Turn ulimits into an array of Ulimit objects
'''
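        # Illustrative example (hypothetical limit):
        #   'nofile:1024:2048' -> Ulimit(name='nofile', soft=1024, hard=2048)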
if self.ulimits is None:
return None
results = []
for limit in self.ulimits:
limits = dict()
pieces = limit.split(':')
if len(pieces) >= 2:
limits['name'] = pieces[0]
limits['soft'] = int(pieces[1])
limits['hard'] = int(pieces[1])
if len(pieces) == 3:
limits['hard'] = int(pieces[2])
try:
results.append(Ulimit(**limits))
except ValueError as exc:
self.fail("Error parsing ulimits value %s - %s" % (limit, exc))
return results
def _parse_sysctls(self):
'''
        Return sysctls as a plain dict; no conversion is needed
'''
return self.sysctls
def _parse_log_config(self):
'''
Create a LogConfig object
'''
if self.log_driver is None:
return None
options = dict(
Type=self.log_driver,
Config=dict()
)
if self.log_options is not None:
options['Config'] = dict()
for k, v in self.log_options.items():
if not isinstance(v, string_types):
self.client.module.warn(
"Non-string value found for log_options option '%s'. The value is automatically converted to '%s'. "
"If this is not correct, or you want to avoid such warnings, please quote the value." % (k, str(v))
)
v = str(v)
self.log_options[k] = v
options['Config'][k] = v
try:
return LogConfig(**options)
except ValueError as exc:
self.fail('Error parsing logging options - %s' % (exc))
def _parse_tmpfs(self):
'''
        Turn the tmpfs list into a dict mapping mount points to their options
'''
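        # Illustrative example (hypothetical mounts):
        #   ['/run:rw,size=64m', '/tmp'] -> {'/run': 'rw,size=64m', '/tmp': ''}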
result = dict()
if self.tmpfs is None:
return result
for tmpfs_spec in self.tmpfs:
split_spec = tmpfs_spec.split(":", 1)
if len(split_spec) > 1:
result[split_spec[0]] = split_spec[1]
else:
result[split_spec[0]] = ""
return result
def _get_environment(self):
"""
If environment file is combined with explicit environment variables, the explicit environment variables
take precedence.
"""
final_env = {}
if self.env_file:
parsed_env_file = utils.parse_env_file(self.env_file)
for name, value in parsed_env_file.items():
final_env[name] = str(value)
if self.env:
for name, value in self.env.items():
if not isinstance(value, string_types):
self.fail("Non-string value found for env option. Ambiguous env options must be "
"wrapped in quotes to avoid them being interpreted. Key: %s" % (name, ))
final_env[name] = str(value)
return final_env
def _get_network_id(self, network_name):
network_id = None
try:
for network in self.client.networks(names=[network_name]):
if network['Name'] == network_name:
network_id = network['Id']
break
except Exception as exc:
self.fail("Error getting network id for %s - %s" % (network_name, str(exc)))
return network_id
def _process_mounts(self):
if self.mounts is None:
return None, None
mounts_list = []
mounts_expected = []
for mount in self.mounts:
target = mount['target']
datatype = mount['type']
mount_dict = dict(mount)
# Sanity checks (so we don't wait for docker-py to barf on input)
if mount_dict.get('source') is None and datatype != 'tmpfs':
self.client.fail('source must be specified for mount "{0}" of type "{1}"'.format(target, datatype))
mount_option_types = dict(
volume_driver='volume',
volume_options='volume',
propagation='bind',
no_copy='volume',
labels='volume',
tmpfs_size='tmpfs',
tmpfs_mode='tmpfs',
)
for option, req_datatype in mount_option_types.items():
if mount_dict.get(option) is not None and datatype != req_datatype:
self.client.fail('{0} cannot be specified for mount "{1}" of type "{2}" (needs type "{3}")'.format(option, target, datatype, req_datatype))
# Handle volume_driver and volume_options
volume_driver = mount_dict.pop('volume_driver')
volume_options = mount_dict.pop('volume_options')
if volume_driver:
if volume_options:
volume_options = clean_dict_booleans_for_docker_api(volume_options)
mount_dict['driver_config'] = docker_types.DriverConfig(name=volume_driver, options=volume_options)
if mount_dict['labels']:
mount_dict['labels'] = clean_dict_booleans_for_docker_api(mount_dict['labels'])
if mount_dict.get('tmpfs_size') is not None:
try:
mount_dict['tmpfs_size'] = human_to_bytes(mount_dict['tmpfs_size'])
except ValueError as exc:
self.fail('Failed to convert tmpfs_size of mount "{0}" to bytes: {1}'.format(target, exc))
if mount_dict.get('tmpfs_mode') is not None:
try:
mount_dict['tmpfs_mode'] = int(mount_dict['tmpfs_mode'], 8)
except Exception as dummy:
                    self.client.fail('tmpfs_mode of mount "{0}" is not an octal string!'.format(target))
# Fill expected mount dict
mount_expected = dict(mount)
mount_expected['tmpfs_size'] = mount_dict['tmpfs_size']
mount_expected['tmpfs_mode'] = mount_dict['tmpfs_mode']
# Add result to lists
mounts_list.append(docker_types.Mount(**mount_dict))
mounts_expected.append(omit_none_from_dict(mount_expected))
return mounts_list, mounts_expected
def _process_rate_bps(self, option):
"""
Format device_read_bps and device_write_bps option
"""
devices_list = []
for v in getattr(self, option):
device_dict = dict((x.title(), y) for x, y in v.items())
device_dict['Rate'] = human_to_bytes(device_dict['Rate'])
devices_list.append(device_dict)
setattr(self, option, devices_list)
def _process_rate_iops(self, option):
"""
Format device_read_iops and device_write_iops option
"""
devices_list = []
for v in getattr(self, option):
device_dict = dict((x.title(), y) for x, y in v.items())
devices_list.append(device_dict)
setattr(self, option, devices_list)
def _replace_container_names(self, mode):
"""
Parse IPC and PID modes. If they contain a container name, replace
with the container's ID.
"""
if mode is None or not mode.startswith('container:'):
return mode
container_name = mode[len('container:'):]
# Try to inspect container to see whether this is an ID or a
        # name (and in the latter case, retrieve its ID)
container = self.client.get_container(container_name)
if container is None:
# If we can't find the container, issue a warning and continue with
# what the user specified.
self.client.module.warn('Cannot find a container with name or ID "{0}"'.format(container_name))
return mode
return 'container:{0}'.format(container['Id'])
def _check_mount_target_collisions(self):
last = dict()
def f(t, name):
if t in last:
if name == last[t]:
self.client.fail('The mount point "{0}" appears twice in the {1} option'.format(t, name))
else:
self.client.fail('The mount point "{0}" appears both in the {1} and {2} option'.format(t, name, last[t]))
last[t] = name
if self.expected_mounts:
for t in [m['target'] for m in self.expected_mounts]:
f(t, 'mounts')
if self.volumes:
for v in self.volumes:
vs = v.split(':')
f(vs[0 if len(vs) == 1 else 1], 'volumes')
class Container(DockerBaseClass):
def __init__(self, container, parameters):
super(Container, self).__init__()
self.raw = container
self.Id = None
self.container = container
if container:
self.Id = container['Id']
self.Image = container['Image']
self.log(self.container, pretty_print=True)
self.parameters = parameters
self.parameters.expected_links = None
self.parameters.expected_ports = None
self.parameters.expected_exposed = None
self.parameters.expected_volumes = None
self.parameters.expected_ulimits = None
self.parameters.expected_sysctls = None
self.parameters.expected_etc_hosts = None
self.parameters.expected_env = None
self.parameters_map = dict()
self.parameters_map['expected_links'] = 'links'
self.parameters_map['expected_ports'] = 'expected_ports'
self.parameters_map['expected_exposed'] = 'exposed_ports'
self.parameters_map['expected_volumes'] = 'volumes'
self.parameters_map['expected_ulimits'] = 'ulimits'
self.parameters_map['expected_sysctls'] = 'sysctls'
self.parameters_map['expected_etc_hosts'] = 'etc_hosts'
self.parameters_map['expected_env'] = 'env'
self.parameters_map['expected_entrypoint'] = 'entrypoint'
self.parameters_map['expected_binds'] = 'volumes'
self.parameters_map['expected_cmd'] = 'command'
self.parameters_map['expected_devices'] = 'devices'
self.parameters_map['expected_healthcheck'] = 'healthcheck'
self.parameters_map['expected_mounts'] = 'mounts'
def fail(self, msg):
self.parameters.client.fail(msg)
@property
def exists(self):
        return bool(self.container)
@property
def running(self):
if self.container and self.container.get('State'):
if self.container['State'].get('Running') and not self.container['State'].get('Ghost', False):
return True
return False
@property
def paused(self):
if self.container and self.container.get('State'):
return self.container['State'].get('Paused', False)
return False
def _compare(self, a, b, compare):
'''
Compare values a and b as described in compare.
'''
return compare_generic(a, b, compare['comparison'], compare['type'])
def _decode_mounts(self, mounts):
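        # Normalize entries from 'docker inspect' HostConfig.Mounts into the
        # module's mount-option naming (type/source/target/...) so they can be
        # compared against the expected_mounts computed from the parameters.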
if not mounts:
return mounts
result = []
empty_dict = dict()
for mount in mounts:
res = dict()
res['type'] = mount.get('Type')
res['source'] = mount.get('Source')
res['target'] = mount.get('Target')
res['read_only'] = mount.get('ReadOnly', False) # golang's omitempty for bool returns None for False
res['consistency'] = mount.get('Consistency')
res['propagation'] = mount.get('BindOptions', empty_dict).get('Propagation')
res['no_copy'] = mount.get('VolumeOptions', empty_dict).get('NoCopy', False)
res['labels'] = mount.get('VolumeOptions', empty_dict).get('Labels', empty_dict)
res['volume_driver'] = mount.get('VolumeOptions', empty_dict).get('DriverConfig', empty_dict).get('Name')
res['volume_options'] = mount.get('VolumeOptions', empty_dict).get('DriverConfig', empty_dict).get('Options', empty_dict)
res['tmpfs_size'] = mount.get('TmpfsOptions', empty_dict).get('SizeBytes')
res['tmpfs_mode'] = mount.get('TmpfsOptions', empty_dict).get('Mode')
result.append(res)
return result
def has_different_configuration(self, image):
'''
        Diff parameters vs existing container config. Returns tuple: (has_differences, DifferenceTracker)
'''
self.log('Starting has_different_configuration')
self.parameters.expected_entrypoint = self._get_expected_entrypoint()
self.parameters.expected_links = self._get_expected_links()
self.parameters.expected_ports = self._get_expected_ports()
self.parameters.expected_exposed = self._get_expected_exposed(image)
self.parameters.expected_volumes = self._get_expected_volumes(image)
self.parameters.expected_binds = self._get_expected_binds(image)
self.parameters.expected_ulimits = self._get_expected_ulimits(self.parameters.ulimits)
self.parameters.expected_sysctls = self._get_expected_sysctls(self.parameters.sysctls)
self.parameters.expected_etc_hosts = self._convert_simple_dict_to_list('etc_hosts')
self.parameters.expected_env = self._get_expected_env(image)
self.parameters.expected_cmd = self._get_expected_cmd()
self.parameters.expected_devices = self._get_expected_devices()
self.parameters.expected_healthcheck = self._get_expected_healthcheck()
if not self.container.get('HostConfig'):
self.fail("has_config_diff: Error parsing container properties. HostConfig missing.")
if not self.container.get('Config'):
self.fail("has_config_diff: Error parsing container properties. Config missing.")
if not self.container.get('NetworkSettings'):
self.fail("has_config_diff: Error parsing container properties. NetworkSettings missing.")
host_config = self.container['HostConfig']
log_config = host_config.get('LogConfig', dict())
restart_policy = host_config.get('RestartPolicy', dict())
config = self.container['Config']
network = self.container['NetworkSettings']
# The previous version of the docker module ignored the detach state by
# assuming if the container was running, it must have been detached.
detach = not (config.get('AttachStderr') and config.get('AttachStdout'))
# "ExposedPorts": null returns None type & causes AttributeError - PR #5517
if config.get('ExposedPorts') is not None:
expected_exposed = [self._normalize_port(p) for p in config.get('ExposedPorts', dict()).keys()]
else:
expected_exposed = []
# Map parameters to container inspect results
config_mapping = dict(
expected_cmd=config.get('Cmd'),
domainname=config.get('Domainname'),
hostname=config.get('Hostname'),
user=config.get('User'),
detach=detach,
init=host_config.get('Init'),
interactive=config.get('OpenStdin'),
capabilities=host_config.get('CapAdd'),
cap_drop=host_config.get('CapDrop'),
expected_devices=host_config.get('Devices'),
dns_servers=host_config.get('Dns'),
dns_opts=host_config.get('DnsOptions'),
dns_search_domains=host_config.get('DnsSearch'),
expected_env=(config.get('Env') or []),
expected_entrypoint=config.get('Entrypoint'),
expected_etc_hosts=host_config['ExtraHosts'],
expected_exposed=expected_exposed,
groups=host_config.get('GroupAdd'),
ipc_mode=host_config.get("IpcMode"),
labels=config.get('Labels'),
expected_links=host_config.get('Links'),
mac_address=network.get('MacAddress'),
memory_swappiness=host_config.get('MemorySwappiness'),
network_mode=host_config.get('NetworkMode'),
userns_mode=host_config.get('UsernsMode'),
oom_killer=host_config.get('OomKillDisable'),
oom_score_adj=host_config.get('OomScoreAdj'),
pid_mode=host_config.get('PidMode'),
privileged=host_config.get('Privileged'),
expected_ports=host_config.get('PortBindings'),
read_only=host_config.get('ReadonlyRootfs'),
restart_policy=restart_policy.get('Name'),
runtime=host_config.get('Runtime'),
shm_size=host_config.get('ShmSize'),
security_opts=host_config.get("SecurityOpt"),
stop_signal=config.get("StopSignal"),
tmpfs=host_config.get('Tmpfs'),
tty=config.get('Tty'),
expected_ulimits=host_config.get('Ulimits'),
expected_sysctls=host_config.get('Sysctls'),
uts=host_config.get('UTSMode'),
expected_volumes=config.get('Volumes'),
expected_binds=host_config.get('Binds'),
volume_driver=host_config.get('VolumeDriver'),
volumes_from=host_config.get('VolumesFrom'),
working_dir=config.get('WorkingDir'),
publish_all_ports=host_config.get('PublishAllPorts'),
expected_healthcheck=config.get('Healthcheck'),
disable_healthcheck=(not config.get('Healthcheck') or config.get('Healthcheck').get('Test') == ['NONE']),
device_read_bps=host_config.get('BlkioDeviceReadBps'),
device_write_bps=host_config.get('BlkioDeviceWriteBps'),
device_read_iops=host_config.get('BlkioDeviceReadIOps'),
device_write_iops=host_config.get('BlkioDeviceWriteIOps'),
pids_limit=host_config.get('PidsLimit'),
# According to https://github.com/moby/moby/, support for HostConfig.Mounts
# has been included at least since v17.03.0-ce, which has API version 1.26.
# The previous tag, v1.9.1, has API version 1.21 and does not have
            # HostConfig.Mounts. It is unclear whether API 1.25 supports it.
expected_mounts=self._decode_mounts(host_config.get('Mounts')),
cpus=host_config.get('NanoCpus'),
)
# Options which don't make sense without their accompanying option
if self.parameters.restart_policy:
config_mapping['restart_retries'] = restart_policy.get('MaximumRetryCount')
if self.parameters.log_driver:
config_mapping['log_driver'] = log_config.get('Type')
config_mapping['log_options'] = log_config.get('Config')
if self.parameters.client.option_minimal_versions['auto_remove']['supported']:
# auto_remove is only supported in Docker SDK for Python >= 2.0.0; unfortunately
            # it has a default value, which is why we have to jump through hoops here
config_mapping['auto_remove'] = host_config.get('AutoRemove')
if self.parameters.client.option_minimal_versions['stop_timeout']['supported']:
# stop_timeout is only supported in Docker SDK for Python >= 2.1. Note that
# stop_timeout has a hybrid role, in that it used to be something only used
# for stopping containers, and is now also used as a container property.
# That's why it needs special handling here.
config_mapping['stop_timeout'] = config.get('StopTimeout')
if self.parameters.client.docker_api_version < LooseVersion('1.22'):
# For docker API < 1.22, update_container() is not supported. Thus
# we need to handle all limits which are usually handled by
# update_container() as configuration changes which require a container
# restart.
config_mapping.update(dict(
blkio_weight=host_config.get('BlkioWeight'),
cpu_period=host_config.get('CpuPeriod'),
cpu_quota=host_config.get('CpuQuota'),
cpu_shares=host_config.get('CpuShares'),
cpuset_cpus=host_config.get('CpusetCpus'),
cpuset_mems=host_config.get('CpusetMems'),
kernel_memory=host_config.get("KernelMemory"),
memory=host_config.get('Memory'),
memory_reservation=host_config.get('MemoryReservation'),
memory_swap=host_config.get('MemorySwap'),
))
differences = DifferenceTracker()
for key, value in config_mapping.items():
minimal_version = self.parameters.client.option_minimal_versions.get(key, {})
if not minimal_version.get('supported', True):
continue
compare = self.parameters.client.comparisons[self.parameters_map.get(key, key)]
self.log('check differences %s %s vs %s (%s)' % (key, getattr(self.parameters, key), str(value), compare))
if getattr(self.parameters, key, None) is not None:
match = self._compare(getattr(self.parameters, key), value, compare)
if not match:
# no match. record the differences
p = getattr(self.parameters, key)
c = value
if compare['type'] == 'set':
# Since the order does not matter, sort so that the diff output is better.
if p is not None:
p = sorted(p)
if c is not None:
c = sorted(c)
elif compare['type'] == 'set(dict)':
# Since the order does not matter, sort so that the diff output is better.
if key == 'expected_mounts':
# For selected values, use one entry as key
def sort_key_fn(x):
return x['target']
else:
# We sort the list of dictionaries by using the sorted items of a dict as its key.
def sort_key_fn(x):
return sorted((a, str(b)) for a, b in x.items())
if p is not None:
p = sorted(p, key=sort_key_fn)
if c is not None:
c = sorted(c, key=sort_key_fn)
differences.add(key, parameter=p, active=c)
has_differences = not differences.empty
return has_differences, differences
def has_different_resource_limits(self):
'''
Diff parameters and container resource limits
'''
if not self.container.get('HostConfig'):
self.fail("limits_differ_from_container: Error parsing container properties. HostConfig missing.")
if self.parameters.client.docker_api_version < LooseVersion('1.22'):
# update_container() call not supported
return False, []
host_config = self.container['HostConfig']
config_mapping = dict(
blkio_weight=host_config.get('BlkioWeight'),
cpu_period=host_config.get('CpuPeriod'),
cpu_quota=host_config.get('CpuQuota'),
cpu_shares=host_config.get('CpuShares'),
cpuset_cpus=host_config.get('CpusetCpus'),
cpuset_mems=host_config.get('CpusetMems'),
kernel_memory=host_config.get("KernelMemory"),
memory=host_config.get('Memory'),
memory_reservation=host_config.get('MemoryReservation'),
memory_swap=host_config.get('MemorySwap'),
)
differences = DifferenceTracker()
for key, value in config_mapping.items():
if getattr(self.parameters, key, None):
compare = self.parameters.client.comparisons[self.parameters_map.get(key, key)]
match = self._compare(getattr(self.parameters, key), value, compare)
if not match:
# no match. record the differences
differences.add(key, parameter=getattr(self.parameters, key), active=value)
different = not differences.empty
return different, differences
def has_network_differences(self):
'''
Check if the container is connected to requested networks with expected options: links, aliases, ipv4, ipv6
'''
different = False
differences = []
if not self.parameters.networks:
return different, differences
if not self.container.get('NetworkSettings'):
self.fail("has_missing_networks: Error parsing container properties. NetworkSettings missing.")
connected_networks = self.container['NetworkSettings']['Networks']
for network in self.parameters.networks:
network_info = connected_networks.get(network['name'])
if network_info is None:
different = True
differences.append(dict(
parameter=network,
container=None
))
else:
diff = False
network_info_ipam = network_info.get('IPAMConfig') or {}
if network.get('ipv4_address') and network['ipv4_address'] != network_info_ipam.get('IPv4Address'):
diff = True
if network.get('ipv6_address') and network['ipv6_address'] != network_info_ipam.get('IPv6Address'):
diff = True
if network.get('aliases'):
if not compare_generic(network['aliases'], network_info.get('Aliases'), 'allow_more_present', 'set'):
diff = True
if network.get('links'):
expected_links = []
for link, alias in network['links']:
expected_links.append("%s:%s" % (link, alias))
if not compare_generic(expected_links, network_info.get('Links'), 'allow_more_present', 'set'):
diff = True
if diff:
different = True
differences.append(dict(
parameter=network,
container=dict(
name=network['name'],
ipv4_address=network_info_ipam.get('IPv4Address'),
ipv6_address=network_info_ipam.get('IPv6Address'),
aliases=network_info.get('Aliases'),
links=network_info.get('Links')
)
))
return different, differences
def has_extra_networks(self):
'''
Check if the container is connected to non-requested networks
'''
extra_networks = []
extra = False
if not self.container.get('NetworkSettings'):
self.fail("has_extra_networks: Error parsing container properties. NetworkSettings missing.")
connected_networks = self.container['NetworkSettings'].get('Networks')
if connected_networks:
for network, network_config in connected_networks.items():
keep = False
if self.parameters.networks:
for expected_network in self.parameters.networks:
if expected_network['name'] == network:
keep = True
if not keep:
extra = True
extra_networks.append(dict(name=network, id=network_config['NetworkID']))
return extra, extra_networks
def _get_expected_devices(self):
if not self.parameters.devices:
return None
expected_devices = []
for device in self.parameters.devices:
parts = device.split(':')
if len(parts) == 1:
expected_devices.append(
dict(
CgroupPermissions='rwm',
PathInContainer=parts[0],
PathOnHost=parts[0]
))
elif len(parts) == 2:
parts = device.split(':')
expected_devices.append(
dict(
CgroupPermissions='rwm',
PathInContainer=parts[1],
PathOnHost=parts[0]
)
)
else:
expected_devices.append(
dict(
CgroupPermissions=parts[2],
PathInContainer=parts[1],
PathOnHost=parts[0]
))
return expected_devices
def _get_expected_entrypoint(self):
if not self.parameters.entrypoint:
return None
return shlex.split(self.parameters.entrypoint)
def _get_expected_ports(self):
if not self.parameters.published_ports:
return None
expected_bound_ports = {}
for container_port, config in self.parameters.published_ports.items():
if isinstance(container_port, int):
container_port = "%s/tcp" % container_port
if len(config) == 1:
if isinstance(config[0], int):
expected_bound_ports[container_port] = [{'HostIp': "0.0.0.0", 'HostPort': config[0]}]
else:
expected_bound_ports[container_port] = [{'HostIp': config[0], 'HostPort': ""}]
elif isinstance(config[0], tuple):
expected_bound_ports[container_port] = []
for host_ip, host_port in config:
expected_bound_ports[container_port].append({'HostIp': host_ip, 'HostPort': str(host_port)})
else:
expected_bound_ports[container_port] = [{'HostIp': config[0], 'HostPort': str(config[1])}]
return expected_bound_ports
def _get_expected_links(self):
if self.parameters.links is None:
return None
self.log('parameter links:')
self.log(self.parameters.links, pretty_print=True)
exp_links = []
for link, alias in self.parameters.links:
exp_links.append("/%s:%s/%s" % (link, ('/' + self.parameters.name), alias))
return exp_links
def _get_expected_binds(self, image):
self.log('_get_expected_binds')
image_vols = []
if image:
image_vols = self._get_image_binds(image[self.parameters.client.image_inspect_source].get('Volumes'))
param_vols = []
if self.parameters.volumes:
for vol in self.parameters.volumes:
host = None
if ':' in vol:
if len(vol.split(':')) == 3:
host, container, mode = vol.split(':')
if not is_volume_permissions(mode):
self.fail('Found invalid volumes mode: {0}'.format(mode))
if len(vol.split(':')) == 2:
parts = vol.split(':')
if not is_volume_permissions(parts[1]):
host, container, mode = vol.split(':') + ['rw']
if host:
param_vols.append("%s:%s:%s" % (host, container, mode))
result = list(set(image_vols + param_vols))
self.log("expected_binds:")
self.log(result, pretty_print=True)
return result
def _get_image_binds(self, volumes):
'''
Convert array of binds to array of strings with format host_path:container_path:mode
:param volumes: array of bind dicts
:return: array of strings
'''
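        # Illustrative example (hypothetical paths):
        #   {'/host/logs': {'bind': '/var/log/app', 'mode': 'ro'}}
        #   -> ['/host/logs:/var/log/app:ro']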
results = []
if isinstance(volumes, dict):
results += self._get_bind_from_dict(volumes)
elif isinstance(volumes, list):
for vol in volumes:
results += self._get_bind_from_dict(vol)
return results
@staticmethod
def _get_bind_from_dict(volume_dict):
results = []
if volume_dict:
for host_path, config in volume_dict.items():
if isinstance(config, dict) and config.get('bind'):
container_path = config.get('bind')
mode = config.get('mode', 'rw')
results.append("%s:%s:%s" % (host_path, container_path, mode))
return results
def _get_expected_volumes(self, image):
self.log('_get_expected_volumes')
expected_vols = dict()
if image and image[self.parameters.client.image_inspect_source].get('Volumes'):
expected_vols.update(image[self.parameters.client.image_inspect_source].get('Volumes'))
if self.parameters.volumes:
for vol in self.parameters.volumes:
container = None
if ':' in vol:
if len(vol.split(':')) == 3:
dummy, container, mode = vol.split(':')
if not is_volume_permissions(mode):
self.fail('Found invalid volumes mode: {0}'.format(mode))
if len(vol.split(':')) == 2:
parts = vol.split(':')
if not is_volume_permissions(parts[1]):
dummy, container, mode = vol.split(':') + ['rw']
new_vol = dict()
if container:
new_vol[container] = dict()
else:
new_vol[vol] = dict()
expected_vols.update(new_vol)
if not expected_vols:
expected_vols = None
self.log("expected_volumes:")
self.log(expected_vols, pretty_print=True)
return expected_vols
def _get_expected_env(self, image):
self.log('_get_expected_env')
expected_env = dict()
if image and image[self.parameters.client.image_inspect_source].get('Env'):
for env_var in image[self.parameters.client.image_inspect_source]['Env']:
parts = env_var.split('=', 1)
expected_env[parts[0]] = parts[1]
if self.parameters.env:
expected_env.update(self.parameters.env)
param_env = []
for key, value in expected_env.items():
param_env.append("%s=%s" % (key, value))
return param_env
def _get_expected_exposed(self, image):
self.log('_get_expected_exposed')
image_ports = []
if image:
image_exposed_ports = image[self.parameters.client.image_inspect_source].get('ExposedPorts') or {}
image_ports = [self._normalize_port(p) for p in image_exposed_ports.keys()]
param_ports = []
if self.parameters.ports:
param_ports = [str(p[0]) + '/' + p[1] for p in self.parameters.ports]
result = list(set(image_ports + param_ports))
self.log(result, pretty_print=True)
return result
def _get_expected_ulimits(self, config_ulimits):
self.log('_get_expected_ulimits')
if config_ulimits is None:
return None
results = []
for limit in config_ulimits:
results.append(dict(
Name=limit.name,
Soft=limit.soft,
Hard=limit.hard
))
return results
def _get_expected_sysctls(self, config_sysctls):
self.log('_get_expected_sysctls')
if config_sysctls is None:
return None
result = dict()
for key, value in config_sysctls.items():
result[key] = str(value)
return result
def _get_expected_cmd(self):
self.log('_get_expected_cmd')
if not self.parameters.command:
return None
return shlex.split(self.parameters.command)
def _convert_simple_dict_to_list(self, param_name, join_with=':'):
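        # Turn a simple mapping parameter into Docker's list-of-strings form,
        # e.g. (hypothetical value) etc_hosts={'db.local': '10.0.0.5'}
        # -> ['db.local:10.0.0.5'].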
if getattr(self.parameters, param_name, None) is None:
return None
results = []
for key, value in getattr(self.parameters, param_name).items():
results.append("%s%s%s" % (key, join_with, value))
return results
def _normalize_port(self, port):
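        # Ensure a port spec carries a protocol suffix: '80' -> '80/tcp',
        # while e.g. '53/udp' is returned unchanged.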
if '/' not in port:
return port + '/tcp'
return port
def _get_expected_healthcheck(self):
self.log('_get_expected_healthcheck')
expected_healthcheck = dict()
if self.parameters.healthcheck:
expected_healthcheck.update([(k.title().replace("_", ""), v)
for k, v in self.parameters.healthcheck.items()])
return expected_healthcheck
class ContainerManager(DockerBaseClass):
'''
Perform container management tasks
'''
def __init__(self, client):
super(ContainerManager, self).__init__()
if client.module.params.get('log_options') and not client.module.params.get('log_driver'):
client.module.warn('log_options is ignored when log_driver is not specified')
if client.module.params.get('healthcheck') and not client.module.params.get('healthcheck').get('test'):
client.module.warn('healthcheck is ignored when test is not specified')
if client.module.params.get('restart_retries') is not None and not client.module.params.get('restart_policy'):
client.module.warn('restart_retries is ignored when restart_policy is not specified')
self.client = client
self.parameters = TaskParameters(client)
self.check_mode = self.client.check_mode
self.results = {'changed': False, 'actions': []}
self.diff = {}
self.diff_tracker = DifferenceTracker()
self.facts = {}
state = self.parameters.state
if state in ('stopped', 'started', 'present'):
self.present(state)
elif state == 'absent':
self.absent()
if not self.check_mode and not self.parameters.debug:
self.results.pop('actions')
if self.client.module._diff or self.parameters.debug:
self.diff['before'], self.diff['after'] = self.diff_tracker.get_before_after()
self.results['diff'] = self.diff
if self.facts:
self.results['ansible_facts'] = {'docker_container': self.facts}
self.results['container'] = self.facts
def present(self, state):
container = self._get_container(self.parameters.name)
was_running = container.running
was_paused = container.paused
container_created = False
# If the image parameter was passed then we need to deal with the image
# version comparison. Otherwise we handle this depending on whether
# the container already runs or not; in the former case, in case the
# container needs to be restarted, we use the existing container's
# image ID.
image = self._get_image()
self.log(image, pretty_print=True)
if not container.exists:
# New container
self.log('No container found')
if not self.parameters.image:
self.fail('Cannot create container when image is not specified!')
self.diff_tracker.add('exists', parameter=True, active=False)
new_container = self.container_create(self.parameters.image, self.parameters.create_parameters)
if new_container:
container = new_container
container_created = True
else:
# Existing container
different, differences = container.has_different_configuration(image)
image_different = False
if self.parameters.comparisons['image']['comparison'] == 'strict':
image_different = self._image_is_different(image, container)
if image_different or different or self.parameters.recreate:
self.diff_tracker.merge(differences)
self.diff['differences'] = differences.get_legacy_docker_container_diffs()
if image_different:
self.diff['image_different'] = True
self.log("differences")
self.log(differences.get_legacy_docker_container_diffs(), pretty_print=True)
image_to_use = self.parameters.image
if not image_to_use and container and container.Image:
image_to_use = container.Image
if not image_to_use:
self.fail('Cannot recreate container when image is not specified or cannot be extracted from current container!')
if container.running:
self.container_stop(container.Id)
self.container_remove(container.Id)
new_container = self.container_create(image_to_use, self.parameters.create_parameters)
if new_container:
container = new_container
container_created = True
if container and container.exists:
container = self.update_limits(container)
container = self.update_networks(container, container_created)
if state == 'started' and not container.running:
self.diff_tracker.add('running', parameter=True, active=was_running)
container = self.container_start(container.Id)
elif state == 'started' and self.parameters.restart:
self.diff_tracker.add('running', parameter=True, active=was_running)
self.diff_tracker.add('restarted', parameter=True, active=False)
container = self.container_restart(container.Id)
elif state == 'stopped' and container.running:
self.diff_tracker.add('running', parameter=False, active=was_running)
self.container_stop(container.Id)
container = self._get_container(container.Id)
if state == 'started' and container.paused is not None and container.paused != self.parameters.paused:
self.diff_tracker.add('paused', parameter=self.parameters.paused, active=was_paused)
if not self.check_mode:
try:
if self.parameters.paused:
self.client.pause(container=container.Id)
else:
self.client.unpause(container=container.Id)
except Exception as exc:
self.fail("Error %s container %s: %s" % (
"pausing" if self.parameters.paused else "unpausing", container.Id, str(exc)
))
container = self._get_container(container.Id)
self.results['changed'] = True
self.results['actions'].append(dict(set_paused=self.parameters.paused))
self.facts = container.raw
def absent(self):
container = self._get_container(self.parameters.name)
if container.exists:
if container.running:
self.diff_tracker.add('running', parameter=False, active=True)
self.container_stop(container.Id)
self.diff_tracker.add('exists', parameter=False, active=True)
self.container_remove(container.Id)
def fail(self, msg, **kwargs):
self.client.fail(msg, **kwargs)
def _output_logs(self, msg):
self.client.module.log(msg=msg)
def _get_container(self, container):
'''
        Expects a container ID or name. Returns a Container object.
'''
return Container(self.client.get_container(container), self.parameters)
def _get_image(self):
if not self.parameters.image:
self.log('No image specified')
return None
if is_image_name_id(self.parameters.image):
image = self.client.find_image_by_id(self.parameters.image)
else:
repository, tag = utils.parse_repository_tag(self.parameters.image)
if not tag:
tag = "latest"
image = self.client.find_image(repository, tag)
if not image or self.parameters.pull:
if not self.check_mode:
self.log("Pull the image.")
image, alreadyToLatest = self.client.pull_image(repository, tag)
if alreadyToLatest:
self.results['changed'] = False
else:
self.results['changed'] = True
self.results['actions'].append(dict(pulled_image="%s:%s" % (repository, tag)))
elif not image:
# If the image isn't there, claim we'll pull.
# (Implicitly: if the image is there, claim it already was latest.)
self.results['changed'] = True
self.results['actions'].append(dict(pulled_image="%s:%s" % (repository, tag)))
self.log("image")
self.log(image, pretty_print=True)
return image
def _image_is_different(self, image, container):
if image and image.get('Id'):
if container and container.Image:
if image.get('Id') != container.Image:
self.diff_tracker.add('image', parameter=image.get('Id'), active=container.Image)
return True
return False
def update_limits(self, container):
limits_differ, different_limits = container.has_different_resource_limits()
if limits_differ:
self.log("limit differences:")
self.log(different_limits.get_legacy_docker_container_diffs(), pretty_print=True)
self.diff_tracker.merge(different_limits)
if limits_differ and not self.check_mode:
self.container_update(container.Id, self.parameters.update_parameters)
return self._get_container(container.Id)
return container
def update_networks(self, container, container_created):
updated_container = container
if self.parameters.comparisons['networks']['comparison'] != 'ignore' or container_created:
has_network_differences, network_differences = container.has_network_differences()
if has_network_differences:
if self.diff.get('differences'):
self.diff['differences'].append(dict(network_differences=network_differences))
else:
self.diff['differences'] = [dict(network_differences=network_differences)]
for netdiff in network_differences:
self.diff_tracker.add(
'network.{0}'.format(netdiff['parameter']['name']),
parameter=netdiff['parameter'],
active=netdiff['container']
)
self.results['changed'] = True
updated_container = self._add_networks(container, network_differences)
if (self.parameters.comparisons['networks']['comparison'] == 'strict' and self.parameters.networks is not None) or self.parameters.purge_networks:
has_extra_networks, extra_networks = container.has_extra_networks()
if has_extra_networks:
if self.diff.get('differences'):
self.diff['differences'].append(dict(purge_networks=extra_networks))
else:
self.diff['differences'] = [dict(purge_networks=extra_networks)]
for extra_network in extra_networks:
self.diff_tracker.add(
'network.{0}'.format(extra_network['name']),
active=extra_network
)
self.results['changed'] = True
updated_container = self._purge_networks(container, extra_networks)
return updated_container
def _add_networks(self, container, differences):
for diff in differences:
# remove the container from the network, if connected
if diff.get('container'):
self.results['actions'].append(dict(removed_from_network=diff['parameter']['name']))
if not self.check_mode:
try:
self.client.disconnect_container_from_network(container.Id, diff['parameter']['id'])
except Exception as exc:
self.fail("Error disconnecting container from network %s - %s" % (diff['parameter']['name'],
str(exc)))
# connect to the network
params = dict()
for para in ('ipv4_address', 'ipv6_address', 'links', 'aliases'):
if diff['parameter'].get(para):
params[para] = diff['parameter'][para]
self.results['actions'].append(dict(added_to_network=diff['parameter']['name'], network_parameters=params))
if not self.check_mode:
try:
self.log("Connecting container to network %s" % diff['parameter']['id'])
self.log(params, pretty_print=True)
self.client.connect_container_to_network(container.Id, diff['parameter']['id'], **params)
except Exception as exc:
self.fail("Error connecting container to network %s - %s" % (diff['parameter']['name'], str(exc)))
return self._get_container(container.Id)
def _purge_networks(self, container, networks):
for network in networks:
self.results['actions'].append(dict(removed_from_network=network['name']))
if not self.check_mode:
try:
self.client.disconnect_container_from_network(container.Id, network['name'])
except Exception as exc:
self.fail("Error disconnecting container from network %s - %s" % (network['name'],
str(exc)))
return self._get_container(container.Id)
def container_create(self, image, create_parameters):
self.log("create container")
self.log("image: %s parameters:" % image)
self.log(create_parameters, pretty_print=True)
self.results['actions'].append(dict(created="Created container", create_parameters=create_parameters))
self.results['changed'] = True
new_container = None
if not self.check_mode:
try:
new_container = self.client.create_container(image, **create_parameters)
self.client.report_warnings(new_container)
except Exception as exc:
self.fail("Error creating container: %s" % str(exc))
return self._get_container(new_container['Id'])
return new_container
def container_start(self, container_id):
self.log("start container %s" % (container_id))
self.results['actions'].append(dict(started=container_id))
self.results['changed'] = True
if not self.check_mode:
try:
self.client.start(container=container_id)
except Exception as exc:
self.fail("Error starting container %s: %s" % (container_id, str(exc)))
if self.parameters.detach is False:
if self.client.docker_py_version >= LooseVersion('3.0'):
status = self.client.wait(container_id)['StatusCode']
else:
status = self.client.wait(container_id)
if self.parameters.auto_remove:
output = "Cannot retrieve result as auto_remove is enabled"
if self.parameters.output_logs:
self.client.module.warn('Cannot output_logs if auto_remove is enabled!')
else:
config = self.client.inspect_container(container_id)
logging_driver = config['HostConfig']['LogConfig']['Type']
if logging_driver in ('json-file', 'journald'):
output = self.client.logs(container_id, stdout=True, stderr=True, stream=False, timestamps=False)
if self.parameters.output_logs:
self._output_logs(msg=output)
else:
output = "Result logged using `%s` driver" % logging_driver
if status != 0:
self.fail(output, status=status)
if self.parameters.cleanup:
self.container_remove(container_id, force=True)
insp = self._get_container(container_id)
if insp.raw:
insp.raw['Output'] = output
else:
insp.raw = dict(Output=output)
return insp
return self._get_container(container_id)
def container_remove(self, container_id, link=False, force=False):
volume_state = (not self.parameters.keep_volumes)
self.log("remove container container:%s v:%s link:%s force%s" % (container_id, volume_state, link, force))
self.results['actions'].append(dict(removed=container_id, volume_state=volume_state, link=link, force=force))
self.results['changed'] = True
response = None
if not self.check_mode:
count = 0
while True:
try:
response = self.client.remove_container(container_id, v=volume_state, link=link, force=force)
except NotFound as dummy:
pass
except APIError as exc:
if 'Unpause the container before stopping or killing' in exc.explanation:
# New docker daemon versions do not allow containers to be removed
# if they are paused. Make sure we don't end up in an infinite loop.
if count == 3:
self.fail("Error removing container %s (tried to unpause three times): %s" % (container_id, str(exc)))
count += 1
# Unpause
try:
self.client.unpause(container=container_id)
except Exception as exc2:
self.fail("Error unpausing container %s for removal: %s" % (container_id, str(exc2)))
# Now try again
continue
if 'removal of container ' in exc.explanation and ' is already in progress' in exc.explanation:
pass
else:
self.fail("Error removing container %s: %s" % (container_id, str(exc)))
except Exception as exc:
self.fail("Error removing container %s: %s" % (container_id, str(exc)))
# We only loop when explicitly requested by 'continue'
break
return response
def container_update(self, container_id, update_parameters):
if update_parameters:
self.log("update container %s" % (container_id))
self.log(update_parameters, pretty_print=True)
self.results['actions'].append(dict(updated=container_id, update_parameters=update_parameters))
self.results['changed'] = True
if not self.check_mode and callable(getattr(self.client, 'update_container')):
try:
result = self.client.update_container(container_id, **update_parameters)
self.client.report_warnings(result)
except Exception as exc:
self.fail("Error updating container %s: %s" % (container_id, str(exc)))
return self._get_container(container_id)
def container_kill(self, container_id):
self.results['actions'].append(dict(killed=container_id, signal=self.parameters.kill_signal))
self.results['changed'] = True
response = None
if not self.check_mode:
try:
if self.parameters.kill_signal:
response = self.client.kill(container_id, signal=self.parameters.kill_signal)
else:
response = self.client.kill(container_id)
except Exception as exc:
self.fail("Error killing container %s: %s" % (container_id, exc))
return response
def container_restart(self, container_id):
self.results['actions'].append(dict(restarted=container_id, timeout=self.parameters.stop_timeout))
self.results['changed'] = True
if not self.check_mode:
try:
if self.parameters.stop_timeout:
dummy = self.client.restart(container_id, timeout=self.parameters.stop_timeout)
else:
dummy = self.client.restart(container_id)
except Exception as exc:
self.fail("Error restarting container %s: %s" % (container_id, str(exc)))
return self._get_container(container_id)
def container_stop(self, container_id):
if self.parameters.force_kill:
self.container_kill(container_id)
return
self.results['actions'].append(dict(stopped=container_id, timeout=self.parameters.stop_timeout))
self.results['changed'] = True
response = None
if not self.check_mode:
count = 0
while True:
try:
if self.parameters.stop_timeout:
response = self.client.stop(container_id, timeout=self.parameters.stop_timeout)
else:
response = self.client.stop(container_id)
except APIError as exc:
if 'Unpause the container before stopping or killing' in exc.explanation:
# New docker daemon versions do not allow containers to be
# stopped if they are paused. Make sure we don't end up in an infinite loop.
if count == 3:
self.fail("Error stopping container %s (tried to unpause three times): %s" % (container_id, str(exc)))
count += 1
# Unpause
try:
self.client.unpause(container=container_id)
except Exception as exc2:
self.fail("Error unpausing container %s for removal: %s" % (container_id, str(exc2)))
# Now try again
continue
self.fail("Error stopping container %s: %s" % (container_id, str(exc)))
except Exception as exc:
self.fail("Error stopping container %s: %s" % (container_id, str(exc)))
# We only loop when explicitly requested by 'continue'
break
return response
def detect_ipvX_address_usage(client):
'''
Helper function to detect whether any specified network uses ipv4_address or ipv6_address
'''
for network in client.module.params.get("networks") or []:
if network.get('ipv4_address') is not None or network.get('ipv6_address') is not None:
return True
return False
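# Illustrative usage (values are assumptions, not from the original module):
# with module params such as
#   networks: [{'name': 'app_net', 'ipv4_address': '10.0.0.2'}]
# this helper returns True, while plain entries like
#   networks: [{'name': 'app_net'}]
# yield False, so the ipvX_address minimal-version check is only enforced
# when a static address is actually requested.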
class AnsibleDockerClientContainer(AnsibleDockerClient):
# A list of module options which are not docker container properties
__NON_CONTAINER_PROPERTY_OPTIONS = tuple([
'env_file', 'force_kill', 'keep_volumes', 'ignore_image', 'name', 'pull', 'purge_networks',
'recreate', 'restart', 'state', 'trust_image_content', 'networks', 'cleanup', 'kill_signal',
'output_logs', 'paused'
] + list(DOCKER_COMMON_ARGS.keys()))
def _parse_comparisons(self):
comparisons = {}
comp_aliases = {}
# Put in defaults
explicit_types = dict(
command='list',
devices='set(dict)',
dns_search_domains='list',
dns_servers='list',
env='set',
entrypoint='list',
etc_hosts='set',
mounts='set(dict)',
networks='set(dict)',
ulimits='set(dict)',
device_read_bps='set(dict)',
device_write_bps='set(dict)',
device_read_iops='set(dict)',
device_write_iops='set(dict)',
)
all_options = set() # this is for improving user feedback when a wrong option was specified for comparison
default_values = dict(
stop_timeout='ignore',
)
for option, data in self.module.argument_spec.items():
all_options.add(option)
for alias in data.get('aliases', []):
all_options.add(alias)
# Ignore options which aren't used as container properties
if option in self.__NON_CONTAINER_PROPERTY_OPTIONS and option != 'networks':
continue
# Determine option type
if option in explicit_types:
datatype = explicit_types[option]
elif data['type'] == 'list':
datatype = 'set'
elif data['type'] == 'dict':
datatype = 'dict'
else:
datatype = 'value'
# Determine comparison type
if option in default_values:
comparison = default_values[option]
elif datatype in ('list', 'value'):
comparison = 'strict'
else:
comparison = 'allow_more_present'
comparisons[option] = dict(type=datatype, comparison=comparison, name=option)
# Keep track of aliases
comp_aliases[option] = option
for alias in data.get('aliases', []):
comp_aliases[alias] = option
# Process legacy ignore options
if self.module.params['ignore_image']:
comparisons['image']['comparison'] = 'ignore'
if self.module.params['purge_networks']:
comparisons['networks']['comparison'] = 'strict'
# Process options
if self.module.params.get('comparisons'):
# If '*' appears in comparisons, process it first
if '*' in self.module.params['comparisons']:
value = self.module.params['comparisons']['*']
if value not in ('strict', 'ignore'):
self.fail("The wildcard can only be used with comparison modes 'strict' and 'ignore'!")
for option, v in comparisons.items():
if option == 'networks':
# `networks` is special: only update if
# some value is actually specified
if self.module.params['networks'] is None:
continue
v['comparison'] = value
# Now process all other comparisons.
comp_aliases_used = {}
for key, value in self.module.params['comparisons'].items():
if key == '*':
continue
# Find main key
key_main = comp_aliases.get(key)
if key_main is None:
# 'key' may be a valid module option that simply does not map to a container property
if key in all_options:
self.fail("The module option '%s' cannot be specified in the comparisons dict, "
"since it does not correspond to container's state!" % key)
self.fail("Unknown module option '%s' in comparisons dict!" % key)
if key_main in comp_aliases_used:
self.fail("Both '%s' and '%s' (aliases of %s) are specified in comparisons dict!" % (key, comp_aliases_used[key_main], key_main))
comp_aliases_used[key_main] = key
# Check value and update accordingly
if value in ('strict', 'ignore'):
comparisons[key_main]['comparison'] = value
elif value == 'allow_more_present':
if comparisons[key_main]['type'] == 'value':
self.fail("Option '%s' is a value and not a set/list/dict, so its comparison cannot be %s" % (key, value))
comparisons[key_main]['comparison'] = value
else:
self.fail("Unknown comparison mode '%s'!" % value)
# Add implicit options
comparisons['publish_all_ports'] = dict(type='value', comparison='strict', name='published_ports')
comparisons['expected_ports'] = dict(type='dict', comparison=comparisons['published_ports']['comparison'], name='expected_ports')
comparisons['disable_healthcheck'] = dict(type='value',
comparison='ignore' if comparisons['healthcheck']['comparison'] == 'ignore' else 'strict',
name='disable_healthcheck')
# Check legacy values
if self.module.params['ignore_image'] and comparisons['image']['comparison'] != 'ignore':
self.module.warn('The ignore_image option has been overridden by the comparisons option!')
if self.module.params['purge_networks'] and comparisons['networks']['comparison'] != 'strict':
self.module.warn('The purge_networks option has been overridden by the comparisons option!')
self.comparisons = comparisons
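# Illustrative result (assumption, not part of the original module): for the
# 'env' option the parsed entry ends up as
#   comparisons['env'] == {'type': 'set', 'comparison': 'allow_more_present', 'name': 'env'}
# and a playbook can override individual comparison modes with, for example:
#   comparisons:
#     env: strict
#     '*': ignore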
def _get_additional_minimal_versions(self):
stop_timeout_supported = self.docker_api_version >= LooseVersion('1.25')
stop_timeout_needed_for_update = self.module.params.get("stop_timeout") is not None and self.module.params.get('state') != 'absent'
if stop_timeout_supported:
stop_timeout_supported = self.docker_py_version >= LooseVersion('2.1')
if stop_timeout_needed_for_update and not stop_timeout_supported:
# We warn (instead of fail) since in older versions, stop_timeout was not used
# to update the container's configuration, but only when stopping a container.
self.module.warn("Docker SDK for Python's version is %s. Minimum version required is 2.1 to update "
"the container's stop_timeout configuration. "
"If you use the 'docker-py' module, you have to switch to the 'docker' Python package." % (docker_version,))
else:
if stop_timeout_needed_for_update and not stop_timeout_supported:
# We warn (instead of fail) since in older versions, stop_timeout was not used
# to update the container's configuration, but only when stopping a container.
self.module.warn("Docker API version is %s. Minimum version required is 1.25 to set or "
"update the container's stop_timeout configuration." % (self.docker_api_version_str,))
self.option_minimal_versions['stop_timeout']['supported'] = stop_timeout_supported
def __init__(self, **kwargs):
option_minimal_versions = dict(
# internal options
log_config=dict(),
publish_all_ports=dict(),
ports=dict(),
volume_binds=dict(),
name=dict(),
# normal options
device_read_bps=dict(docker_py_version='1.9.0', docker_api_version='1.22'),
device_read_iops=dict(docker_py_version='1.9.0', docker_api_version='1.22'),
device_write_bps=dict(docker_py_version='1.9.0', docker_api_version='1.22'),
device_write_iops=dict(docker_py_version='1.9.0', docker_api_version='1.22'),
dns_opts=dict(docker_api_version='1.21', docker_py_version='1.10.0'),
ipc_mode=dict(docker_api_version='1.25'),
mac_address=dict(docker_api_version='1.25'),
oom_score_adj=dict(docker_api_version='1.22'),
shm_size=dict(docker_api_version='1.22'),
stop_signal=dict(docker_api_version='1.21'),
tmpfs=dict(docker_api_version='1.22'),
volume_driver=dict(docker_api_version='1.21'),
memory_reservation=dict(docker_api_version='1.21'),
kernel_memory=dict(docker_api_version='1.21'),
auto_remove=dict(docker_py_version='2.1.0', docker_api_version='1.25'),
healthcheck=dict(docker_py_version='2.0.0', docker_api_version='1.24'),
init=dict(docker_py_version='2.2.0', docker_api_version='1.25'),
runtime=dict(docker_py_version='2.4.0', docker_api_version='1.25'),
sysctls=dict(docker_py_version='1.10.0', docker_api_version='1.24'),
userns_mode=dict(docker_py_version='1.10.0', docker_api_version='1.23'),
uts=dict(docker_py_version='3.5.0', docker_api_version='1.25'),
pids_limit=dict(docker_py_version='1.10.0', docker_api_version='1.23'),
mounts=dict(docker_py_version='2.6.0', docker_api_version='1.25'),
cpus=dict(docker_py_version='2.3.0', docker_api_version='1.25'),
# specials
ipvX_address_supported=dict(docker_py_version='1.9.0', docker_api_version='1.22',
detect_usage=detect_ipvX_address_usage,
usage_msg='ipv4_address or ipv6_address in networks'),
stop_timeout=dict(), # see _get_additional_minimal_versions()
)
super(AnsibleDockerClientContainer, self).__init__(
option_minimal_versions=option_minimal_versions,
option_minimal_versions_ignore_params=self.__NON_CONTAINER_PROPERTY_OPTIONS,
**kwargs
)
self.image_inspect_source = 'Config'
if self.docker_api_version < LooseVersion('1.21'):
self.image_inspect_source = 'ContainerConfig'
self._get_additional_minimal_versions()
self._parse_comparisons()
if self.module.params['container_default_behavior'] is None:
self.module.params['container_default_behavior'] = 'compatibility'
self.module.deprecate(
'The container_default_behavior option will change its default value from "compatibility" to '
'"no_defaults" in Ansible 2.14. To remove this warning, please specify an explicit value for it now',
version='2.14'
)
if self.module.params['container_default_behavior'] == 'compatibility':
old_default_values = dict(
auto_remove=False,
detach=True,
init=False,
interactive=False,
memory="0",
paused=False,
privileged=False,
read_only=False,
tty=False,
)
for param, value in old_default_values.items():
if self.module.params[param] is None:
self.module.params[param] = value
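# Illustrative effect (assumption): with container_default_behavior=compatibility,
# a task that omits 'paused' behaves as if 'paused: false' had been specified,
# whereas with 'no_defaults' the option stays None and is not enforced on the
# container.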
def main():
argument_spec = dict(
auto_remove=dict(type='bool'),
blkio_weight=dict(type='int'),
capabilities=dict(type='list', elements='str'),
cap_drop=dict(type='list', elements='str'),
cleanup=dict(type='bool', default=False),
command=dict(type='raw'),
comparisons=dict(type='dict'),
container_default_behavior=dict(type='str', choices=['compatibility', 'no_defaults']),
cpu_period=dict(type='int'),
cpu_quota=dict(type='int'),
cpus=dict(type='float'),
cpuset_cpus=dict(type='str'),
cpuset_mems=dict(type='str'),
cpu_shares=dict(type='int'),
detach=dict(type='bool'),
devices=dict(type='list', elements='str'),
device_read_bps=dict(type='list', elements='dict', options=dict(
path=dict(required=True, type='str'),
rate=dict(required=True, type='str'),
)),
device_write_bps=dict(type='list', elements='dict', options=dict(
path=dict(required=True, type='str'),
rate=dict(required=True, type='str'),
)),
device_read_iops=dict(type='list', elements='dict', options=dict(
path=dict(required=True, type='str'),
rate=dict(required=True, type='int'),
)),
device_write_iops=dict(type='list', elements='dict', options=dict(
path=dict(required=True, type='str'),
rate=dict(required=True, type='int'),
)),
dns_servers=dict(type='list', elements='str'),
dns_opts=dict(type='list', elements='str'),
dns_search_domains=dict(type='list', elements='str'),
domainname=dict(type='str'),
entrypoint=dict(type='list', elements='str'),
env=dict(type='dict'),
env_file=dict(type='path'),
etc_hosts=dict(type='dict'),
exposed_ports=dict(type='list', elements='str', aliases=['exposed', 'expose']),
force_kill=dict(type='bool', default=False, aliases=['forcekill']),
groups=dict(type='list', elements='str'),
healthcheck=dict(type='dict', options=dict(
test=dict(type='raw'),
interval=dict(type='str'),
timeout=dict(type='str'),
start_period=dict(type='str'),
retries=dict(type='int'),
)),
hostname=dict(type='str'),
ignore_image=dict(type='bool', default=False),
image=dict(type='str'),
init=dict(type='bool'),
interactive=dict(type='bool'),
ipc_mode=dict(type='str'),
keep_volumes=dict(type='bool', default=True),
kernel_memory=dict(type='str'),
kill_signal=dict(type='str'),
labels=dict(type='dict'),
links=dict(type='list', elements='str'),
log_driver=dict(type='str'),
log_options=dict(type='dict', aliases=['log_opt']),
mac_address=dict(type='str'),
memory=dict(type='str'),
memory_reservation=dict(type='str'),
memory_swap=dict(type='str'),
memory_swappiness=dict(type='int'),
mounts=dict(type='list', elements='dict', options=dict(
target=dict(type='str', required=True),
source=dict(type='str'),
type=dict(type='str', choices=['bind', 'volume', 'tmpfs', 'npipe'], default='volume'),
read_only=dict(type='bool'),
consistency=dict(type='str', choices=['default', 'consistent', 'cached', 'delegated']),
propagation=dict(type='str', choices=['private', 'rprivate', 'shared', 'rshared', 'slave', 'rslave']),
no_copy=dict(type='bool'),
labels=dict(type='dict'),
volume_driver=dict(type='str'),
volume_options=dict(type='dict'),
tmpfs_size=dict(type='str'),
tmpfs_mode=dict(type='str'),
)),
name=dict(type='str', required=True),
network_mode=dict(type='str'),
networks=dict(type='list', elements='dict', options=dict(
name=dict(type='str', required=True),
ipv4_address=dict(type='str'),
ipv6_address=dict(type='str'),
aliases=dict(type='list', elements='str'),
links=dict(type='list', elements='str'),
)),
networks_cli_compatible=dict(type='bool'),
oom_killer=dict(type='bool'),
oom_score_adj=dict(type='int'),
output_logs=dict(type='bool', default=False),
paused=dict(type='bool'),
pid_mode=dict(type='str'),
pids_limit=dict(type='int'),
privileged=dict(type='bool'),
published_ports=dict(type='list', elements='str', aliases=['ports']),
pull=dict(type='bool', default=False),
purge_networks=dict(type='bool', default=False),
read_only=dict(type='bool'),
recreate=dict(type='bool', default=False),
restart=dict(type='bool', default=False),
restart_policy=dict(type='str', choices=['no', 'on-failure', 'always', 'unless-stopped']),
restart_retries=dict(type='int'),
runtime=dict(type='str'),
security_opts=dict(type='list', elements='str'),
shm_size=dict(type='str'),
state=dict(type='str', default='started', choices=['absent', 'present', 'started', 'stopped']),
stop_signal=dict(type='str'),
stop_timeout=dict(type='int'),
sysctls=dict(type='dict'),
tmpfs=dict(type='list', elements='str'),
trust_image_content=dict(type='bool', default=False, removed_in_version='2.14'),
tty=dict(type='bool'),
ulimits=dict(type='list', elements='str'),
user=dict(type='str'),
userns_mode=dict(type='str'),
uts=dict(type='str'),
volume_driver=dict(type='str'),
volumes=dict(type='list', elements='str'),
volumes_from=dict(type='list', elements='str'),
working_dir=dict(type='str'),
)
required_if = [
('state', 'present', ['image'])
]
client = AnsibleDockerClientContainer(
argument_spec=argument_spec,
required_if=required_if,
supports_check_mode=True,
min_docker_api_version='1.20',
)
if client.module.params['networks_cli_compatible'] is None and client.module.params['networks']:
client.module.deprecate(
'Please note that docker_container handles networks slightly different than docker CLI. '
'If you specify networks, the default network will still be attached as the first network. '
'(You can specify purge_networks to remove all networks not explicitly listed.) '
'This behavior will change in Ansible 2.12. You can change the behavior now by setting '
'the new `networks_cli_compatible` option to `yes`, and remove this warning by setting '
'it to `no`',
version='2.12'
)
if client.module.params['networks_cli_compatible'] is True and client.module.params['networks'] and client.module.params['network_mode'] is None:
client.module.deprecate(
'Please note that the default value for `network_mode` will change from not specified '
'(which is equal to `default`) to the name of the first network in `networks` if '
'`networks` has at least one entry and `networks_cli_compatible` is `true`. You can '
'change the behavior now by explicitly setting `network_mode` to the name of the first '
'network in `networks`, and remove this warning by setting `network_mode` to `default`. '
'Please make sure that the value you set to `network_mode` equals the inspection result '
'for existing containers, otherwise the module will recreate them. You can find out the '
'correct value by running "docker inspect --format \'{{.HostConfig.NetworkMode}}\' <container_name>"',
version='2.14'
)
try:
cm = ContainerManager(client)
client.module.exit_json(**sanitize_result(cm.results))
except DockerException as e:
client.fail('An unexpected docker error occurred: {0}'.format(e), exception=traceback.format_exc())
except RequestException as e:
client.fail('An unexpected requests error occurred when docker-py tried to talk to the docker daemon: {0}'.format(e), exception=traceback.format_exc())
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,146 |
docker_swarm_service healthcheck always changed
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
When using `healthcheck` in `docker_swarm_service`, it is always reported as having changed.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
docker_swarm_service
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.6
config file = /mnt/f/Users/bozho/.ansible.cfg
configured module search path = ['/mnt/f/Users/bozho/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.5 (default, Oct 27 2019, 15:43:29) [GCC 9.2.1 20191022]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_SSH_CONTROL_PATH_DIR(/mnt/f/Users/bozho/.ansible.cfg) = /tmp/ansible/cp
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Debian Bullseye on WSL:
```
Linux bozho-host 4.4.0-18362-Microsoft #476-Microsoft Fri Nov 01 16:53:00 PST 2019 x86_64 GNU/Linux
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
I've added the `healthcheck` option to an existing `docker_swarm_service` entry that had been behaving as expected, starting a service from a custom nginx image in global mode
<!--- Paste example playbooks or commands between quotes below -->
```yaml
docker_swarm_service:
name: "custom-nginx"
image: "{{ nginx_image_name }}"
labels:
tag: "my-tag"
configs:
- config_id: "{{ nginx_conf_config.config_id }}"
config_name: "{{ docker_nginx_conf_config_name }}"
filename: "/etc/nginx/nginx.conf"
mode: 0644
- config_id: "{{ default_conf_config.config_id }}"
config_name: "{{ docker_default_conf_config_name }}"
filename: "/etc/nginx/sites-enabled/default.conf"
mode: 0644
secrets: "{{ nginx_secrets_list | default([]) }}"
mode: global
update_config:
delay: 1s
failure_action: rollback
order: stop-first
healthcheck:
test: [ "CMD", "curl", "--fail", "-k", "https://myhost.example.com" ]
interval: 1m30s
timeout: 5s
retries: 3
start_period: 20s
logging:
driver: local
options:
"max-size": 10k
"max-file": "3"
publish:
- published_port: 80
target_port: 80
protocol: tcp
mode: host
- published_port: 443
target_port: 443
protocol: tcp
mode: host
networks:
- my_network
run_once: true
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Re-running the playbook should have no changes
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
{
"changed": true,
"changes": [
"healthcheck"
],
"msg": "Service updated",
"rebuilt": false,
"swarm_service": {
"args": null,
"command": null,
"configs": [
...
],
...
}
```
|
https://github.com/ansible/ansible/issues/66146
|
https://github.com/ansible/ansible/pull/66151
|
f7d412144616f538f2b82a40f2698934d374f7c1
|
f31b8e08b24f1c19c156caf5137ca18159f32df0
| 2019-12-31T16:01:14Z |
python
| 2020-01-02T14:23:36Z |
changelogs/fragments/66151-docker_swarm_service-healthcheck-start-period.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,146 |
docker_swarm_service healthcheck always changed
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
When using `healthcheck` in `docker_swarm_service`, it is always reported as having changed.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
docker_swarm_service
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.6
config file = /mnt/f/Users/bozho/.ansible.cfg
configured module search path = ['/mnt/f/Users/bozho/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.5 (default, Oct 27 2019, 15:43:29) [GCC 9.2.1 20191022]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_SSH_CONTROL_PATH_DIR(/mnt/f/Users/bozho/.ansible.cfg) = /tmp/ansible/cp
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Debian Bullseye on WSL:
```
Linux bozho-host 4.4.0-18362-Microsoft #476-Microsoft Fri Nov 01 16:53:00 PST 2019 x86_64 GNU/Linux
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
I've added the `healthcheck` option to an existing `docker_swarm_service` entry that had been behaving as expected, starting a service from a custom nginx image in global mode
<!--- Paste example playbooks or commands between quotes below -->
```yaml
docker_swarm_service:
name: "custom-nginx"
image: "{{ nginx_image_name }}"
labels:
tag: "my-tag"
configs:
- config_id: "{{ nginx_conf_config.config_id }}"
config_name: "{{ docker_nginx_conf_config_name }}"
filename: "/etc/nginx/nginx.conf"
mode: 0644
- config_id: "{{ default_conf_config.config_id }}"
config_name: "{{ docker_default_conf_config_name }}"
filename: "/etc/nginx/sites-enabled/default.conf"
mode: 0644
secrets: "{{ nginx_secrets_list | default([]) }}"
mode: global
update_config:
delay: 1s
failure_action: rollback
order: stop-first
healthcheck:
test: [ "CMD", "curl", "--fail", "-k", "https://myhost.example.com" ]
interval: 1m30s
timeout: 5s
retries: 3
start_period: 20s
logging:
driver: local
options:
"max-size": 10k
"max-file": "3"
publish:
- published_port: 80
target_port: 80
protocol: tcp
mode: host
- published_port: 443
target_port: 443
protocol: tcp
mode: host
networks:
- my_network
run_once: true
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Re-running the playbook should have no changes
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
{
"changed": true,
"changes": [
"healthcheck"
],
"msg": "Service updated",
"rebuilt": false,
"swarm_service": {
"args": null,
"command": null,
"configs": [
...
],
...
}
```
|
https://github.com/ansible/ansible/issues/66146
|
https://github.com/ansible/ansible/pull/66151
|
f7d412144616f538f2b82a40f2698934d374f7c1
|
f31b8e08b24f1c19c156caf5137ca18159f32df0
| 2019-12-31T16:01:14Z |
python
| 2020-01-02T14:23:36Z |
lib/ansible/modules/cloud/docker/docker_swarm_service.py
|
#!/usr/bin/python
#
# (c) 2017, Dario Zanzico ([email protected])
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'status': ['preview'],
'supported_by': 'community',
'metadata_version': '1.1'}
DOCUMENTATION = '''
---
module: docker_swarm_service
author:
- "Dario Zanzico (@dariko)"
- "Jason Witkowski (@jwitko)"
- "Hannes Ljungberg (@hannseman)"
short_description: docker swarm service
description:
- Manages docker services via a swarm manager node.
version_added: "2.7"
options:
args:
description:
- List of arguments to be passed to the container.
- Corresponds to the C(ARG) parameter of C(docker service create).
type: list
elements: str
command:
description:
- Command to execute when the container starts.
- A command may be either a string or a list of strings.
- Corresponds to the C(COMMAND) parameter of C(docker service create).
type: raw
version_added: 2.8
configs:
description:
- List of dictionaries describing the service configs.
- Corresponds to the C(--config) option of C(docker service create).
- Requires API version >= 1.30.
type: list
elements: dict
suboptions:
config_id:
description:
- Config's ID.
type: str
config_name:
description:
- Config's name as defined at its creation.
type: str
required: yes
filename:
description:
- Name of the file containing the config. Defaults to the I(config_name) if not specified.
type: str
uid:
description:
- UID of the config file's owner.
type: str
gid:
description:
- GID of the config file's group.
type: str
mode:
description:
- File access mode inside the container. Must be an octal number (like C(0644) or C(0444)).
type: int
constraints:
description:
- List of the service constraints.
- Corresponds to the C(--constraint) option of C(docker service create).
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(placement.constraints) instead.
type: list
elements: str
container_labels:
description:
- Dictionary of key value pairs.
- Corresponds to the C(--container-label) option of C(docker service create).
type: dict
dns:
description:
- List of custom DNS servers.
- Corresponds to the C(--dns) option of C(docker service create).
- Requires API version >= 1.25.
type: list
elements: str
dns_search:
description:
- List of custom DNS search domains.
- Corresponds to the C(--dns-search) option of C(docker service create).
- Requires API version >= 1.25.
type: list
elements: str
dns_options:
description:
- List of custom DNS options.
- Corresponds to the C(--dns-option) option of C(docker service create).
- Requires API version >= 1.25.
type: list
elements: str
endpoint_mode:
description:
- Service endpoint mode.
- Corresponds to the C(--endpoint-mode) option of C(docker service create).
- Requires API version >= 1.25.
type: str
choices:
- vip
- dnsrr
env:
description:
- List or dictionary of the service environment variables.
- If passed as a list, each item needs to be in the format C(KEY=VALUE).
- If passed as a dictionary, values which might be parsed as numbers,
booleans or other types by the YAML parser must be quoted (e.g. C("true"))
in order to avoid data loss.
- Corresponds to the C(--env) option of C(docker service create).
type: raw
env_files:
description:
- List of paths to files, present on the target, containing environment variables C(FOO=BAR).
- The order of the list is significant in determining the value assigned to a
variable that shows up more than once.
- If a variable is also present in I(env), the I(env) value will override it.
type: list
elements: path
version_added: "2.8"
force_update:
description:
- Force update even if no changes require it.
- Corresponds to the C(--force) option of C(docker service update).
- Requires API version >= 1.25.
type: bool
default: no
groups:
description:
- List of additional group names and/or IDs that the container process will run as.
- Corresponds to the C(--group) option of C(docker service update).
- Requires API version >= 1.25.
type: list
elements: str
version_added: "2.8"
healthcheck:
description:
- Configure a check that is run to determine whether or not containers for this service are "healthy".
See the docs for the L(HEALTHCHECK Dockerfile instruction,https://docs.docker.com/engine/reference/builder/#healthcheck)
for details on how healthchecks work.
- "I(interval), I(timeout) and I(start_period) are specified as durations. They accept duration as a string in a format
that look like: C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
- Requires API version >= 1.25.
type: dict
suboptions:
test:
description:
- Command to run to check health.
- Must be either a string or a list. If it is a list, the first item must be one of C(NONE), C(CMD) or C(CMD-SHELL).
type: raw
interval:
description:
- Time between running the check.
type: str
timeout:
description:
- Maximum time to allow one check to run.
type: str
retries:
description:
- Consecutive failures needed to report unhealthy. Accepts an integer value.
type: int
start_period:
description:
- Start period for the container to initialize before starting health-retries countdown.
type: str
version_added: "2.8"
hostname:
description:
- Container hostname.
- Corresponds to the C(--hostname) option of C(docker service create).
- Requires API version >= 1.25.
type: str
hosts:
description:
- Dict of host-to-IP mappings, where each host name is a key in the dictionary.
Each host name will be added to the container's /etc/hosts file.
- Corresponds to the C(--host) option of C(docker service create).
- Requires API version >= 1.25.
type: dict
version_added: "2.8"
image:
description:
- Service image path and tag.
- Corresponds to the C(IMAGE) parameter of C(docker service create).
type: str
labels:
description:
- Dictionary of key value pairs.
- Corresponds to the C(--label) option of C(docker service create).
type: dict
limits:
description:
- Configures service resource limits.
suboptions:
cpus:
description:
- Service CPU limit. C(0) equals no limit.
- Corresponds to the C(--limit-cpu) option of C(docker service create).
type: float
memory:
description:
- "Service memory limit in format C(<number>[<unit>]). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- C(0) equals no limit.
- Omitting the unit defaults to bytes.
- Corresponds to the C(--limit-memory) option of C(docker service create).
type: str
type: dict
version_added: "2.8"
limit_cpu:
description:
- Service CPU limit. C(0) equals no limit.
- Corresponds to the C(--limit-cpu) option of C(docker service create).
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(limits.cpus) instead.
type: float
limit_memory:
description:
- "Service memory limit in format C(<number>[<unit>]). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- C(0) equals no limit.
- Omitting the unit defaults to bytes.
- Corresponds to the C(--limit-memory) option of C(docker service create).
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(limits.memory) instead.
type: str
logging:
description:
- "Logging configuration for the service."
suboptions:
driver:
description:
- Configure the logging driver for a service.
- Corresponds to the C(--log-driver) option of C(docker service create).
type: str
options:
description:
- Options for service logging driver.
- Corresponds to the C(--log-opt) option of C(docker service create).
type: dict
type: dict
version_added: "2.8"
log_driver:
description:
- Configure the logging driver for a service.
- Corresponds to the C(--log-driver) option of C(docker service create).
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(logging.driver) instead.
type: str
log_driver_options:
description:
- Options for service logging driver.
- Corresponds to the C(--log-opt) option of C(docker service create).
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(logging.options) instead.
type: dict
mode:
description:
- Service replication mode.
- Service will be removed and recreated when changed.
- Corresponds to the C(--mode) option of C(docker service create).
type: str
default: replicated
choices:
- replicated
- global
mounts:
description:
- List of dictionaries describing the service mounts.
- Corresponds to the C(--mount) option of C(docker service create).
type: list
elements: dict
suboptions:
source:
description:
- Mount source (e.g. a volume name or a host path).
- Must be specified if I(type) is not C(tmpfs).
type: str
target:
description:
- Container path.
type: str
required: yes
type:
description:
- The mount type.
- Note that C(npipe) is only supported by Docker for Windows. Also note that C(npipe) was added in Ansible 2.9.
type: str
default: bind
choices:
- bind
- volume
- tmpfs
- npipe
readonly:
description:
- Whether the mount should be read-only.
type: bool
labels:
description:
- Volume labels to apply.
type: dict
version_added: "2.8"
propagation:
description:
- The propagation mode to use.
- Can only be used when I(mode) is C(bind).
type: str
choices:
- shared
- slave
- private
- rshared
- rslave
- rprivate
version_added: "2.8"
no_copy:
description:
- Disable copying of data from a container when a volume is created.
- Can only be used when I(mode) is C(volume).
type: bool
version_added: "2.8"
driver_config:
description:
- Volume driver configuration.
- Can only be used when I(mode) is C(volume).
suboptions:
name:
description:
- Name of the volume-driver plugin to use for the volume.
type: str
options:
description:
- Options as key-value pairs to pass to the driver for this volume.
type: dict
type: dict
version_added: "2.8"
tmpfs_size:
description:
- "Size of the tmpfs mount in format C(<number>[<unit>]). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- Can only be used when I(mode) is C(tmpfs).
type: str
version_added: "2.8"
tmpfs_mode:
description:
- File mode of the tmpfs in octal.
- Can only be used when I(mode) is C(tmpfs).
type: int
version_added: "2.8"
name:
description:
- Service name.
- Corresponds to the C(--name) option of C(docker service create).
type: str
required: yes
networks:
description:
- List of the service networks names or dictionaries.
- When passed as dictionaries, valid sub-options are I(name), which is required, and
I(aliases) and I(options).
- Prior to API version 1.29, updating and removing networks is not supported.
If changes are made the service will then be removed and recreated.
- Corresponds to the C(--network) option of C(docker service create).
type: list
elements: raw
placement:
description:
- Configures service placement preferences and constraints.
suboptions:
constraints:
description:
- List of the service constraints.
- Corresponds to the C(--constraint) option of C(docker service create).
type: list
elements: str
preferences:
description:
- List of the placement preferences as key value pairs.
- Corresponds to the C(--placement-pref) option of C(docker service create).
- Requires API version >= 1.27.
type: list
elements: dict
type: dict
version_added: "2.8"
publish:
description:
- List of dictionaries describing the service published ports.
- Corresponds to the C(--publish) option of C(docker service create).
- Requires API version >= 1.25.
type: list
elements: dict
suboptions:
published_port:
description:
- The port to make externally available.
type: int
required: yes
target_port:
description:
- The port inside the container to expose.
type: int
required: yes
protocol:
description:
- What protocol to use.
type: str
default: tcp
choices:
- tcp
- udp
mode:
description:
- What publish mode to use.
- Requires API version >= 1.32.
type: str
choices:
- ingress
- host
read_only:
description:
- Mount the container's root filesystem as read only.
- Corresponds to the C(--read-only) option of C(docker service create).
type: bool
version_added: "2.8"
replicas:
description:
- Number of containers instantiated in the service. Valid only if I(mode) is C(replicated).
- If set to C(-1), and service is not present, service replicas will be set to C(1).
- If set to C(-1), and service is present, service replicas will be unchanged.
- Corresponds to the C(--replicas) option of C(docker service create).
type: int
default: -1
reservations:
description:
- Configures service resource reservations.
suboptions:
cpus:
description:
- Service CPU reservation. C(0) equals no reservation.
- Corresponds to the C(--reserve-cpu) option of C(docker service create).
type: float
memory:
description:
- "Service memory reservation in format C(<number>[<unit>]). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- C(0) equals no reservation.
- Omitting the unit defaults to bytes.
- Corresponds to the C(--reserve-memory) option of C(docker service create).
type: str
type: dict
version_added: "2.8"
reserve_cpu:
description:
- Service CPU reservation. C(0) equals no reservation.
- Corresponds to the C(--reserve-cpu) option of C(docker service create).
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(reservations.cpus) instead.
type: float
reserve_memory:
description:
- "Service memory reservation in format C(<number>[<unit>]). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- C(0) equals no reservation.
- Omitting the unit defaults to bytes.
- Corresponds to the C(--reserve-memory) option of C(docker service create).
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(reservations.memory) instead.
type: str
resolve_image:
description:
- Whether the current image digest should be resolved from the registry and updated if changed.
- Requires API version >= 1.30.
type: bool
default: no
version_added: 2.8
restart_config:
description:
- Configures if and how to restart containers when they exit.
suboptions:
condition:
description:
- Restart condition of the service.
- Corresponds to the C(--restart-condition) option of C(docker service create).
type: str
choices:
- none
- on-failure
- any
delay:
description:
- Delay between restarts.
- "Accepts a a string in a format that look like:
C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
- Corresponds to the C(--restart-delay) option of C(docker service create).
type: str
max_attempts:
description:
- Maximum number of service restarts.
- Corresponds to the C(--restart-max-attempts) option of C(docker service create).
type: int
window:
description:
- Restart policy evaluation window.
- "Accepts a string in a format that look like:
C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
- Corresponds to the C(--restart-window) option of C(docker service create).
type: str
type: dict
version_added: "2.8"
restart_policy:
description:
- Restart condition of the service.
- Corresponds to the C(--restart-condition) option of C(docker service create).
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(restart_config.condition) instead.
type: str
choices:
- none
- on-failure
- any
restart_policy_attempts:
description:
- Maximum number of service restarts.
- Corresponds to the C(--restart-max-attempts) option of C(docker service create).
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(restart_config.max_attempts) instead.
type: int
restart_policy_delay:
description:
- Delay between restarts.
- "Accepts a duration as an integer in nanoseconds or as a string in a format that look like:
C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
- Corresponds to the C(--restart-delay) option of C(docker service create).
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(restart_config.delay) instead.
type: raw
restart_policy_window:
description:
- Restart policy evaluation window.
- "Accepts a duration as an integer in nanoseconds or as a string in a format that look like:
C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
- Corresponds to the C(--restart-window) option of C(docker service create).
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(restart_config.window) instead.
type: raw
rollback_config:
description:
- Configures how the service should be rolled back in case of a failing update.
suboptions:
parallelism:
description:
- The number of containers to roll back at a time. If set to 0, all containers roll back simultaneously.
- Corresponds to the C(--rollback-parallelism) option of C(docker service create).
- Requires API version >= 1.28.
type: int
delay:
description:
- Delay between task rollbacks.
- "Accepts a string in a format that look like:
C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
- Corresponds to the C(--rollback-delay) option of C(docker service create).
- Requires API version >= 1.28.
type: str
failure_action:
description:
- Action to take in case of rollback failure.
- Corresponds to the C(--rollback-failure-action) option of C(docker service create).
- Requires API version >= 1.28.
type: str
choices:
- continue
- pause
monitor:
description:
- Duration after each task rollback to monitor for failure.
- "Accepts a string in a format that look like:
C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
- Corresponds to the C(--rollback-monitor) option of C(docker service create).
- Requires API version >= 1.28.
type: str
max_failure_ratio:
description:
- Fraction of tasks that may fail during a rollback.
- Corresponds to the C(--rollback-max-failure-ratio) option of C(docker service create).
- Requires API version >= 1.28.
type: float
order:
description:
- Specifies the order of operations during rollbacks.
- Corresponds to the C(--rollback-order) option of C(docker service create).
- Requires API version >= 1.29.
type: str
type: dict
version_added: "2.8"
secrets:
description:
- List of dictionaries describing the service secrets.
- Corresponds to the C(--secret) option of C(docker service create).
- Requires API version >= 1.25.
type: list
elements: dict
suboptions:
secret_id:
description:
- Secret's ID.
type: str
secret_name:
description:
- Secret's name as defined at its creation.
type: str
required: yes
filename:
description:
- Name of the file containing the secret. Defaults to the I(secret_name) if not specified.
- Corresponds to the C(target) key of C(docker service create --secret).
type: str
uid:
description:
- UID of the secret file's owner.
type: str
gid:
description:
- GID of the secret file's group.
type: str
mode:
description:
- File access mode inside the container. Must be an octal number (like C(0644) or C(0444)).
type: int
state:
description:
- C(absent) - A service matching the specified name will be removed and have its tasks stopped.
- C(present) - Asserts the existence of a service matching the name and provided configuration parameters.
Unspecified configuration parameters will be set to docker defaults.
type: str
default: present
choices:
- present
- absent
stop_grace_period:
description:
- Time to wait before force killing a container.
- "Accepts a duration as a string in a format that look like:
C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
- Corresponds to the C(--stop-grace-period) option of C(docker service create).
type: str
version_added: "2.8"
stop_signal:
description:
- Override default signal used to stop the container.
- Corresponds to the C(--stop-signal) option of C(docker service create).
type: str
version_added: "2.8"
tty:
description:
- Allocate a pseudo-TTY.
- Corresponds to the C(--tty) option of C(docker service create).
- Requires API version >= 1.25.
type: bool
update_config:
description:
- Configures how the service should be updated. Useful for configuring rolling updates.
suboptions:
parallelism:
description:
- Rolling update parallelism.
- Corresponds to the C(--update-parallelism) option of C(docker service create).
type: int
delay:
description:
- Rolling update delay.
- "Accepts a string in a format that look like:
C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
- Corresponds to the C(--update-delay) option of C(docker service create).
type: str
failure_action:
description:
- Action to take in case of container failure.
- Corresponds to the C(--update-failure-action) option of C(docker service create).
- Usage of I(rollback) requires API version >= 1.29.
type: str
choices:
- continue
- pause
- rollback
monitor:
description:
- Time to monitor updated tasks for failures.
- "Accepts a string in a format that look like:
C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
- Corresponds to the C(--update-monitor) option of C(docker service create).
- Requires API version >= 1.25.
type: str
max_failure_ratio:
description:
- Fraction of tasks that may fail during an update before the failure action is invoked.
- Corresponds to the C(--update-max-failure-ratio) option of C(docker service create).
- Requires API version >= 1.25.
type: float
order:
description:
- Specifies the order of operations when rolling out an updated task.
- Corresponds to the C(--update-order) option of C(docker service create).
- Requires API version >= 1.29.
type: str
type: dict
version_added: "2.8"
update_delay:
description:
- Rolling update delay.
- "Accepts a duration as an integer in nanoseconds or as a string in a format that look like:
C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
- Corresponds to the C(--update-delay) option of C(docker service create).
- Before Ansible 2.8, the default value for this option was C(10).
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(update_config.delay) instead.
type: raw
update_parallelism:
description:
- Rolling update parallelism.
- Corresponds to the C(--update-parallelism) option of C(docker service create).
- Before Ansible 2.8, the default value for this option was C(1).
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(update_config.parallelism) instead.
type: int
update_failure_action:
description:
- Action to take in case of container failure.
- Corresponds to the C(--update-failure-action) option of C(docker service create).
- Usage of I(rollback) requires API version >= 1.29.
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(update_config.failure_action) instead.
type: str
choices:
- continue
- pause
- rollback
update_monitor:
description:
- Time to monitor updated tasks for failures.
- "Accepts a duration as an integer in nanoseconds or as a string in a format that look like:
C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
- Corresponds to the C(--update-monitor) option of C(docker service create).
- Requires API version >= 1.25.
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(update_config.monitor) instead.
type: raw
update_max_failure_ratio:
description:
- Fraction of tasks that may fail during an update before the failure action is invoked.
- Corresponds to the C(--update-max-failure-ratio) option of C(docker service create).
- Requires API version >= 1.25.
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(update_config.max_failure_ratio) instead.
type: float
update_order:
description:
- Specifies the order of operations when rolling out an updated task.
- Corresponds to the C(--update-order) option of C(docker service create).
- Requires API version >= 1.29.
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(update_config.order) instead.
type: str
choices:
- stop-first
- start-first
user:
description:
- Sets the username or UID used for the specified command.
- Before Ansible 2.8, the default value for this option was C(root).
- The default has been removed so that the user defined in the image is used if no user is specified here.
- Corresponds to the C(--user) option of C(docker service create).
type: str
working_dir:
description:
- Path to the working directory.
- Corresponds to the C(--workdir) option of C(docker service create).
type: str
version_added: "2.8"
extends_documentation_fragment:
- docker
- docker.docker_py_2_documentation
requirements:
- "L(Docker SDK for Python,https://docker-py.readthedocs.io/en/stable/) >= 2.0.2"
- "Docker API >= 1.24"
notes:
- "Images will only resolve to the latest digest when using Docker API >= 1.30 and Docker SDK for Python >= 3.2.0.
When using older versions use C(force_update: true) to trigger the swarm to resolve a new image."
'''
RETURN = '''
swarm_service:
returned: always
type: dict
description:
- Dictionary of variables representing the current state of the service.
Matches the module parameters format.
- Note that facts are not part of registered vars but accessible directly.
- Note that before Ansible 2.7.9, the return variable was documented as C(ansible_swarm_service),
while the module actually returned a variable called C(ansible_docker_service). The variable
was renamed to C(swarm_service) in both code and documentation for Ansible 2.7.9 and Ansible 2.8.0.
In Ansible 2.7.x, the old name C(ansible_docker_service) can still be used.
sample: '{
"args": [
"3600"
],
"command": [
"sleep"
],
"configs": null,
"constraints": [
"node.role == manager",
"engine.labels.operatingsystem == ubuntu 14.04"
],
"container_labels": null,
"dns": null,
"dns_options": null,
"dns_search": null,
"endpoint_mode": null,
"env": [
"ENVVAR1=envvar1",
"ENVVAR2=envvar2"
],
"force_update": null,
"groups": null,
"healthcheck": {
"interval": 90000000000,
"retries": 3,
"start_period": 30000000000,
"test": [
"CMD",
"curl",
"--fail",
"http://nginx.host.com"
],
"timeout": 10000000000
},
"healthcheck_disabled": false,
"hostname": null,
"hosts": null,
"image": "alpine:latest@sha256:b3dbf31b77fd99d9c08f780ce6f5282aba076d70a513a8be859d8d3a4d0c92b8",
"labels": {
"com.example.department": "Finance",
"com.example.description": "Accounting webapp"
},
"limit_cpu": 0.5,
"limit_memory": 52428800,
"log_driver": "fluentd",
"log_driver_options": {
"fluentd-address": "127.0.0.1:24224",
"fluentd-async-connect": "true",
"tag": "myservice"
},
"mode": "replicated",
"mounts": [
{
"readonly": false,
"source": "/tmp/",
"target": "/remote_tmp/",
"type": "bind",
"labels": null,
"propagation": null,
"no_copy": null,
"driver_config": null,
"tmpfs_size": null,
"tmpfs_mode": null
}
],
"networks": null,
"placement_preferences": [
{
"spread": "node.labels.mylabel"
}
],
"publish": null,
"read_only": null,
"replicas": 1,
"reserve_cpu": 0.25,
"reserve_memory": 20971520,
"restart_policy": "on-failure",
"restart_policy_attempts": 3,
"restart_policy_delay": 5000000000,
"restart_policy_window": 120000000000,
"secrets": null,
"stop_grace_period": null,
"stop_signal": null,
"tty": null,
"update_delay": 10000000000,
"update_failure_action": null,
"update_max_failure_ratio": null,
"update_monitor": null,
"update_order": "stop-first",
"update_parallelism": 2,
"user": null,
"working_dir": null
}'
changes:
returned: always
description:
- List of changed service attributes if a service has been altered, [] otherwise.
type: list
elements: str
sample: ['container_labels', 'replicas']
rebuilt:
returned: always
description:
- True if the service has been recreated (removed and created)
type: bool
sample: True
'''
EXAMPLES = '''
- name: Set command and arguments
docker_swarm_service:
name: myservice
image: alpine
command: sleep
args:
- "3600"
- name: Set a bind mount
docker_swarm_service:
name: myservice
image: alpine
mounts:
- source: /tmp/
target: /remote_tmp/
type: bind
- name: Set service labels
docker_swarm_service:
name: myservice
image: alpine
labels:
com.example.description: "Accounting webapp"
com.example.department: "Finance"
- name: Set environment variables
docker_swarm_service:
name: myservice
image: alpine
env:
ENVVAR1: envvar1
ENVVAR2: envvar2
env_files:
- envs/common.env
- envs/apps/web.env
- name: Set fluentd logging
docker_swarm_service:
name: myservice
image: alpine
logging:
driver: fluentd
options:
fluentd-address: "127.0.0.1:24224"
fluentd-async-connect: "true"
tag: myservice
- name: Set restart policies
docker_swarm_service:
name: myservice
image: alpine
restart_config:
condition: on-failure
delay: 5s
max_attempts: 3
window: 120s
- name: Set update config
docker_swarm_service:
name: myservice
image: alpine
update_config:
parallelism: 2
delay: 10s
order: stop-first
- name: Set rollback config
docker_swarm_service:
name: myservice
image: alpine
update_config:
failure_action: rollback
rollback_config:
parallelism: 2
delay: 10s
order: stop-first
- name: Set placement preferences
docker_swarm_service:
name: myservice
image: alpine:edge
placement:
preferences:
- spread: node.labels.mylabel
constraints:
- node.role == manager
- engine.labels.operatingsystem == ubuntu 14.04
- name: Set configs
docker_swarm_service:
name: myservice
image: alpine:edge
configs:
- config_name: myconfig_name
filename: "/tmp/config.txt"
- name: Set networks
docker_swarm_service:
name: myservice
image: alpine:edge
networks:
- mynetwork
- name: Set networks as a dictionary
docker_swarm_service:
name: myservice
image: alpine:edge
networks:
- name: "mynetwork"
aliases:
- "mynetwork_alias"
options:
foo: bar
- name: Set secrets
docker_swarm_service:
name: myservice
image: alpine:edge
secrets:
- secret_name: mysecret_name
filename: "/run/secrets/secret.txt"
- name: Start service with healthcheck
docker_swarm_service:
name: myservice
image: nginx:1.13
healthcheck:
# Check if nginx server is healthy by curl'ing the server.
# If this fails or timeouts, the healthcheck fails.
test: ["CMD", "curl", "--fail", "http://nginx.host.com"]
interval: 1m30s
timeout: 10s
retries: 3
start_period: 30s
- name: Configure service resources
docker_swarm_service:
name: myservice
image: alpine:edge
reservations:
cpus: 0.25
memory: 20M
limits:
cpus: 0.50
memory: 50M
- name: Remove service
docker_swarm_service:
name: myservice
state: absent
'''
import shlex
import time
import operator
import traceback
from distutils.version import LooseVersion
from ansible.module_utils.docker.common import (
AnsibleDockerClient,
DifferenceTracker,
DockerBaseClass,
convert_duration_to_nanosecond,
parse_healthcheck,
clean_dict_booleans_for_docker_api,
RequestException,
)
from ansible.module_utils.basic import human_to_bytes
from ansible.module_utils.six import string_types
from ansible.module_utils._text import to_text
try:
from docker import types
from docker.utils import (
parse_repository_tag,
parse_env_file,
format_environment,
)
from docker.errors import (
APIError,
DockerException,
NotFound,
)
except ImportError:
# missing Docker SDK for Python handled in ansible.module_utils.docker.common
pass
def get_docker_environment(env, env_files):
"""
Will return a list of "KEY=VALUE" items. Supplied env variable can
be either a list or a dictionary.
If environment files are combined with explicit environment variables,
the explicit environment variables take precedence.
"""
env_dict = {}
if env_files:
for env_file in env_files:
parsed_env_file = parse_env_file(env_file)
for name, value in parsed_env_file.items():
env_dict[name] = str(value)
if env is not None and isinstance(env, string_types):
env = env.split(',')
if env is not None and isinstance(env, dict):
for name, value in env.items():
if not isinstance(value, string_types):
raise ValueError(
'Non-string value found for env option. '
'Ambiguous env options must be wrapped in quotes to avoid YAML parsing. Key: %s' % name
)
env_dict[name] = str(value)
elif env is not None and isinstance(env, list):
for item in env:
try:
name, value = item.split('=', 1)
except ValueError:
raise ValueError('Invalid environment variable found in list, needs to be in format KEY=VALUE.')
env_dict[name] = value
elif env is not None:
raise ValueError(
'Invalid type for env %s (%s). Only list or dict allowed.' % (env, type(env))
)
env_list = format_environment(env_dict)
if not env_list:
if env is not None or env_files is not None:
return []
else:
return None
return sorted(env_list)
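# Illustrative behaviour (a sketch based on the docstring; not executed by
# the module itself):
#   >>> get_docker_environment({'KEY': 'value'}, None)
#   ['KEY=value']
#   >>> get_docker_environment(['B=2', 'A=1'], None)
#   ['A=1', 'B=2']
#   >>> get_docker_environment(None, None) is None
#   True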
def get_docker_networks(networks, network_ids):
"""
Validate a list of network names or a list of network dictionaries.
Network names will be resolved to ids by using the network_ids mapping.
"""
if networks is None:
return None
parsed_networks = []
for network in networks:
if isinstance(network, string_types):
parsed_network = {'name': network}
elif isinstance(network, dict):
if 'name' not in network:
raise TypeError(
'"name" is required when networks are passed as dictionaries.'
)
name = network.pop('name')
parsed_network = {'name': name}
aliases = network.pop('aliases', None)
if aliases is not None:
if not isinstance(aliases, list):
raise TypeError('"aliases" network option is only allowed as a list')
if not all(
isinstance(alias, string_types) for alias in aliases
):
raise TypeError('Only strings are allowed as network aliases.')
parsed_network['aliases'] = aliases
options = network.pop('options', None)
if options is not None:
if not isinstance(options, dict):
raise TypeError('Only dict is allowed as network options.')
parsed_network['options'] = clean_dict_booleans_for_docker_api(options)
# Check if any invalid keys left
if network:
invalid_keys = ', '.join(network.keys())
raise TypeError(
'%s are not valid keys for the networks option' % invalid_keys
)
else:
raise TypeError(
'Only a list of strings or dictionaries are allowed to be passed as networks.'
)
network_name = parsed_network.pop('name')
try:
parsed_network['id'] = network_ids[network_name]
except KeyError as e:
raise ValueError('Could not find a network named: %s.' % e)
parsed_networks.append(parsed_network)
return parsed_networks or []
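# Illustrative behaviour (a sketch; 'abc123' is a made-up network id):
#   >>> get_docker_networks(['mynet'], {'mynet': 'abc123'})
#   [{'id': 'abc123'}]
#   >>> get_docker_networks([{'name': 'mynet', 'aliases': ['web']}], {'mynet': 'abc123'})
#   [{'aliases': ['web'], 'id': 'abc123'}]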
def get_nanoseconds_from_raw_option(name, value):
if value is None:
return None
elif isinstance(value, int):
return value
elif isinstance(value, string_types):
try:
return int(value)
except ValueError:
return convert_duration_to_nanosecond(value)
else:
raise ValueError(
'Invalid type for %s %s (%s). Only string or int allowed.'
% (name, value, type(value))
)
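# Illustrative behaviour (a sketch): duration strings are parsed via
# convert_duration_to_nanosecond, while plain integers pass through unchanged.
#   >>> get_nanoseconds_from_raw_option('update_delay', '1m30s')
#   90000000000
#   >>> get_nanoseconds_from_raw_option('update_delay', 10)
#   10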
def get_value(key, values, default=None):
value = values.get(key)
return value if value is not None else default
def has_dict_changed(new_dict, old_dict):
"""
Check if new_dict has differences compared to old_dict while
ignoring keys in old_dict which are None in new_dict.
"""
if new_dict is None:
return False
if not new_dict and old_dict:
return True
if not old_dict and new_dict:
return True
defined_options = dict(
(option, value) for option, value in new_dict.items()
if value is not None
)
for option, value in defined_options.items():
old_value = old_dict.get(option)
if not value and not old_value:
continue
if value != old_value:
return True
return False
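# Illustrative behaviour (a sketch): keys that are None in the new dict are
# treated as "not specified" and do not count as a difference.
#   >>> has_dict_changed({'delay': 5, 'monitor': None}, {'delay': 5, 'monitor': 10})
#   False
#   >>> has_dict_changed({'delay': 6}, {'delay': 5})
#   True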
def has_list_changed(new_list, old_list, sort_lists=True, sort_key=None):
"""
Check two lists have differences. Sort lists by default.
"""
def sort_list(unsorted_list):
"""
Sort a given list.
The list may contain dictionaries, so use the sort key to handle them.
"""
if unsorted_list and isinstance(unsorted_list[0], dict):
if not sort_key:
raise Exception(
'A sort key was not specified when sorting list'
)
else:
return sorted(unsorted_list, key=lambda k: k[sort_key])
# Either the list is empty or does not contain dictionaries
try:
return sorted(unsorted_list)
except TypeError:
return unsorted_list
if new_list is None:
return False
old_list = old_list or []
if len(new_list) != len(old_list):
return True
if sort_lists:
zip_data = zip(sort_list(new_list), sort_list(old_list))
else:
zip_data = zip(new_list, old_list)
for new_item, old_item in zip_data:
is_same_type = type(new_item) == type(old_item)
if not is_same_type:
if isinstance(new_item, string_types) and isinstance(old_item, string_types):
# Even though the types are different between these items,
# they are both strings. Try matching on the same string type.
try:
new_item_type = type(new_item)
old_item_casted = new_item_type(old_item)
if new_item != old_item_casted:
return True
else:
continue
except UnicodeEncodeError:
# Fallback to assuming the strings are different
return True
else:
return True
if isinstance(new_item, dict):
if has_dict_changed(new_item, old_item):
return True
elif new_item != old_item:
return True
return False
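# Illustrative behaviour (a sketch): order is ignored by default, and lists
# of dicts are compared per item via has_dict_changed.
#   >>> has_list_changed(['a', 'b'], ['b', 'a'])
#   False
#   >>> has_list_changed(['a', 'b'], ['b', 'a'], sort_lists=False)
#   True
#   >>> has_list_changed([{'target': '/x'}], [{'target': '/y'}], sort_key='target')
#   True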
def have_networks_changed(new_networks, old_networks):
"""Special case list checking for networks to sort aliases"""
if new_networks is None:
return False
old_networks = old_networks or []
if len(new_networks) != len(old_networks):
return True
zip_data = zip(
sorted(new_networks, key=lambda k: k['id']),
sorted(old_networks, key=lambda k: k['id'])
)
for new_item, old_item in zip_data:
new_item = dict(new_item)
old_item = dict(old_item)
# Sort the aliases
if 'aliases' in new_item:
new_item['aliases'] = sorted(new_item['aliases'] or [])
if 'aliases' in old_item:
old_item['aliases'] = sorted(old_item['aliases'] or [])
if has_dict_changed(new_item, old_item):
return True
return False
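# Illustrative behaviour (a sketch; 'n1' is a made-up network id): alias
# order does not count as a change.
#   >>> have_networks_changed([{'id': 'n1', 'aliases': ['b', 'a']}],
#   ...                       [{'id': 'n1', 'aliases': ['a', 'b']}])
#   False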
class DockerService(DockerBaseClass):
def __init__(self, docker_api_version, docker_py_version):
super(DockerService, self).__init__()
self.image = ""
self.command = None
self.args = None
self.endpoint_mode = None
self.dns = None
self.healthcheck = None
self.healthcheck_disabled = None
self.hostname = None
self.hosts = None
self.tty = None
self.dns_search = None
self.dns_options = None
self.env = None
self.force_update = None
self.groups = None
self.log_driver = None
self.log_driver_options = None
self.labels = None
self.container_labels = None
self.limit_cpu = None
self.limit_memory = None
self.reserve_cpu = None
self.reserve_memory = None
self.mode = "replicated"
self.user = None
self.mounts = None
self.configs = None
self.secrets = None
self.constraints = None
self.networks = None
self.stop_grace_period = None
self.stop_signal = None
self.publish = None
self.placement_preferences = None
self.replicas = -1
self.service_id = False
self.service_version = False
self.read_only = None
self.restart_policy = None
self.restart_policy_attempts = None
self.restart_policy_delay = None
self.restart_policy_window = None
self.rollback_config = None
self.update_delay = None
self.update_parallelism = None
self.update_failure_action = None
self.update_monitor = None
self.update_max_failure_ratio = None
self.update_order = None
self.working_dir = None
self.docker_api_version = docker_api_version
self.docker_py_version = docker_py_version
def get_facts(self):
return {
'image': self.image,
'mounts': self.mounts,
'configs': self.configs,
'networks': self.networks,
'command': self.command,
'args': self.args,
'tty': self.tty,
'dns': self.dns,
'dns_search': self.dns_search,
'dns_options': self.dns_options,
'healthcheck': self.healthcheck,
'healthcheck_disabled': self.healthcheck_disabled,
'hostname': self.hostname,
'hosts': self.hosts,
'env': self.env,
'force_update': self.force_update,
'groups': self.groups,
'log_driver': self.log_driver,
'log_driver_options': self.log_driver_options,
'publish': self.publish,
'constraints': self.constraints,
'placement_preferences': self.placement_preferences,
'labels': self.labels,
'container_labels': self.container_labels,
'mode': self.mode,
'replicas': self.replicas,
'endpoint_mode': self.endpoint_mode,
'restart_policy': self.restart_policy,
'secrets': self.secrets,
'stop_grace_period': self.stop_grace_period,
'stop_signal': self.stop_signal,
'limit_cpu': self.limit_cpu,
'limit_memory': self.limit_memory,
'read_only': self.read_only,
'reserve_cpu': self.reserve_cpu,
'reserve_memory': self.reserve_memory,
'restart_policy_delay': self.restart_policy_delay,
'restart_policy_attempts': self.restart_policy_attempts,
'restart_policy_window': self.restart_policy_window,
'rollback_config': self.rollback_config,
'update_delay': self.update_delay,
'update_parallelism': self.update_parallelism,
'update_failure_action': self.update_failure_action,
'update_monitor': self.update_monitor,
'update_max_failure_ratio': self.update_max_failure_ratio,
'update_order': self.update_order,
'user': self.user,
'working_dir': self.working_dir,
}
@property
def can_update_networks(self):
# Before Docker API 1.29 adding/removing networks was not supported
return (
self.docker_api_version >= LooseVersion('1.29') and
self.docker_py_version >= LooseVersion('2.7')
)
@property
def can_use_task_template_networks(self):
# In Docker API 1.25 attaching networks to TaskTemplate is preferred over Spec
return (
self.docker_api_version >= LooseVersion('1.25') and
self.docker_py_version >= LooseVersion('2.7')
)
@staticmethod
def get_restart_config_from_ansible_params(params):
restart_config = params['restart_config'] or {}
condition = get_value(
'condition',
restart_config,
default=params['restart_policy']
)
delay = get_value(
'delay',
restart_config,
default=params['restart_policy_delay']
)
delay = get_nanoseconds_from_raw_option(
'restart_policy_delay',
delay
)
max_attempts = get_value(
'max_attempts',
restart_config,
default=params['restart_policy_attempts']
)
window = get_value(
'window',
restart_config,
default=params['restart_policy_window']
)
window = get_nanoseconds_from_raw_option(
'restart_policy_window',
window
)
return {
'restart_policy': condition,
'restart_policy_delay': delay,
'restart_policy_attempts': max_attempts,
'restart_policy_window': window
}
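    # Illustrative precedence (a sketch with made-up module params): given
    # restart_config={'condition': 'on-failure'} and the legacy parameter
    # restart_policy='any', the nested option wins and the result contains
    # 'restart_policy': 'on-failure'; options missing from restart_config
    # fall back to the legacy top-level parameters.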
@staticmethod
def get_update_config_from_ansible_params(params):
update_config = params['update_config'] or {}
parallelism = get_value(
'parallelism',
update_config,
default=params['update_parallelism']
)
delay = get_value(
'delay',
update_config,
default=params['update_delay']
)
delay = get_nanoseconds_from_raw_option(
'update_delay',
delay
)
failure_action = get_value(
'failure_action',
update_config,
default=params['update_failure_action']
)
monitor = get_value(
'monitor',
update_config,
default=params['update_monitor']
)
monitor = get_nanoseconds_from_raw_option(
'update_monitor',
monitor
)
max_failure_ratio = get_value(
'max_failure_ratio',
update_config,
default=params['update_max_failure_ratio']
)
order = get_value(
'order',
update_config,
default=params['update_order']
)
return {
'update_parallelism': parallelism,
'update_delay': delay,
'update_failure_action': failure_action,
'update_monitor': monitor,
'update_max_failure_ratio': max_failure_ratio,
'update_order': order
}
@staticmethod
def get_rollback_config_from_ansible_params(params):
if params['rollback_config'] is None:
return None
rollback_config = params['rollback_config'] or {}
delay = get_nanoseconds_from_raw_option(
'rollback_config.delay',
rollback_config.get('delay')
)
monitor = get_nanoseconds_from_raw_option(
'rollback_config.monitor',
rollback_config.get('monitor')
)
return {
'parallelism': rollback_config.get('parallelism'),
'delay': delay,
'failure_action': rollback_config.get('failure_action'),
'monitor': monitor,
'max_failure_ratio': rollback_config.get('max_failure_ratio'),
'order': rollback_config.get('order'),
}
@staticmethod
def get_logging_from_ansible_params(params):
logging_config = params['logging'] or {}
driver = get_value(
'driver',
logging_config,
default=params['log_driver']
)
options = get_value(
'options',
logging_config,
default=params['log_driver_options']
)
return {
'log_driver': driver,
'log_driver_options': options,
}
@staticmethod
def get_limits_from_ansible_params(params):
limits = params['limits'] or {}
cpus = get_value(
'cpus',
limits,
default=params['limit_cpu']
)
memory = get_value(
'memory',
limits,
default=params['limit_memory']
)
if memory is not None:
try:
memory = human_to_bytes(memory)
except ValueError as exc:
raise Exception('Failed to convert limit_memory to bytes: %s' % exc)
return {
'limit_cpu': cpus,
'limit_memory': memory,
}
@staticmethod
def get_reservations_from_ansible_params(params):
reservations = params['reservations'] or {}
cpus = get_value(
'cpus',
reservations,
default=params['reserve_cpu']
)
memory = get_value(
'memory',
reservations,
default=params['reserve_memory']
)
if memory is not None:
try:
memory = human_to_bytes(memory)
except ValueError as exc:
raise Exception('Failed to convert reserve_memory to bytes: %s' % exc)
return {
'reserve_cpu': cpus,
'reserve_memory': memory,
}
@staticmethod
def get_placement_from_ansible_params(params):
placement = params['placement'] or {}
constraints = get_value(
'constraints',
placement,
default=params['constraints']
)
preferences = placement.get('preferences')
return {
'constraints': constraints,
'placement_preferences': preferences,
}
@classmethod
def from_ansible_params(
cls,
ap,
old_service,
image_digest,
secret_ids,
config_ids,
network_ids,
docker_api_version,
docker_py_version,
):
s = DockerService(docker_api_version, docker_py_version)
s.image = image_digest
s.args = ap['args']
s.endpoint_mode = ap['endpoint_mode']
s.dns = ap['dns']
s.dns_search = ap['dns_search']
s.dns_options = ap['dns_options']
s.healthcheck, s.healthcheck_disabled = parse_healthcheck(ap['healthcheck'])
s.hostname = ap['hostname']
s.hosts = ap['hosts']
s.tty = ap['tty']
s.labels = ap['labels']
s.container_labels = ap['container_labels']
s.mode = ap['mode']
s.stop_signal = ap['stop_signal']
s.user = ap['user']
s.working_dir = ap['working_dir']
s.read_only = ap['read_only']
s.networks = get_docker_networks(ap['networks'], network_ids)
s.command = ap['command']
if isinstance(s.command, string_types):
s.command = shlex.split(s.command)
elif isinstance(s.command, list):
invalid_items = [
(index, item)
for index, item in enumerate(s.command)
if not isinstance(item, string_types)
]
if invalid_items:
errors = ', '.join(
[
'%s (%s) at index %s' % (item, type(item), index)
for index, item in invalid_items
]
)
raise Exception(
'All items in a command list need to be strings. '
'Check quoting. Invalid items: %s.'
% errors
)
s.command = ap['command']
elif s.command is not None:
raise ValueError(
'Invalid type for command %s (%s). '
'Only string or list allowed. Check quoting.'
% (s.command, type(s.command))
)
s.env = get_docker_environment(ap['env'], ap['env_files'])
s.rollback_config = cls.get_rollback_config_from_ansible_params(ap)
update_config = cls.get_update_config_from_ansible_params(ap)
for key, value in update_config.items():
setattr(s, key, value)
restart_config = cls.get_restart_config_from_ansible_params(ap)
for key, value in restart_config.items():
setattr(s, key, value)
logging_config = cls.get_logging_from_ansible_params(ap)
for key, value in logging_config.items():
setattr(s, key, value)
limits = cls.get_limits_from_ansible_params(ap)
for key, value in limits.items():
setattr(s, key, value)
reservations = cls.get_reservations_from_ansible_params(ap)
for key, value in reservations.items():
setattr(s, key, value)
placement = cls.get_placement_from_ansible_params(ap)
for key, value in placement.items():
setattr(s, key, value)
if ap['stop_grace_period'] is not None:
s.stop_grace_period = convert_duration_to_nanosecond(ap['stop_grace_period'])
if ap['force_update']:
s.force_update = int(str(time.time()).replace('.', ''))
if ap['groups'] is not None:
# In case integers are passed as groups, we need to convert them to
# strings as docker internally treats them as strings.
s.groups = [str(g) for g in ap['groups']]
if ap['replicas'] == -1:
if old_service:
s.replicas = old_service.replicas
else:
s.replicas = 1
else:
s.replicas = ap['replicas']
if ap['publish'] is not None:
s.publish = []
for param_p in ap['publish']:
service_p = {}
service_p['protocol'] = param_p['protocol']
service_p['mode'] = param_p['mode']
service_p['published_port'] = param_p['published_port']
service_p['target_port'] = param_p['target_port']
s.publish.append(service_p)
if ap['mounts'] is not None:
s.mounts = []
for param_m in ap['mounts']:
service_m = {}
service_m['readonly'] = param_m['readonly']
service_m['type'] = param_m['type']
if param_m['source'] is None and param_m['type'] != 'tmpfs':
raise ValueError('Source must be specified for mounts which are not of type tmpfs')
service_m['source'] = param_m['source'] or ''
service_m['target'] = param_m['target']
service_m['labels'] = param_m['labels']
service_m['no_copy'] = param_m['no_copy']
service_m['propagation'] = param_m['propagation']
service_m['driver_config'] = param_m['driver_config']
service_m['tmpfs_mode'] = param_m['tmpfs_mode']
tmpfs_size = param_m['tmpfs_size']
if tmpfs_size is not None:
try:
tmpfs_size = human_to_bytes(tmpfs_size)
except ValueError as exc:
raise ValueError(
'Failed to convert tmpfs_size to bytes: %s' % exc
)
service_m['tmpfs_size'] = tmpfs_size
s.mounts.append(service_m)
if ap['configs'] is not None:
s.configs = []
for param_m in ap['configs']:
service_c = {}
config_name = param_m['config_name']
service_c['config_id'] = param_m['config_id'] or config_ids[config_name]
service_c['config_name'] = config_name
service_c['filename'] = param_m['filename'] or config_name
service_c['uid'] = param_m['uid']
service_c['gid'] = param_m['gid']
service_c['mode'] = param_m['mode']
s.configs.append(service_c)
if ap['secrets'] is not None:
s.secrets = []
for param_m in ap['secrets']:
service_s = {}
secret_name = param_m['secret_name']
service_s['secret_id'] = param_m['secret_id'] or secret_ids[secret_name]
service_s['secret_name'] = secret_name
service_s['filename'] = param_m['filename'] or secret_name
service_s['uid'] = param_m['uid']
service_s['gid'] = param_m['gid']
service_s['mode'] = param_m['mode']
s.secrets.append(service_s)
return s
def compare(self, os):
differences = DifferenceTracker()
needs_rebuild = False
force_update = False
if self.endpoint_mode is not None and self.endpoint_mode != os.endpoint_mode:
differences.add('endpoint_mode', parameter=self.endpoint_mode, active=os.endpoint_mode)
if has_list_changed(self.env, os.env):
differences.add('env', parameter=self.env, active=os.env)
if self.log_driver is not None and self.log_driver != os.log_driver:
differences.add('log_driver', parameter=self.log_driver, active=os.log_driver)
if self.log_driver_options is not None and self.log_driver_options != (os.log_driver_options or {}):
differences.add('log_opt', parameter=self.log_driver_options, active=os.log_driver_options)
if self.mode != os.mode:
needs_rebuild = True
differences.add('mode', parameter=self.mode, active=os.mode)
if has_list_changed(self.mounts, os.mounts, sort_key='target'):
differences.add('mounts', parameter=self.mounts, active=os.mounts)
if has_list_changed(self.configs, os.configs, sort_key='config_name'):
differences.add('configs', parameter=self.configs, active=os.configs)
if has_list_changed(self.secrets, os.secrets, sort_key='secret_name'):
differences.add('secrets', parameter=self.secrets, active=os.secrets)
if have_networks_changed(self.networks, os.networks):
differences.add('networks', parameter=self.networks, active=os.networks)
needs_rebuild = not self.can_update_networks
if self.replicas != os.replicas:
differences.add('replicas', parameter=self.replicas, active=os.replicas)
if has_list_changed(self.command, os.command, sort_lists=False):
differences.add('command', parameter=self.command, active=os.command)
if has_list_changed(self.args, os.args, sort_lists=False):
differences.add('args', parameter=self.args, active=os.args)
if has_list_changed(self.constraints, os.constraints):
differences.add('constraints', parameter=self.constraints, active=os.constraints)
if has_list_changed(self.placement_preferences, os.placement_preferences, sort_lists=False):
differences.add('placement_preferences', parameter=self.placement_preferences, active=os.placement_preferences)
if has_list_changed(self.groups, os.groups):
differences.add('groups', parameter=self.groups, active=os.groups)
if self.labels is not None and self.labels != (os.labels or {}):
differences.add('labels', parameter=self.labels, active=os.labels)
if self.limit_cpu is not None and self.limit_cpu != os.limit_cpu:
differences.add('limit_cpu', parameter=self.limit_cpu, active=os.limit_cpu)
if self.limit_memory is not None and self.limit_memory != os.limit_memory:
differences.add('limit_memory', parameter=self.limit_memory, active=os.limit_memory)
if self.reserve_cpu is not None and self.reserve_cpu != os.reserve_cpu:
differences.add('reserve_cpu', parameter=self.reserve_cpu, active=os.reserve_cpu)
if self.reserve_memory is not None and self.reserve_memory != os.reserve_memory:
differences.add('reserve_memory', parameter=self.reserve_memory, active=os.reserve_memory)
if self.container_labels is not None and self.container_labels != (os.container_labels or {}):
differences.add('container_labels', parameter=self.container_labels, active=os.container_labels)
if self.stop_signal is not None and self.stop_signal != os.stop_signal:
differences.add('stop_signal', parameter=self.stop_signal, active=os.stop_signal)
if self.stop_grace_period is not None and self.stop_grace_period != os.stop_grace_period:
differences.add('stop_grace_period', parameter=self.stop_grace_period, active=os.stop_grace_period)
if self.has_publish_changed(os.publish):
differences.add('publish', parameter=self.publish, active=os.publish)
if self.read_only is not None and self.read_only != os.read_only:
differences.add('read_only', parameter=self.read_only, active=os.read_only)
if self.restart_policy is not None and self.restart_policy != os.restart_policy:
differences.add('restart_policy', parameter=self.restart_policy, active=os.restart_policy)
if self.restart_policy_attempts is not None and self.restart_policy_attempts != os.restart_policy_attempts:
differences.add('restart_policy_attempts', parameter=self.restart_policy_attempts, active=os.restart_policy_attempts)
if self.restart_policy_delay is not None and self.restart_policy_delay != os.restart_policy_delay:
differences.add('restart_policy_delay', parameter=self.restart_policy_delay, active=os.restart_policy_delay)
if self.restart_policy_window is not None and self.restart_policy_window != os.restart_policy_window:
differences.add('restart_policy_window', parameter=self.restart_policy_window, active=os.restart_policy_window)
if has_dict_changed(self.rollback_config, os.rollback_config):
differences.add('rollback_config', parameter=self.rollback_config, active=os.rollback_config)
if self.update_delay is not None and self.update_delay != os.update_delay:
differences.add('update_delay', parameter=self.update_delay, active=os.update_delay)
if self.update_parallelism is not None and self.update_parallelism != os.update_parallelism:
differences.add('update_parallelism', parameter=self.update_parallelism, active=os.update_parallelism)
if self.update_failure_action is not None and self.update_failure_action != os.update_failure_action:
differences.add('update_failure_action', parameter=self.update_failure_action, active=os.update_failure_action)
if self.update_monitor is not None and self.update_monitor != os.update_monitor:
differences.add('update_monitor', parameter=self.update_monitor, active=os.update_monitor)
if self.update_max_failure_ratio is not None and self.update_max_failure_ratio != os.update_max_failure_ratio:
differences.add('update_max_failure_ratio', parameter=self.update_max_failure_ratio, active=os.update_max_failure_ratio)
if self.update_order is not None and self.update_order != os.update_order:
differences.add('update_order', parameter=self.update_order, active=os.update_order)
has_image_changed, change = self.has_image_changed(os.image)
if has_image_changed:
differences.add('image', parameter=self.image, active=change)
if self.user and self.user != os.user:
differences.add('user', parameter=self.user, active=os.user)
if has_list_changed(self.dns, os.dns, sort_lists=False):
differences.add('dns', parameter=self.dns, active=os.dns)
if has_list_changed(self.dns_search, os.dns_search, sort_lists=False):
differences.add('dns_search', parameter=self.dns_search, active=os.dns_search)
if has_list_changed(self.dns_options, os.dns_options):
differences.add('dns_options', parameter=self.dns_options, active=os.dns_options)
if self.has_healthcheck_changed(os):
differences.add('healthcheck', parameter=self.healthcheck, active=os.healthcheck)
if self.hostname is not None and self.hostname != os.hostname:
differences.add('hostname', parameter=self.hostname, active=os.hostname)
if self.hosts is not None and self.hosts != (os.hosts or {}):
differences.add('hosts', parameter=self.hosts, active=os.hosts)
if self.tty is not None and self.tty != os.tty:
differences.add('tty', parameter=self.tty, active=os.tty)
if self.working_dir is not None and self.working_dir != os.working_dir:
differences.add('working_dir', parameter=self.working_dir, active=os.working_dir)
if self.force_update:
force_update = True
return not differences.empty or force_update, differences, needs_rebuild, force_update
    def has_healthcheck_changed(self, old_service):
        # A healthcheck that is explicitly disabled and one that is absent
        # on the running service are treated as equal.
        if self.healthcheck_disabled is False and self.healthcheck is None:
            return False
        if self.healthcheck_disabled and old_service.healthcheck is None:
            return False
        return self.healthcheck != old_service.healthcheck
def has_publish_changed(self, old_publish):
if self.publish is None:
return False
old_publish = old_publish or []
if len(self.publish) != len(old_publish):
return True
publish_sorter = operator.itemgetter('published_port', 'target_port', 'protocol')
publish = sorted(self.publish, key=publish_sorter)
old_publish = sorted(old_publish, key=publish_sorter)
for publish_item, old_publish_item in zip(publish, old_publish):
ignored_keys = set()
if not publish_item.get('mode'):
ignored_keys.add('mode')
# Create copies of publish_item dicts where keys specified in ignored_keys are left out
filtered_old_publish_item = dict(
(k, v) for k, v in old_publish_item.items() if k not in ignored_keys
)
filtered_publish_item = dict(
(k, v) for k, v in publish_item.items() if k not in ignored_keys
)
if filtered_publish_item != filtered_old_publish_item:
return True
return False
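    # Illustrative behaviour (a sketch): when 'mode' is unset on a desired
    # publish entry, the mode reported by the daemon (e.g. 'ingress') is
    # ignored, so the following pair does not count as a change:
    #   new: {'published_port': 80, 'target_port': 8080, 'protocol': 'tcp', 'mode': None}
    #   old: {'published_port': 80, 'target_port': 8080, 'protocol': 'tcp', 'mode': 'ingress'}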
def has_image_changed(self, old_image):
if '@' not in self.image:
old_image = old_image.split('@')[0]
return self.image != old_image, old_image
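    # Illustrative behaviour (a sketch): if the desired image carries no
    # '@digest' suffix, the digest is stripped from the running image before
    # comparing, so a desired 'nginx:1.13' matches a running
    # 'nginx:1.13@sha256:...' and the method returns (False, 'nginx:1.13').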
def build_container_spec(self):
mounts = None
if self.mounts is not None:
mounts = []
for mount_config in self.mounts:
mount_options = {
'target': 'target',
'source': 'source',
'type': 'type',
'readonly': 'read_only',
'propagation': 'propagation',
'labels': 'labels',
'no_copy': 'no_copy',
'driver_config': 'driver_config',
'tmpfs_size': 'tmpfs_size',
'tmpfs_mode': 'tmpfs_mode'
}
mount_args = {}
for option, mount_arg in mount_options.items():
value = mount_config.get(option)
if value is not None:
mount_args[mount_arg] = value
mounts.append(types.Mount(**mount_args))
configs = None
if self.configs is not None:
configs = []
for config_config in self.configs:
config_args = {
'config_id': config_config['config_id'],
'config_name': config_config['config_name']
}
filename = config_config.get('filename')
if filename:
config_args['filename'] = filename
uid = config_config.get('uid')
if uid:
config_args['uid'] = uid
gid = config_config.get('gid')
if gid:
config_args['gid'] = gid
mode = config_config.get('mode')
if mode:
config_args['mode'] = mode
configs.append(types.ConfigReference(**config_args))
secrets = None
if self.secrets is not None:
secrets = []
for secret_config in self.secrets:
secret_args = {
'secret_id': secret_config['secret_id'],
'secret_name': secret_config['secret_name']
}
filename = secret_config.get('filename')
if filename:
secret_args['filename'] = filename
uid = secret_config.get('uid')
if uid:
secret_args['uid'] = uid
gid = secret_config.get('gid')
if gid:
secret_args['gid'] = gid
mode = secret_config.get('mode')
if mode:
secret_args['mode'] = mode
secrets.append(types.SecretReference(**secret_args))
dns_config_args = {}
if self.dns is not None:
dns_config_args['nameservers'] = self.dns
if self.dns_search is not None:
dns_config_args['search'] = self.dns_search
if self.dns_options is not None:
dns_config_args['options'] = self.dns_options
dns_config = types.DNSConfig(**dns_config_args) if dns_config_args else None
container_spec_args = {}
if self.command is not None:
container_spec_args['command'] = self.command
if self.args is not None:
container_spec_args['args'] = self.args
if self.env is not None:
container_spec_args['env'] = self.env
if self.user is not None:
container_spec_args['user'] = self.user
if self.container_labels is not None:
container_spec_args['labels'] = self.container_labels
if self.healthcheck is not None:
container_spec_args['healthcheck'] = types.Healthcheck(**self.healthcheck)
if self.hostname is not None:
container_spec_args['hostname'] = self.hostname
if self.hosts is not None:
container_spec_args['hosts'] = self.hosts
if self.read_only is not None:
container_spec_args['read_only'] = self.read_only
if self.stop_grace_period is not None:
container_spec_args['stop_grace_period'] = self.stop_grace_period
if self.stop_signal is not None:
container_spec_args['stop_signal'] = self.stop_signal
if self.tty is not None:
container_spec_args['tty'] = self.tty
if self.groups is not None:
container_spec_args['groups'] = self.groups
if self.working_dir is not None:
container_spec_args['workdir'] = self.working_dir
if secrets is not None:
container_spec_args['secrets'] = secrets
if mounts is not None:
container_spec_args['mounts'] = mounts
if dns_config is not None:
container_spec_args['dns_config'] = dns_config
if configs is not None:
container_spec_args['configs'] = configs
return types.ContainerSpec(self.image, **container_spec_args)
def build_placement(self):
placement_args = {}
if self.constraints is not None:
placement_args['constraints'] = self.constraints
if self.placement_preferences is not None:
placement_args['preferences'] = [
{key.title(): {'SpreadDescriptor': value}}
for preference in self.placement_preferences
for key, value in preference.items()
]
return types.Placement(**placement_args) if placement_args else None
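    # Illustrative transformation (a sketch): a preference given as
    # {'spread': 'node.labels.zone'} becomes the docker-py form
    # {'Spread': {'SpreadDescriptor': 'node.labels.zone'}}.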
def build_update_config(self):
update_config_args = {}
if self.update_parallelism is not None:
update_config_args['parallelism'] = self.update_parallelism
if self.update_delay is not None:
update_config_args['delay'] = self.update_delay
if self.update_failure_action is not None:
update_config_args['failure_action'] = self.update_failure_action
if self.update_monitor is not None:
update_config_args['monitor'] = self.update_monitor
if self.update_max_failure_ratio is not None:
update_config_args['max_failure_ratio'] = self.update_max_failure_ratio
if self.update_order is not None:
update_config_args['order'] = self.update_order
return types.UpdateConfig(**update_config_args) if update_config_args else None
def build_log_driver(self):
log_driver_args = {}
if self.log_driver is not None:
log_driver_args['name'] = self.log_driver
if self.log_driver_options is not None:
log_driver_args['options'] = self.log_driver_options
return types.DriverConfig(**log_driver_args) if log_driver_args else None
def build_restart_policy(self):
restart_policy_args = {}
if self.restart_policy is not None:
restart_policy_args['condition'] = self.restart_policy
if self.restart_policy_delay is not None:
restart_policy_args['delay'] = self.restart_policy_delay
if self.restart_policy_attempts is not None:
restart_policy_args['max_attempts'] = self.restart_policy_attempts
if self.restart_policy_window is not None:
restart_policy_args['window'] = self.restart_policy_window
return types.RestartPolicy(**restart_policy_args) if restart_policy_args else None
def build_rollback_config(self):
if self.rollback_config is None:
return None
rollback_config_options = [
'parallelism',
'delay',
'failure_action',
'monitor',
'max_failure_ratio',
'order',
]
rollback_config_args = {}
for option in rollback_config_options:
value = self.rollback_config.get(option)
if value is not None:
rollback_config_args[option] = value
return types.RollbackConfig(**rollback_config_args) if rollback_config_args else None
def build_resources(self):
resources_args = {}
if self.limit_cpu is not None:
resources_args['cpu_limit'] = int(self.limit_cpu * 1000000000.0)
if self.limit_memory is not None:
resources_args['mem_limit'] = self.limit_memory
if self.reserve_cpu is not None:
resources_args['cpu_reservation'] = int(self.reserve_cpu * 1000000000.0)
if self.reserve_memory is not None:
resources_args['mem_reservation'] = self.reserve_memory
return types.Resources(**resources_args) if resources_args else None
def build_task_template(self, container_spec, placement=None):
log_driver = self.build_log_driver()
restart_policy = self.build_restart_policy()
resources = self.build_resources()
task_template_args = {}
if placement is not None:
task_template_args['placement'] = placement
if log_driver is not None:
task_template_args['log_driver'] = log_driver
if restart_policy is not None:
task_template_args['restart_policy'] = restart_policy
if resources is not None:
task_template_args['resources'] = resources
if self.force_update:
task_template_args['force_update'] = self.force_update
if self.can_use_task_template_networks:
networks = self.build_networks()
if networks:
task_template_args['networks'] = networks
return types.TaskTemplate(container_spec=container_spec, **task_template_args)
def build_service_mode(self):
if self.mode == 'global':
self.replicas = None
return types.ServiceMode(self.mode, replicas=self.replicas)
def build_networks(self):
networks = None
if self.networks is not None:
networks = []
for network in self.networks:
docker_network = {'Target': network['id']}
if 'aliases' in network:
docker_network['Aliases'] = network['aliases']
if 'options' in network:
docker_network['DriverOpts'] = network['options']
networks.append(docker_network)
return networks
def build_endpoint_spec(self):
endpoint_spec_args = {}
if self.publish is not None:
ports = []
for port in self.publish:
port_spec = {
'Protocol': port['protocol'],
'PublishedPort': port['published_port'],
'TargetPort': port['target_port']
}
if port.get('mode'):
port_spec['PublishMode'] = port['mode']
ports.append(port_spec)
endpoint_spec_args['ports'] = ports
if self.endpoint_mode is not None:
endpoint_spec_args['mode'] = self.endpoint_mode
return types.EndpointSpec(**endpoint_spec_args) if endpoint_spec_args else None
def build_docker_service(self):
container_spec = self.build_container_spec()
placement = self.build_placement()
task_template = self.build_task_template(container_spec, placement)
update_config = self.build_update_config()
rollback_config = self.build_rollback_config()
service_mode = self.build_service_mode()
endpoint_spec = self.build_endpoint_spec()
service = {'task_template': task_template, 'mode': service_mode}
if update_config:
service['update_config'] = update_config
if rollback_config:
service['rollback_config'] = rollback_config
if endpoint_spec:
service['endpoint_spec'] = endpoint_spec
if self.labels:
service['labels'] = self.labels
if not self.can_use_task_template_networks:
networks = self.build_networks()
if networks:
service['networks'] = networks
return service
class DockerServiceManager(object):
def __init__(self, client):
self.client = client
self.retries = 2
self.diff_tracker = None
def get_service(self, name):
try:
raw_data = self.client.inspect_service(name)
except NotFound:
return None
ds = DockerService(self.client.docker_api_version, self.client.docker_py_version)
task_template_data = raw_data['Spec']['TaskTemplate']
ds.image = task_template_data['ContainerSpec']['Image']
ds.user = task_template_data['ContainerSpec'].get('User')
ds.env = task_template_data['ContainerSpec'].get('Env')
ds.command = task_template_data['ContainerSpec'].get('Command')
ds.args = task_template_data['ContainerSpec'].get('Args')
ds.groups = task_template_data['ContainerSpec'].get('Groups')
ds.stop_grace_period = task_template_data['ContainerSpec'].get('StopGracePeriod')
ds.stop_signal = task_template_data['ContainerSpec'].get('StopSignal')
ds.working_dir = task_template_data['ContainerSpec'].get('Dir')
ds.read_only = task_template_data['ContainerSpec'].get('ReadOnly')
healthcheck_data = task_template_data['ContainerSpec'].get('Healthcheck')
if healthcheck_data:
options = ['test', 'interval', 'timeout', 'start_period', 'retries']
healthcheck = dict(
(key.lower(), value) for key, value in healthcheck_data.items()
if value is not None and key.lower() in options
)
ds.healthcheck = healthcheck
update_config_data = raw_data['Spec'].get('UpdateConfig')
if update_config_data:
ds.update_delay = update_config_data.get('Delay')
ds.update_parallelism = update_config_data.get('Parallelism')
ds.update_failure_action = update_config_data.get('FailureAction')
ds.update_monitor = update_config_data.get('Monitor')
ds.update_max_failure_ratio = update_config_data.get('MaxFailureRatio')
ds.update_order = update_config_data.get('Order')
rollback_config_data = raw_data['Spec'].get('RollbackConfig')
if rollback_config_data:
ds.rollback_config = {
'parallelism': rollback_config_data.get('Parallelism'),
'delay': rollback_config_data.get('Delay'),
'failure_action': rollback_config_data.get('FailureAction'),
'monitor': rollback_config_data.get('Monitor'),
'max_failure_ratio': rollback_config_data.get('MaxFailureRatio'),
'order': rollback_config_data.get('Order'),
}
dns_config = task_template_data['ContainerSpec'].get('DNSConfig')
if dns_config:
ds.dns = dns_config.get('Nameservers')
ds.dns_search = dns_config.get('Search')
ds.dns_options = dns_config.get('Options')
ds.hostname = task_template_data['ContainerSpec'].get('Hostname')
hosts = task_template_data['ContainerSpec'].get('Hosts')
if hosts:
hosts = [
list(reversed(host.split(":", 1)))
if ":" in host
else host.split(" ", 1)
for host in hosts
]
ds.hosts = dict((hostname, ip) for ip, hostname in hosts)
ds.tty = task_template_data['ContainerSpec'].get('TTY')
placement = task_template_data.get('Placement')
if placement:
ds.constraints = placement.get('Constraints')
placement_preferences = []
for preference in placement.get('Preferences', []):
placement_preferences.append(
dict(
(key.lower(), value['SpreadDescriptor'])
for key, value in preference.items()
)
)
ds.placement_preferences = placement_preferences or None
restart_policy_data = task_template_data.get('RestartPolicy')
if restart_policy_data:
ds.restart_policy = restart_policy_data.get('Condition')
ds.restart_policy_delay = restart_policy_data.get('Delay')
ds.restart_policy_attempts = restart_policy_data.get('MaxAttempts')
ds.restart_policy_window = restart_policy_data.get('Window')
raw_data_endpoint_spec = raw_data['Spec'].get('EndpointSpec')
if raw_data_endpoint_spec:
ds.endpoint_mode = raw_data_endpoint_spec.get('Mode')
raw_data_ports = raw_data_endpoint_spec.get('Ports')
if raw_data_ports:
ds.publish = []
for port in raw_data_ports:
ds.publish.append({
'protocol': port['Protocol'],
'mode': port.get('PublishMode', None),
'published_port': int(port['PublishedPort']),
'target_port': int(port['TargetPort'])
})
raw_data_limits = task_template_data.get('Resources', {}).get('Limits')
if raw_data_limits:
raw_cpu_limits = raw_data_limits.get('NanoCPUs')
if raw_cpu_limits:
ds.limit_cpu = float(raw_cpu_limits) / 1000000000
raw_memory_limits = raw_data_limits.get('MemoryBytes')
if raw_memory_limits:
ds.limit_memory = int(raw_memory_limits)
raw_data_reservations = task_template_data.get('Resources', {}).get('Reservations')
if raw_data_reservations:
raw_cpu_reservations = raw_data_reservations.get('NanoCPUs')
if raw_cpu_reservations:
ds.reserve_cpu = float(raw_cpu_reservations) / 1000000000
raw_memory_reservations = raw_data_reservations.get('MemoryBytes')
if raw_memory_reservations:
ds.reserve_memory = int(raw_memory_reservations)
ds.labels = raw_data['Spec'].get('Labels')
ds.log_driver = task_template_data.get('LogDriver', {}).get('Name')
ds.log_driver_options = task_template_data.get('LogDriver', {}).get('Options')
ds.container_labels = task_template_data['ContainerSpec'].get('Labels')
mode = raw_data['Spec']['Mode']
if 'Replicated' in mode.keys():
ds.mode = to_text('replicated', encoding='utf-8')
ds.replicas = mode['Replicated']['Replicas']
elif 'Global' in mode.keys():
ds.mode = 'global'
else:
raise Exception('Unknown service mode: %s' % mode)
raw_data_mounts = task_template_data['ContainerSpec'].get('Mounts')
if raw_data_mounts:
ds.mounts = []
for mount_data in raw_data_mounts:
bind_options = mount_data.get('BindOptions', {})
volume_options = mount_data.get('VolumeOptions', {})
tmpfs_options = mount_data.get('TmpfsOptions', {})
driver_config = volume_options.get('DriverConfig', {})
driver_config = dict(
(key.lower(), value) for key, value in driver_config.items()
) or None
ds.mounts.append({
'source': mount_data.get('Source', ''),
'type': mount_data['Type'],
'target': mount_data['Target'],
'readonly': mount_data.get('ReadOnly'),
'propagation': bind_options.get('Propagation'),
'no_copy': volume_options.get('NoCopy'),
'labels': volume_options.get('Labels'),
'driver_config': driver_config,
'tmpfs_mode': tmpfs_options.get('Mode'),
'tmpfs_size': tmpfs_options.get('SizeBytes'),
})
raw_data_configs = task_template_data['ContainerSpec'].get('Configs')
if raw_data_configs:
ds.configs = []
for config_data in raw_data_configs:
ds.configs.append({
'config_id': config_data['ConfigID'],
'config_name': config_data['ConfigName'],
'filename': config_data['File'].get('Name'),
'uid': config_data['File'].get('UID'),
'gid': config_data['File'].get('GID'),
'mode': config_data['File'].get('Mode')
})
raw_data_secrets = task_template_data['ContainerSpec'].get('Secrets')
if raw_data_secrets:
ds.secrets = []
for secret_data in raw_data_secrets:
ds.secrets.append({
'secret_id': secret_data['SecretID'],
'secret_name': secret_data['SecretName'],
'filename': secret_data['File'].get('Name'),
'uid': secret_data['File'].get('UID'),
'gid': secret_data['File'].get('GID'),
'mode': secret_data['File'].get('Mode')
})
raw_networks_data = task_template_data.get('Networks', raw_data['Spec'].get('Networks'))
if raw_networks_data:
ds.networks = []
for network_data in raw_networks_data:
network = {'id': network_data['Target']}
if 'Aliases' in network_data:
network['aliases'] = network_data['Aliases']
if 'DriverOpts' in network_data:
network['options'] = network_data['DriverOpts']
ds.networks.append(network)
ds.service_version = raw_data['Version']['Index']
ds.service_id = raw_data['ID']
return ds
def update_service(self, name, old_service, new_service):
service_data = new_service.build_docker_service()
result = self.client.update_service(
old_service.service_id,
old_service.service_version,
name=name,
**service_data
)
        # Docker SDK for Python before 4.0.0 does not return warnings here,
        # so on older versions they are silently ignored.
        # (see https://github.com/docker/docker-py/pull/2272)
self.client.report_warnings(result, ['Warning'])
def create_service(self, name, service):
service_data = service.build_docker_service()
result = self.client.create_service(name=name, **service_data)
self.client.report_warnings(result, ['Warning'])
def remove_service(self, name):
self.client.remove_service(name)
def get_image_digest(self, name, resolve=False):
if (
not name
or not resolve
):
return name
repo, tag = parse_repository_tag(name)
if not tag:
tag = 'latest'
name = repo + ':' + tag
distribution_data = self.client.inspect_distribution(name)
digest = distribution_data['Descriptor']['digest']
return '%s@%s' % (name, digest)
def get_networks_names_ids(self):
return dict(
(network['Name'], network['Id']) for network in self.client.networks()
)
def get_missing_secret_ids(self):
"""
Resolve missing secret ids by looking them up by name
"""
secret_names = [
secret['secret_name']
for secret in self.client.module.params.get('secrets') or []
if secret['secret_id'] is None
]
if not secret_names:
return {}
secrets = self.client.secrets(filters={'name': secret_names})
secrets = dict(
(secret['Spec']['Name'], secret['ID'])
for secret in secrets
if secret['Spec']['Name'] in secret_names
)
for secret_name in secret_names:
if secret_name not in secrets:
self.client.fail(
'Could not find a secret named "%s"' % secret_name
)
return secrets
def get_missing_config_ids(self):
"""
Resolve missing config ids by looking them up by name
"""
config_names = [
config['config_name']
for config in self.client.module.params.get('configs') or []
if config['config_id'] is None
]
if not config_names:
return {}
configs = self.client.configs(filters={'name': config_names})
configs = dict(
(config['Spec']['Name'], config['ID'])
for config in configs
if config['Spec']['Name'] in config_names
)
for config_name in config_names:
if config_name not in configs:
self.client.fail(
'Could not find a config named "%s"' % config_name
)
return configs
def run(self):
self.diff_tracker = DifferenceTracker()
module = self.client.module
image = module.params['image']
try:
image_digest = self.get_image_digest(
name=image,
resolve=module.params['resolve_image']
)
except DockerException as e:
self.client.fail(
'Error looking for an image named %s: %s'
% (image, e)
)
try:
current_service = self.get_service(module.params['name'])
except Exception as e:
self.client.fail(
'Error looking for service named %s: %s'
% (module.params['name'], e)
)
try:
secret_ids = self.get_missing_secret_ids()
config_ids = self.get_missing_config_ids()
network_ids = self.get_networks_names_ids()
new_service = DockerService.from_ansible_params(
module.params,
current_service,
image_digest,
secret_ids,
config_ids,
network_ids,
self.client.docker_api_version,
self.client.docker_py_version
)
except Exception as e:
return self.client.fail(
'Error parsing module parameters: %s' % e
)
changed = False
msg = 'noop'
rebuilt = False
differences = DifferenceTracker()
facts = {}
if current_service:
if module.params['state'] == 'absent':
if not module.check_mode:
self.remove_service(module.params['name'])
msg = 'Service removed'
changed = True
else:
changed, differences, need_rebuild, force_update = new_service.compare(
current_service
)
if changed:
self.diff_tracker.merge(differences)
if need_rebuild:
if not module.check_mode:
self.remove_service(module.params['name'])
self.create_service(
module.params['name'],
new_service
)
msg = 'Service rebuilt'
rebuilt = True
else:
if not module.check_mode:
self.update_service(
module.params['name'],
current_service,
new_service
)
msg = 'Service updated'
rebuilt = False
else:
if force_update:
if not module.check_mode:
self.update_service(
module.params['name'],
current_service,
new_service
)
msg = 'Service forcefully updated'
rebuilt = False
changed = True
else:
msg = 'Service unchanged'
facts = new_service.get_facts()
else:
if module.params['state'] == 'absent':
msg = 'Service absent'
else:
if not module.check_mode:
self.create_service(module.params['name'], new_service)
msg = 'Service created'
changed = True
facts = new_service.get_facts()
return msg, changed, rebuilt, differences.get_legacy_docker_diffs(), facts
def run_safe(self):
while True:
try:
return self.run()
except APIError as e:
# Sometimes Version.Index will have changed between an inspect and
# update. If this is encountered we'll retry the update.
if self.retries > 0 and 'update out of sequence' in str(e.explanation):
self.retries -= 1
time.sleep(1)
else:
raise
def _detect_publish_mode_usage(client):
for publish_def in client.module.params['publish'] or []:
if publish_def.get('mode'):
return True
return False
def _detect_healthcheck_start_period(client):
if client.module.params['healthcheck']:
return client.module.params['healthcheck']['start_period'] is not None
return False
def _detect_mount_tmpfs_usage(client):
for mount in client.module.params['mounts'] or []:
if mount.get('type') == 'tmpfs':
return True
if mount.get('tmpfs_size') is not None:
return True
if mount.get('tmpfs_mode') is not None:
return True
return False
def _detect_update_config_failure_action_rollback(client):
rollback_config_failure_action = (
(client.module.params['update_config'] or {}).get('failure_action')
)
update_failure_action = client.module.params['update_failure_action']
failure_action = rollback_config_failure_action or update_failure_action
return failure_action == 'rollback'
def main():
argument_spec = dict(
name=dict(type='str', required=True),
image=dict(type='str'),
state=dict(type='str', default='present', choices=['present', 'absent']),
mounts=dict(type='list', elements='dict', options=dict(
source=dict(type='str'),
target=dict(type='str', required=True),
type=dict(
type='str',
default='bind',
choices=['bind', 'volume', 'tmpfs', 'npipe'],
),
readonly=dict(type='bool'),
labels=dict(type='dict'),
propagation=dict(
type='str',
choices=[
'shared',
'slave',
'private',
'rshared',
'rslave',
'rprivate'
]
),
no_copy=dict(type='bool'),
driver_config=dict(type='dict', options=dict(
name=dict(type='str'),
options=dict(type='dict')
)),
tmpfs_size=dict(type='str'),
tmpfs_mode=dict(type='int')
)),
configs=dict(type='list', elements='dict', options=dict(
config_id=dict(type='str'),
config_name=dict(type='str', required=True),
filename=dict(type='str'),
uid=dict(type='str'),
gid=dict(type='str'),
mode=dict(type='int'),
)),
secrets=dict(type='list', elements='dict', options=dict(
secret_id=dict(type='str'),
secret_name=dict(type='str', required=True),
filename=dict(type='str'),
uid=dict(type='str'),
gid=dict(type='str'),
mode=dict(type='int'),
)),
networks=dict(type='list', elements='raw'),
command=dict(type='raw'),
args=dict(type='list', elements='str'),
env=dict(type='raw'),
env_files=dict(type='list', elements='path'),
force_update=dict(type='bool', default=False),
groups=dict(type='list', elements='str'),
logging=dict(type='dict', options=dict(
driver=dict(type='str'),
options=dict(type='dict'),
)),
log_driver=dict(type='str', removed_in_version='2.12'),
log_driver_options=dict(type='dict', removed_in_version='2.12'),
publish=dict(type='list', elements='dict', options=dict(
published_port=dict(type='int', required=True),
target_port=dict(type='int', required=True),
protocol=dict(type='str', default='tcp', choices=['tcp', 'udp']),
mode=dict(type='str', choices=['ingress', 'host']),
)),
placement=dict(type='dict', options=dict(
constraints=dict(type='list', elements='str'),
preferences=dict(type='list', elements='dict'),
)),
constraints=dict(type='list', elements='str', removed_in_version='2.12'),
tty=dict(type='bool'),
dns=dict(type='list', elements='str'),
dns_search=dict(type='list', elements='str'),
dns_options=dict(type='list', elements='str'),
healthcheck=dict(type='dict', options=dict(
test=dict(type='raw'),
interval=dict(type='str'),
timeout=dict(type='str'),
start_period=dict(type='str'),
retries=dict(type='int'),
)),
hostname=dict(type='str'),
hosts=dict(type='dict'),
labels=dict(type='dict'),
container_labels=dict(type='dict'),
mode=dict(
type='str',
default='replicated',
choices=['replicated', 'global']
),
replicas=dict(type='int', default=-1),
endpoint_mode=dict(type='str', choices=['vip', 'dnsrr']),
stop_grace_period=dict(type='str'),
stop_signal=dict(type='str'),
limits=dict(type='dict', options=dict(
cpus=dict(type='float'),
memory=dict(type='str'),
)),
limit_cpu=dict(type='float', removed_in_version='2.12'),
limit_memory=dict(type='str', removed_in_version='2.12'),
read_only=dict(type='bool'),
reservations=dict(type='dict', options=dict(
cpus=dict(type='float'),
memory=dict(type='str'),
)),
reserve_cpu=dict(type='float', removed_in_version='2.12'),
reserve_memory=dict(type='str', removed_in_version='2.12'),
resolve_image=dict(type='bool', default=False),
restart_config=dict(type='dict', options=dict(
condition=dict(type='str', choices=['none', 'on-failure', 'any']),
delay=dict(type='str'),
max_attempts=dict(type='int'),
window=dict(type='str'),
)),
restart_policy=dict(
type='str',
choices=['none', 'on-failure', 'any'],
removed_in_version='2.12'
),
restart_policy_delay=dict(type='raw', removed_in_version='2.12'),
restart_policy_attempts=dict(type='int', removed_in_version='2.12'),
restart_policy_window=dict(type='raw', removed_in_version='2.12'),
rollback_config=dict(type='dict', options=dict(
parallelism=dict(type='int'),
delay=dict(type='str'),
failure_action=dict(
type='str',
choices=['continue', 'pause']
),
monitor=dict(type='str'),
max_failure_ratio=dict(type='float'),
order=dict(type='str'),
)),
update_config=dict(type='dict', options=dict(
parallelism=dict(type='int'),
delay=dict(type='str'),
failure_action=dict(
type='str',
choices=['continue', 'pause', 'rollback']
),
monitor=dict(type='str'),
max_failure_ratio=dict(type='float'),
order=dict(type='str'),
)),
update_delay=dict(type='raw', removed_in_version='2.12'),
update_parallelism=dict(type='int', removed_in_version='2.12'),
update_failure_action=dict(
type='str',
choices=['continue', 'pause', 'rollback'],
removed_in_version='2.12'
),
update_monitor=dict(type='raw', removed_in_version='2.12'),
update_max_failure_ratio=dict(type='float', removed_in_version='2.12'),
update_order=dict(
type='str',
choices=['stop-first', 'start-first'],
removed_in_version='2.12'
),
user=dict(type='str'),
working_dir=dict(type='str'),
)
option_minimal_versions = dict(
constraints=dict(docker_py_version='2.4.0'),
dns=dict(docker_py_version='2.6.0', docker_api_version='1.25'),
dns_options=dict(docker_py_version='2.6.0', docker_api_version='1.25'),
dns_search=dict(docker_py_version='2.6.0', docker_api_version='1.25'),
endpoint_mode=dict(docker_py_version='3.0.0', docker_api_version='1.25'),
force_update=dict(docker_py_version='2.1.0', docker_api_version='1.25'),
healthcheck=dict(docker_py_version='2.6.0', docker_api_version='1.25'),
hostname=dict(docker_py_version='2.2.0', docker_api_version='1.25'),
hosts=dict(docker_py_version='2.6.0', docker_api_version='1.25'),
groups=dict(docker_py_version='2.6.0', docker_api_version='1.25'),
tty=dict(docker_py_version='2.4.0', docker_api_version='1.25'),
secrets=dict(docker_py_version='2.4.0', docker_api_version='1.25'),
configs=dict(docker_py_version='2.6.0', docker_api_version='1.30'),
update_max_failure_ratio=dict(docker_py_version='2.1.0', docker_api_version='1.25'),
update_monitor=dict(docker_py_version='2.1.0', docker_api_version='1.25'),
update_order=dict(docker_py_version='2.7.0', docker_api_version='1.29'),
stop_signal=dict(docker_py_version='2.6.0', docker_api_version='1.28'),
publish=dict(docker_py_version='3.0.0', docker_api_version='1.25'),
read_only=dict(docker_py_version='2.6.0', docker_api_version='1.28'),
resolve_image=dict(docker_api_version='1.30', docker_py_version='3.2.0'),
rollback_config=dict(docker_py_version='3.5.0', docker_api_version='1.28'),
# specials
publish_mode=dict(
docker_py_version='3.0.0',
docker_api_version='1.25',
detect_usage=_detect_publish_mode_usage,
usage_msg='set publish.mode'
),
healthcheck_start_period=dict(
docker_py_version='2.4.0',
docker_api_version='1.25',
detect_usage=_detect_healthcheck_start_period,
usage_msg='set healthcheck.start_period'
),
update_config_max_failure_ratio=dict(
docker_py_version='2.1.0',
docker_api_version='1.25',
detect_usage=lambda c: (c.module.params['update_config'] or {}).get(
'max_failure_ratio'
) is not None,
usage_msg='set update_config.max_failure_ratio'
),
update_config_failure_action=dict(
docker_py_version='3.5.0',
docker_api_version='1.28',
detect_usage=_detect_update_config_failure_action_rollback,
usage_msg='set update_config.failure_action.rollback'
),
update_config_monitor=dict(
docker_py_version='2.1.0',
docker_api_version='1.25',
detect_usage=lambda c: (c.module.params['update_config'] or {}).get(
'monitor'
) is not None,
usage_msg='set update_config.monitor'
),
update_config_order=dict(
docker_py_version='2.7.0',
docker_api_version='1.29',
detect_usage=lambda c: (c.module.params['update_config'] or {}).get(
'order'
) is not None,
usage_msg='set update_config.order'
),
placement_config_preferences=dict(
docker_py_version='2.4.0',
docker_api_version='1.27',
detect_usage=lambda c: (c.module.params['placement'] or {}).get(
'preferences'
) is not None,
usage_msg='set placement.preferences'
),
placement_config_constraints=dict(
docker_py_version='2.4.0',
detect_usage=lambda c: (c.module.params['placement'] or {}).get(
'constraints'
) is not None,
usage_msg='set placement.constraints'
),
mounts_tmpfs=dict(
docker_py_version='2.6.0',
detect_usage=_detect_mount_tmpfs_usage,
usage_msg='set mounts.tmpfs'
),
rollback_config_order=dict(
docker_api_version='1.29',
detect_usage=lambda c: (c.module.params['rollback_config'] or {}).get(
'order'
) is not None,
usage_msg='set rollback_config.order'
),
)
required_if = [
('state', 'present', ['image'])
]
client = AnsibleDockerClient(
argument_spec=argument_spec,
required_if=required_if,
supports_check_mode=True,
min_docker_version='2.0.2',
min_docker_api_version='1.24',
option_minimal_versions=option_minimal_versions,
)
try:
dsm = DockerServiceManager(client)
msg, changed, rebuilt, changes, facts = dsm.run_safe()
results = dict(
msg=msg,
changed=changed,
rebuilt=rebuilt,
changes=changes,
swarm_service=facts,
)
if client.module._diff:
before, after = dsm.diff_tracker.get_before_after()
results['diff'] = dict(before=before, after=after)
client.module.exit_json(**results)
except DockerException as e:
client.fail('An unexpected docker error occurred: {0}'.format(e), exception=traceback.format_exc())
except RequestException as e:
client.fail('An unexpected requests error occurred when docker-py tried to talk to the docker daemon: {0}'.format(e), exception=traceback.format_exc())
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,146 |
docker_swarm_service healthcheck always changed
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
When using `healthcheck` in `docker_swarm_service`, the service is always reported as having changed, even when nothing was modified.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
docker_swarm_service
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.6
config file = /mnt/f/Users/bozho/.ansible.cfg
configured module search path = ['/mnt/f/Users/bozho/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.5 (default, Oct 27 2019, 15:43:29) [GCC 9.2.1 20191022]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_SSH_CONTROL_PATH_DIR(/mnt/f/Users/bozho/.ansible.cfg) = /tmp/ansible/cp
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Debian Bullseye on WSL:
```
Linux bozho-host 4.4.0-18362-Microsoft #476-Microsoft Fri Nov 01 16:53:00 PST 2019 x86_64 GNU/Linux
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
I've added the `healthcheck` option to an existing `docker_swarm_service` entry that had previously been behaving as expected, starting a service from a custom nginx image in global mode.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
docker_swarm_service:
name: "custom-nginx"
image: "{{ nginx_image_name }}"
labels:
tag: "my-tag"
configs:
- config_id: "{{ nginx_conf_config.config_id }}"
config_name: "{{ docker_nginx_conf_config_name }}"
filename: "/etc/nginx/nginx.conf"
mode: 0644
- config_id: "{{ default_conf_config.config_id }}"
config_name: "{{ docker_default_conf_config_name }}"
filename: "/etc/nginx/sites-enabled/default.conf"
mode: 0644
secrets: "{{ nginx_secrets_list | default([]) }}"
mode: global
update_config:
delay: 1s
failure_action: rollback
order: stop-first
healthcheck:
test: [ "CMD", "curl", "--fail", "-k", "https://myhost.example.com" ]
interval: 1m30s
timeout: 5s
retries: 3
start_period: 20s
logging:
driver: local
options:
"max-size": 10k
"max-file": "3"
publish:
- published_port: 80
target_port: 80
protocol: tcp
mode: host
- published_port: 443
target_port: 443
protocol: tcp
mode: host
networks:
- my_network
run_once: true
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Re-running the playbook should report no changes.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
{
"changed": true,
"changes": [
"healthcheck"
],
"msg": "Service updated",
"rebuilt": false,
"swarm_service": {
"args": null,
"command": null,
"configs": [
...
],
...
}
```
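A plausible root cause, judging from the healthcheck idempotency test in this record's updated test file (an unquoted `1` where the first task uses `"1"`): the requested healthcheck is diffed against the daemon's stored spec without normalizing scalar types. The sketch below is hypothetical, with illustrative names, under that assumption, not the module's actual code:
```python
# Minimal sketch of the suspected comparison issue: YAML parses an unquoted 1
# as an integer, while the Docker daemon reports healthcheck test entries as
# strings, so a naive equality check never settles.
requested_test = ['CMD', 'sleep', 1]    # as parsed from `- 1` in YAML
current_test = ['CMD', 'sleep', '1']    # as reported back by the daemon

print(requested_test == current_test)   # False -> service reported as changed

def normalize_test(test):
    """Coerce a healthcheck test to Docker's canonical list-of-strings form."""
    if isinstance(test, str):
        # A plain string is shorthand for the shell form.
        return ['CMD-SHELL', test]
    return [str(item) for item in test]

print(normalize_test(requested_test) == normalize_test(current_test))  # True
```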
|
https://github.com/ansible/ansible/issues/66146
|
https://github.com/ansible/ansible/pull/66151
|
f7d412144616f538f2b82a40f2698934d374f7c1
|
f31b8e08b24f1c19c156caf5137ca18159f32df0
| 2019-12-31T16:01:14Z |
python
| 2020-01-02T14:23:36Z |
test/integration/targets/docker_swarm_service/tasks/tests/options.yml
|
---
- name: Registering service name
set_fact:
service_name: "{{ name_prefix ~ '-options' }}"
- name: Registering service name
set_fact:
service_names: "{{ service_names + [service_name] }}"
####################################################################
## args ############################################################
####################################################################
- name: args
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
args:
- sleep
- "3600"
register: args_1
- name: args (idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
args:
- sleep
- "3600"
register: args_2
- name: args (change)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
args:
- sleep
- "3400"
register: args_3
- name: args (empty)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
args: []
register: args_4
- name: args (empty idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
args: []
register: args_5
- name: cleanup
docker_swarm_service:
name: "{{ service_name }}"
state: absent
diff: no
- assert:
that:
- args_1 is changed
- args_2 is not changed
- args_3 is changed
- args_4 is changed
- args_5 is not changed
####################################################################
## command #########################################################
####################################################################
- name: command
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
register: command_1
- name: command (idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
register: command_2
- name: command (less parameters)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -c "sleep 10m"'
register: command_3
- name: command (as list)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command:
- "/bin/sh"
- "-c"
- "sleep 10m"
register: command_4
- name: command (empty)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: []
register: command_5
- name: command (empty idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: []
register: command_6
- name: command (string failure)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: yes
register: command_7
ignore_errors: yes
- name: command (list failure)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command:
- "/bin/sh"
- yes
register: command_8
ignore_errors: yes
- name: cleanup
docker_swarm_service:
name: "{{ service_name }}"
state: absent
diff: no
- assert:
that:
- command_1 is changed
- command_2 is not changed
- command_3 is changed
- command_4 is not changed
- command_5 is changed
- command_6 is not changed
- command_7 is failed
- command_8 is failed
####################################################################
## container_labels ################################################
####################################################################
- name: container_labels
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
container_labels:
test_1: "1"
test_2: "2"
register: container_labels_1
- name: container_labels (idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
container_labels:
test_1: "1"
test_2: "2"
register: container_labels_2
- name: container_labels (change)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
container_labels:
test_1: "1"
test_2: "3"
register: container_labels_3
- name: container_labels (empty)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
container_labels: {}
register: container_labels_4
- name: container_labels (empty idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
container_labels: {}
register: container_labels_5
- name: cleanup
docker_swarm_service:
name: "{{ service_name }}"
state: absent
diff: no
- assert:
that:
- container_labels_1 is changed
- container_labels_2 is not changed
- container_labels_3 is changed
- container_labels_4 is changed
- container_labels_5 is not changed
####################################################################
## dns #############################################################
####################################################################
- name: dns
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
dns:
- 1.1.1.1
- 8.8.8.8
register: dns_1
ignore_errors: yes
- name: dns (idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
dns:
- 1.1.1.1
- 8.8.8.8
register: dns_2
ignore_errors: yes
- name: dns_servers (changed order)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
dns:
- 8.8.8.8
- 1.1.1.1
register: dns_3
ignore_errors: yes
- name: dns_servers (changed elements)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
dns:
- 8.8.8.8
- 9.9.9.9
register: dns_4
ignore_errors: yes
- name: dns_servers (empty)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
dns: []
register: dns_5
ignore_errors: yes
- name: dns_servers (empty idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
dns: []
register: dns_6
ignore_errors: yes
- name: cleanup
docker_swarm_service:
name: "{{ service_name }}"
state: absent
diff: no
- assert:
that:
- dns_1 is changed
- dns_2 is not changed
- dns_3 is changed
- dns_4 is changed
- dns_5 is changed
- dns_6 is not changed
when: docker_api_version is version('1.25', '>=') and docker_py_version is version('2.6.0', '>=')
- assert:
that:
- dns_1 is failed
- "'Minimum version required' in dns_1.msg"
when: docker_api_version is version('1.25', '<') or docker_py_version is version('2.6.0', '<')
####################################################################
## dns_options #####################################################
####################################################################
- name: dns_options
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
dns_options:
- "timeout:10"
- rotate
register: dns_options_1
ignore_errors: yes
- name: dns_options (idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
dns_options:
- "timeout:10"
- rotate
register: dns_options_2
ignore_errors: yes
- name: dns_options (change)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
dns_options:
- "timeout:10"
- no-check-names
register: dns_options_3
ignore_errors: yes
- name: dns_options (order idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
dns_options:
- no-check-names
- "timeout:10"
register: dns_options_4
ignore_errors: yes
- name: dns_options (empty)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
dns_options: []
register: dns_options_5
ignore_errors: yes
- name: dns_options (empty idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
dns_options: []
register: dns_options_6
ignore_errors: yes
- name: cleanup
docker_swarm_service:
name: "{{ service_name }}"
state: absent
diff: no
- assert:
that:
- dns_options_1 is changed
- dns_options_2 is not changed
- dns_options_3 is changed
- dns_options_4 is not changed
- dns_options_5 is changed
- dns_options_6 is not changed
when: docker_api_version is version('1.25', '>=') and docker_py_version is version('2.6.0', '>=')
- assert:
that:
- dns_options_1 is failed
- "'Minimum version required' in dns_options_1.msg"
when: docker_api_version is version('1.25', '<') or docker_py_version is version('2.6.0', '<')
####################################################################
## dns_search ######################################################
####################################################################
- name: dns_search
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
dns_search:
- example.com
- example.org
register: dns_search_1
ignore_errors: yes
- name: dns_search (idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
dns_search:
- example.com
- example.org
register: dns_search_2
ignore_errors: yes
- name: dns_search (different order)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
dns_search:
- example.org
- example.com
register: dns_search_3
ignore_errors: yes
- name: dns_search (changed elements)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
dns_search:
- ansible.com
- example.com
register: dns_search_4
ignore_errors: yes
- name: dns_search (empty)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
dns_search: []
register: dns_search_5
ignore_errors: yes
- name: dns_search (empty idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
dns_search: []
register: dns_search_6
ignore_errors: yes
- name: cleanup
docker_swarm_service:
name: "{{ service_name }}"
state: absent
diff: no
- assert:
that:
- dns_search_1 is changed
- dns_search_2 is not changed
- dns_search_3 is changed
- dns_search_4 is changed
- dns_search_5 is changed
- dns_search_6 is not changed
when: docker_api_version is version('1.25', '>=') and docker_py_version is version('2.6.0', '>=')
- assert:
that:
- dns_search_1 is failed
- "'Minimum version required' in dns_search_1.msg"
when: docker_api_version is version('1.25', '<') or docker_py_version is version('2.6.0', '<')
####################################################################
## endpoint_mode ###################################################
####################################################################
- name: endpoint_mode
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
endpoint_mode: "dnsrr"
register: endpoint_mode_1
ignore_errors: yes
- name: endpoint_mode (idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
endpoint_mode: "dnsrr"
register: endpoint_mode_2
ignore_errors: yes
- name: endpoint_mode (changes)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
endpoint_mode: "vip"
register: endpoint_mode_3
ignore_errors: yes
- name: cleanup
docker_swarm_service:
name: "{{ service_name }}"
state: absent
diff: no
- assert:
that:
- endpoint_mode_1 is changed
- endpoint_mode_2 is not changed
- endpoint_mode_3 is changed
when: docker_api_version is version('1.25', '>=') and docker_py_version is version('3.0.0', '>=')
- assert:
that:
- endpoint_mode_1 is failed
- "'Minimum version required' in endpoint_mode_1.msg"
when: docker_api_version is version('1.25', '<') or docker_py_version is version('3.0.0', '<')
####################################################################
## env #############################################################
####################################################################
- name: env
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
env:
- "TEST1=val1"
- "TEST2=val2"
register: env_1
- name: env (idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
env:
TEST1: val1
TEST2: val2
register: env_2
- name: env (changes)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
env:
- "TEST1=val1"
- "TEST2=val3"
register: env_3
- name: env (order idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
env:
- "TEST2=val3"
- "TEST1=val1"
register: env_4
- name: env (empty)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
env: []
register: env_5
- name: env (empty idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
env: []
register: env_6
- name: env (fail unwrapped values)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
env:
TEST1: true
register: env_7
ignore_errors: yes
- name: env (fail invalid formatted string)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
env:
- "TEST1=val3"
- "TEST2"
register: env_8
ignore_errors: yes
- name: cleanup
docker_swarm_service:
name: "{{ service_name }}"
state: absent
diff: no
- assert:
that:
- env_1 is changed
- env_2 is not changed
- env_3 is changed
- env_4 is not changed
- env_5 is changed
- env_6 is not changed
- env_7 is failed
- env_8 is failed
####################################################################
## env_files #######################################################
####################################################################
- name: env_files
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
env_files:
- "{{ role_path }}/files/env-file-1"
register: env_file_1
- name: env_files (idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
env_files:
- "{{ role_path }}/files/env-file-1"
register: env_file_2
- name: env_files (more items)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
env_files:
- "{{ role_path }}/files/env-file-1"
- "{{ role_path }}/files/env-file-2"
register: env_file_3
- name: env_files (order)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
env_files:
- "{{ role_path }}/files/env-file-2"
- "{{ role_path }}/files/env-file-1"
register: env_file_4
- name: env_files (multiple idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
env_files:
- "{{ role_path }}/files/env-file-2"
- "{{ role_path }}/files/env-file-1"
register: env_file_5
- name: env_files (empty)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
env_files: []
register: env_file_6
- name: env_files (empty idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
env_files: []
register: env_file_7
- name: cleanup
docker_swarm_service:
name: "{{ service_name }}"
state: absent
diff: no
- assert:
that:
- env_file_1 is changed
- env_file_2 is not changed
- env_file_3 is changed
- env_file_4 is changed
- env_file_5 is not changed
- env_file_6 is changed
- env_file_7 is not changed
###################################################################
## force_update ###################################################
###################################################################
- name: force_update
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
args:
- sleep
- "3600"
force_update: yes
register: force_update_1
ignore_errors: yes
- name: force_update (idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
args:
- sleep
- "3600"
force_update: yes
register: force_update_2
ignore_errors: yes
- name: cleanup
docker_swarm_service:
name: "{{ service_name }}"
state: absent
diff: no
- assert:
that:
- force_update_1 is changed
- force_update_2 is changed
when: docker_api_version is version('1.25', '>=') and docker_py_version is version('2.1.0', '>=')
- assert:
that:
- force_update_1 is failed
- "'Minimum version required' in force_update_1.msg"
when: docker_api_version is version('1.25', '<') or docker_py_version is version('2.1.0', '<')
####################################################################
## groups ##########################################################
####################################################################
- name: groups
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
groups:
- "1234"
- "5678"
register: groups_1
ignore_errors: yes
- name: groups (idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
groups:
- "1234"
- "5678"
register: groups_2
ignore_errors: yes
- name: groups (order idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
groups:
- "5678"
- "1234"
register: groups_3
ignore_errors: yes
- name: groups (change)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
groups:
- "1234"
register: groups_4
ignore_errors: yes
- name: groups (empty)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
groups: []
register: groups_5
ignore_errors: yes
- name: groups (empty idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
groups: []
register: groups_6
ignore_errors: yes
- name: cleanup
docker_swarm_service:
name: "{{ service_name }}"
state: absent
diff: no
- assert:
that:
- groups_1 is changed
- groups_2 is not changed
- groups_3 is not changed
- groups_4 is changed
- groups_5 is changed
- groups_6 is not changed
when: docker_api_version is version('1.25', '>=') and docker_py_version is version('2.6.0', '>=')
- assert:
that:
- groups_1 is failed
- "'Minimum version required' in groups_1.msg"
when: docker_api_version is version('1.25', '<') or docker_py_version is version('2.6.0', '<')
####################################################################
## healthcheck #####################################################
####################################################################
- name: healthcheck
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
healthcheck:
test:
- CMD
- sleep
- "1"
timeout: 2s
interval: 0h0m2s3ms4us
retries: 2
register: healthcheck_1
ignore_errors: yes
- name: healthcheck (idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
healthcheck:
test:
- CMD
- sleep
- 1
timeout: 2s
interval: 0h0m2s3ms4us
retries: 2
register: healthcheck_2
ignore_errors: yes
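# NOTE: the unquoted 1 in the test list above, versus "1" in the first task,
# mixes an integer into the exec-form command; the idempotency assertion for
# healthcheck_2 only holds if scalar entries are normalized to strings before
# comparison, which is the behaviour issue 66146 reports as broken.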
- name: healthcheck (changed)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
healthcheck:
test:
- CMD
- sleep
- "1"
timeout: 3s
interval: 0h1m2s3ms4us
retries: 3
register: healthcheck_3
ignore_errors: yes
- name: healthcheck (disabled)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
healthcheck:
test:
- NONE
register: healthcheck_4
ignore_errors: yes
- name: healthcheck (disabled, idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
healthcheck:
test:
- NONE
register: healthcheck_5
ignore_errors: yes
- name: healthcheck (string in healthcheck test, changed)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
healthcheck:
test: "sleep 1"
register: healthcheck_6
ignore_errors: yes
- name: healthcheck (string in healthcheck test, idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
healthcheck:
test: "sleep 1"
register: healthcheck_7
ignore_errors: yes
- name: healthcheck (empty)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
healthcheck: {}
register: healthcheck_8
ignore_errors: yes
- name: healthcheck (empty idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
healthcheck: {}
register: healthcheck_9
ignore_errors: yes
- name: cleanup
docker_swarm_service:
name: "{{ service_name }}"
state: absent
diff: no
- assert:
that:
- healthcheck_1 is changed
- healthcheck_2 is not changed
- healthcheck_3 is changed
- healthcheck_4 is changed
- healthcheck_5 is not changed
- healthcheck_6 is changed
- healthcheck_7 is not changed
- healthcheck_8 is changed
- healthcheck_9 is not changed
when: docker_api_version is version('1.25', '>=') and docker_py_version is version('2.6.0', '>=')
- assert:
that:
- healthcheck_1 is failed
- "'Minimum version required' in healthcheck_1.msg"
when: docker_api_version is version('1.25', '<') or docker_py_version is version('2.6.0', '<')
###################################################################
## hostname #######################################################
###################################################################
- name: hostname
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
hostname: me.example.com
register: hostname_1
ignore_errors: yes
- name: hostname (idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
hostname: me.example.com
register: hostname_2
ignore_errors: yes
- name: hostname (change)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
hostname: me.example.org
register: hostname_3
ignore_errors: yes
- name: cleanup
docker_swarm_service:
name: "{{ service_name }}"
state: absent
diff: no
- assert:
that:
- hostname_1 is changed
- hostname_2 is not changed
- hostname_3 is changed
when: docker_api_version is version('1.25', '>=') and docker_py_version is version('2.2.0', '>=')
- assert:
that:
- hostname_1 is failed
- "'Minimum version required' in hostname_1.msg"
when: docker_api_version is version('1.25', '<') or docker_py_version is version('2.2.0', '<')
###################################################################
## hosts ##########################################################
###################################################################
- name: hosts
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
hosts:
example.com: 1.2.3.4
example.org: 4.3.2.1
register: hosts_1
ignore_errors: yes
- name: hosts (idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
hosts:
example.com: 1.2.3.4
example.org: 4.3.2.1
register: hosts_2
ignore_errors: yes
- name: hosts (change)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
hosts:
example.com: 1.2.3.4
register: hosts_3
ignore_errors: yes
- name: cleanup
docker_swarm_service:
name: "{{ service_name }}"
state: absent
diff: no
- assert:
that:
- hosts_1 is changed
- hosts_2 is not changed
- hosts_3 is changed
when: docker_api_version is version('1.25', '>=') and docker_py_version is version('2.6.0', '>=')
- assert:
that:
- hosts_1 is failed
- "'Minimum version required' in hosts_1.msg"
when: docker_api_version is version('1.25', '<') or docker_py_version is version('2.6.0', '<')
###################################################################
## image ##########################################################
###################################################################
- name: image
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
register: image_1
- name: image (idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
register: image_2
- name: image (change)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.7
register: image_3
- name: cleanup
docker_swarm_service:
name: "{{ service_name }}"
state: absent
diff: no
- assert:
that:
- image_1 is changed
- image_2 is not changed
- image_3 is changed
####################################################################
## labels ##########################################################
####################################################################
- name: labels
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
labels:
test_1: "1"
test_2: "2"
register: labels_1
- name: labels (idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
labels:
test_1: "1"
test_2: "2"
register: labels_2
- name: labels (changes)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
labels:
test_1: "1"
test_2: "2"
test_3: "3"
register: labels_3
- name: labels (empty)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
labels: {}
register: labels_4
- name: labels (empty idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
labels: {}
register: labels_5
- name: cleanup
docker_swarm_service:
name: "{{ service_name }}"
state: absent
diff: no
- assert:
that:
- labels_1 is changed
- labels_2 is not changed
- labels_3 is changed
- labels_4 is changed
- labels_5 is not changed
###################################################################
## mode ###########################################################
###################################################################
- name: mode
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
mode: "replicated"
replicas: 1
register: mode_1
- name: mode (idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
mode: "replicated"
replicas: 1
register: mode_2
- name: mode (change)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
mode: "global"
replicas: 1
register: mode_3
- name: cleanup
docker_swarm_service:
name: "{{ service_name }}"
state: absent
diff: no
- assert:
that:
- mode_1 is changed
- mode_2 is not changed
- mode_3 is changed
####################################################################
## stop_grace_period ###############################################
####################################################################
- name: stop_grace_period
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
stop_grace_period: 60s
register: stop_grace_period_1
- name: stop_grace_period (idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
stop_grace_period: 60s
register: stop_grace_period_2
- name: stop_grace_period (change)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
stop_grace_period: 1m30s
register: stop_grace_period_3
- name: cleanup
docker_swarm_service:
name: "{{ service_name }}"
state: absent
diff: no
- assert:
that:
- stop_grace_period_1 is changed
- stop_grace_period_2 is not changed
- stop_grace_period_3 is changed
####################################################################
## stop_signal #####################################################
####################################################################
- name: stop_signal
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
stop_signal: "30"
register: stop_signal_1
ignore_errors: yes
- name: stop_signal (idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
stop_signal: "30"
register: stop_signal_2
ignore_errors: yes
- name: stop_signal (change)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
stop_signal: "9"
register: stop_signal_3
ignore_errors: yes
- name: cleanup
docker_swarm_service:
name: "{{ service_name }}"
state: absent
diff: no
- assert:
that:
- stop_signal_1 is changed
- stop_signal_2 is not changed
- stop_signal_3 is changed
when: docker_api_version is version('1.28', '>=') and docker_py_version is version('2.6.0', '>=')
- assert:
that:
- stop_signal_1 is failed
- "'Minimum version required' in stop_signal_1.msg"
when: docker_api_version is version('1.28', '<') or docker_py_version is version('2.6.0', '<')
####################################################################
## publish #########################################################
####################################################################
- name: publish
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
publish:
- protocol: tcp
published_port: 60001
target_port: 60001
- protocol: udp
published_port: 60002
target_port: 60002
register: publish_1
ignore_errors: yes
- name: publish (idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
publish:
- protocol: udp
published_port: 60002
target_port: 60002
- published_port: 60001
target_port: 60001
register: publish_2
ignore_errors: yes
- name: publish (change)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
publish:
- protocol: tcp
published_port: 60002
target_port: 60003
- protocol: udp
published_port: 60001
target_port: 60001
register: publish_3
ignore_errors: yes
- name: publish (mode)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
publish:
- protocol: tcp
published_port: 60002
target_port: 60003
mode: host
- protocol: udp
published_port: 60001
target_port: 60001
mode: host
register: publish_4
ignore_errors: yes
- name: publish (mode idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
publish:
- protocol: udp
published_port: 60001
target_port: 60001
mode: host
- protocol: tcp
published_port: 60002
target_port: 60003
mode: host
register: publish_5
ignore_errors: yes
- name: publish (empty)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
publish: []
register: publish_6
ignore_errors: yes
- name: publish (empty idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
publish: []
register: publish_7
ignore_errors: yes
- name: publish (publishes the same port with both protocols)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
publish:
- protocol: udp
published_port: 60001
target_port: 60001
mode: host
- protocol: tcp
published_port: 60001
target_port: 60001
mode: host
register: publish_8
ignore_errors: yes
- name: gather service info
docker_swarm_service_info:
name: "{{ service_name }}"
register: publish_8_info
- name: cleanup
docker_swarm_service:
name: "{{ service_name }}"
state: absent
diff: no
- assert:
that:
- publish_1 is changed
- publish_2 is not changed
- publish_3 is changed
- publish_4 is changed
- publish_5 is not changed
- publish_6 is changed
- publish_7 is not changed
- publish_8 is changed
- (publish_8_info.service.Endpoint.Ports | length) == 2
when: docker_api_version is version('1.25', '>=') and docker_py_version is version('3.0.0', '>=')
- assert:
that:
- publish_1 is failed
- "'Minimum version required' in publish_1.msg"
when: docker_api_version is version('1.25', '<') or docker_py_version is version('3.0.0', '<')
###################################################################
## read_only ######################################################
###################################################################
- name: read_only
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
read_only: true
register: read_only_1
ignore_errors: yes
- name: read_only (idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
read_only: true
register: read_only_2
ignore_errors: yes
- name: read_only (change)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
read_only: false
register: read_only_3
ignore_errors: yes
- name: cleanup
docker_swarm_service:
name: "{{ service_name }}"
state: absent
diff: no
- assert:
that:
- read_only_1 is changed
- read_only_2 is not changed
- read_only_3 is changed
when: docker_api_version is version('1.28', '>=') and docker_py_version is version('2.6.0', '>=')
- assert:
that:
- read_only_1 is failed
- "'Minimum version required' in read_only_1.msg"
when: docker_api_version is version('1.28', '<') or docker_py_version is version('2.6.0', '<')
###################################################################
## replicas #######################################################
###################################################################
- name: replicas
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
replicas: 2
register: replicas_1
- name: replicas (idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
replicas: 2
register: replicas_2
- name: replicas (change)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
replicas: 3
register: replicas_3
- name: cleanup
docker_swarm_service:
name: "{{ service_name }}"
state: absent
diff: no
- assert:
that:
- replicas_1 is changed
- replicas_2 is not changed
- replicas_3 is changed
###################################################################
# resolve_image ###################################################
###################################################################
- name: resolve_image (false)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
command: '/bin/sh -v -c "sleep 10m"'
resolve_image: false
register: resolve_image_1
- name: resolve_image (false idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
command: '/bin/sh -v -c "sleep 10m"'
resolve_image: false
register: resolve_image_2
- name: resolve_image (change)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
command: '/bin/sh -v -c "sleep 10m"'
resolve_image: true
register: resolve_image_3
ignore_errors: yes
- name: cleanup
docker_swarm_service:
name: "{{ service_name }}"
state: absent
diff: no
- assert:
that:
- resolve_image_1 is changed
- resolve_image_2 is not changed
- resolve_image_3 is changed
when: docker_api_version is version('1.30', '>=') and docker_py_version is version('3.2.0', '>=')
- assert:
that:
- resolve_image_1 is changed
- resolve_image_2 is not changed
- resolve_image_3 is failed
- "('version is ' ~ docker_py_version ~ ' ') in resolve_image_3.msg"
- "'Minimum version required is 3.2.0 ' in resolve_image_3.msg"
when: docker_api_version is version('1.30', '<') or docker_py_version is version('3.2.0', '<')
###################################################################
# tty #############################################################
###################################################################
- name: tty
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
tty: yes
register: tty_1
ignore_errors: yes
- name: tty (idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
tty: yes
register: tty_2
ignore_errors: yes
- name: tty (change)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
tty: no
register: tty_3
ignore_errors: yes
- name: cleanup
docker_swarm_service:
name: "{{ service_name }}"
state: absent
diff: no
- assert:
that:
- tty_1 is changed
- tty_2 is not changed
- tty_3 is changed
when: docker_api_version is version('1.25', '>=') and docker_py_version is version('2.4.0', '>=')
- assert:
that:
- tty_1 is failed
- "'Minimum version required' in tty_1.msg"
when: docker_api_version is version('1.25', '<') or docker_py_version is version('2.4.0', '<')
###################################################################
## user ###########################################################
###################################################################
- name: user
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
user: "operator"
register: user_1
- name: user (idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
user: "operator"
register: user_2
- name: user (change)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
user: "root"
register: user_3
- name: cleanup
docker_swarm_service:
name: "{{ service_name }}"
state: absent
diff: no
- assert:
that:
- user_1 is changed
- user_2 is not changed
- user_3 is changed
####################################################################
## working_dir #####################################################
####################################################################
- name: working_dir
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
working_dir: /tmp
register: working_dir_1
- name: working_dir (idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
working_dir: /tmp
register: working_dir_2
- name: working_dir (change)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
working_dir: /
register: working_dir_3
- name: cleanup
docker_swarm_service:
name: "{{ service_name }}"
state: absent
diff: no
- assert:
that:
- working_dir_1 is changed
- working_dir_2 is not changed
- working_dir_3 is changed
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,110 |
vyos_config: set parameter hardcoded
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
It looks like the `vyos_config` module hardcodes the `set` parameter.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vyos_config
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.2
config file = None
configured module search path = ['/home/itev.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/itev/.local/lib/python3.7/site-packages/ansible
executable location = /home/itev/.local/bin/ansible
python version = 3.7.3 (default, Apr 3 2019, 05:39:12) [GCC 8.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
none
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Target VyOS instance runs VyOS 1.2 (codename "Crux")
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
task file:
```yaml
---
- name: Ensuring interface configuration is up-to-date
vyos_config:
save: yes
src: "interface.j2"
when: vyos_interfaces is defined
```
interface.j2:
```jinja2
{% for interface in vyos_interfaces %}
set interfaces ethernet {{ interface.name }} address {{ interface.address }}
set interfaces ethernet {{ interface.name }} description '{{ interface.description | default('Interface configured by Ansible.') }}'
set interfaces ethernet {{ interface.name }} speed {{ interface.speed }}
set interfaces ethernet {{ interface.name }} duplex {{ interface.duplex }}
{% if interface.disabled is defined and interface.disabled == "yes" %}
set interface ethernet {{ interface.name }} disable
{% endif %}
{% endfor %}
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
Ansible executes the commands in the jinja2 template, without adding a parameter.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Ansible adds an extra "set" parameter:
<!--- Paste verbatim command output between quotes -->
```paste below
fatal: [vyos01]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"backup": false,
"backup_options": null,
"comment": "configured by vyos_config",
"config": null,
"host": null,
"lines": null,
"match": "line",
"password": null,
"port": null,
"provider": null,
"save": true,
"src": " set interfaces ethernet eth0 address dhcp\n set interfaces ethernet eth0 description 'WAN Interface'\n set interfaces ethernet eth0 speed 1000\n set interfaces ethernet eth0 duplex full\n set interfaces ethernet eth1 address 172.16.1.1/24\n set interfaces ethernet eth1 description 'LAN Interface'\n set interfaces ethernet eth1 speed 1000\n set interfaces ethernet eth1 duplex full\n \n",
"ssh_keyfile": null,
"timeout": null,
"username": null
}
},
"msg": "set set interfaces ethernet eth0 address dhcp\r\n\r\n Configuration path: [set] is not valid\r\n Set failed\r\n\r\n[edit]\r\r\nvyos@vyos# "
}
```
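Note two details in the output above: every line in the rendered `src` begins with a space, and the failing command reads `set set interfaces ...`. A plausible reading is that unstripped leading whitespace makes a line fail `set`/`delete` classification, so a second `set` is prepended before the command reaches the device. A small illustration under that assumption (hypothetical, not the module's exact code path):
```python
# Hypothetical illustration: leading whitespace defeats prefix detection.
rendered = " set interfaces ethernet eth0 address dhcp"

print(rendered.startswith('set'))   # False, because of the leading space
print('set ' + rendered)            # -> "set  set interfaces ..." (rejected)

# Stripping candidate lines before classification avoids the double prefix:
cleaned = rendered.strip()
print(cleaned.startswith('set'))    # True
```
The module source later in this record shows `format_commands()` filtering out empty lines without stripping the ones it keeps, which is consistent with this reading.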
|
https://github.com/ansible/ansible/issues/66110
|
https://github.com/ansible/ansible/pull/66122
|
5493ea590cc7cd6b090732bde1df5da1c99aa59e
|
2f996fc6e222eeaf6c3391ad437dadf8ad642b36
| 2019-12-29T16:07:32Z |
python
| 2020-01-03T12:05:37Z |
lib/ansible/modules/network/vyos/vyos_config.py
|
#!/usr/bin/python
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'network'}
DOCUMENTATION = """
---
module: vyos_config
version_added: "2.2"
author: "Nathaniel Case (@Qalthos)"
short_description: Manage VyOS configuration on remote device
description:
- This module provides configuration file management of VyOS
devices. It provides arguments for managing both the
configuration file and state of the active configuration. All
configuration statements are based on `set` and `delete` commands
in the device configuration.
extends_documentation_fragment: vyos
notes:
- Tested against VyOS 1.1.8 (helium).
- This module works with connection C(network_cli). See L(the VyOS OS Platform Options,../network/user_guide/platform_vyos.html).
options:
lines:
description:
- The ordered set of configuration lines to be managed and
compared with the existing configuration on the remote
device.
src:
description:
- The C(src) argument specifies the path to the source config
file to load. The source config file can either be in
bracket format or set format. The source file can include
Jinja2 template variables.
match:
description:
- The C(match) argument controls the method used to match
against the current active configuration. By default, the
desired config is matched against the active config and the
deltas are loaded. If the C(match) argument is set to C(none)
the active configuration is ignored and the configuration is
always loaded.
default: line
choices: ['line', 'none']
backup:
description:
- The C(backup) argument will backup the current devices active
configuration to the Ansible control host prior to making any
changes. If the C(backup_options) value is not given, the
backup file will be located in the backup folder in the playbook
root directory or role root directory, if playbook is part of an
ansible role. If the directory does not exist, it is created.
type: bool
default: 'no'
comment:
description:
- Allows a commit description to be specified to be included
when the configuration is committed. If the configuration is
not changed or committed, this argument is ignored.
default: 'configured by vyos_config'
config:
description:
- The C(config) argument specifies the base configuration to use
to compare against the desired configuration. If this value
is not specified, the module will automatically retrieve the
current active configuration from the remote device.
save:
description:
- The C(save) argument controls whether or not changes made
to the active configuration are saved to disk. This is
independent of committing the config. When set to True, the
active configuration is saved.
type: bool
default: 'no'
backup_options:
description:
- This is a dict object containing configurable options related to backup file path.
The value of this option is read only when C(backup) is set to I(yes), if C(backup) is set
to I(no) this option will be silently ignored.
suboptions:
filename:
description:
- The filename to be used to store the backup configuration. If the filename
is not given it will be generated based on the hostname, current time and date
in format defined by <hostname>_config.<current-date>@<current-time>
dir_path:
description:
- This option provides the path ending with directory name in which the backup
configuration file will be stored. If the directory does not exist it will be first
created and the filename is either the value of C(filename) or default filename
as described in C(filename) options description. If the path value is not given
in that case a I(backup) directory will be created in the current working directory
and backup configuration will be copied in C(filename) within I(backup) directory.
type: path
type: dict
version_added: "2.8"
"""
EXAMPLES = """
- name: configure the remote device
vyos_config:
lines:
- set system host-name {{ inventory_hostname }}
- set service lldp
- delete service dhcp-server
- name: backup and load from file
vyos_config:
src: vyos.cfg
backup: yes
- name: render a Jinja2 template onto the VyOS router
vyos_config:
src: vyos_template.j2
- name: for idempotency, use full-form commands
vyos_config:
lines:
# - set int eth eth2 description 'OUTSIDE'
- set interface ethernet eth2 description 'OUTSIDE'
- name: configurable backup path
vyos_config:
backup: yes
backup_options:
filename: backup.cfg
dir_path: /home/user
"""
RETURN = """
commands:
description: The list of configuration commands sent to the device
returned: always
type: list
sample: ['...', '...']
filtered:
description: The list of configuration commands removed to avoid a load failure
returned: always
type: list
sample: ['...', '...']
backup_path:
description: The full path to the backup file
returned: when backup is yes
type: str
sample: /playbooks/ansible/backup/vyos_config.2016-07-16@22:28:34
filename:
description: The name of the backup file
returned: when backup is yes and filename is not specified in backup options
type: str
sample: vyos_config.2016-07-16@22:28:34
shortname:
description: The full path to the backup file excluding the timestamp
returned: when backup is yes and filename is not specified in backup options
type: str
sample: /playbooks/ansible/backup/vyos_config
date:
description: The date extracted from the backup file name
returned: when backup is yes
type: str
sample: "2016-07-16"
time:
description: The time extracted from the backup file name
returned: when backup is yes
type: str
sample: "22:28:34"
"""
import re
from ansible.module_utils._text import to_text
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.connection import ConnectionError
from ansible.module_utils.network.vyos.vyos import load_config, get_config, run_commands
from ansible.module_utils.network.vyos.vyos import vyos_argument_spec, get_connection
DEFAULT_COMMENT = 'configured by vyos_config'
CONFIG_FILTERS = [
re.compile(r'set system login user \S+ authentication encrypted-password')
]
def get_candidate(module):
contents = module.params['src'] or module.params['lines']
if module.params['src']:
contents = format_commands(contents.splitlines())
contents = '\n'.join(contents)
return contents
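# NOTE: format_commands() below drops empty lines but does not strip leading
# whitespace from the lines it keeps; the indented `src` lines shown in issue
# 66110 suggest such lines can be misclassified downstream and gain a second
# 'set' prefix.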
def format_commands(commands):
return [line for line in commands if len(line.strip()) > 0]
def diff_config(commands, config):
    """Return the candidate commands that are not already reflected in config."""
    config = [str(c).replace("'", '') for c in config.splitlines()]
    updates = list()
    visited = set()
for line in commands:
item = str(line).replace("'", '')
if not item.startswith('set') and not item.startswith('delete'):
raise ValueError('line must start with either `set` or `delete`')
elif item.startswith('set') and item not in config:
updates.append(line)
elif item.startswith('delete'):
if not config:
updates.append(line)
else:
item = re.sub(r'delete', 'set', item)
for entry in config:
if entry.startswith(item) and line not in visited:
updates.append(line)
visited.add(line)
return list(updates)
def sanitize_config(config, result):
result['filtered'] = list()
index_to_filter = list()
for regex in CONFIG_FILTERS:
for index, line in enumerate(list(config)):
if regex.search(line):
result['filtered'].append(line)
index_to_filter.append(index)
# Delete all filtered configs
for filter_index in sorted(index_to_filter, reverse=True):
del config[filter_index]
def run(module, result):
# get the current active config from the node or passed in via
# the config param
config = module.params['config'] or get_config(module)
# create the candidate config object from the arguments
candidate = get_candidate(module)
# create loadable config that includes only the configuration updates
connection = get_connection(module)
try:
response = connection.get_diff(candidate=candidate, running=config, diff_match=module.params['match'])
except ConnectionError as exc:
module.fail_json(msg=to_text(exc, errors='surrogate_then_replace'))
commands = response.get('config_diff')
sanitize_config(commands, result)
result['commands'] = commands
commit = not module.check_mode
comment = module.params['comment']
diff = None
if commands:
diff = load_config(module, commands, commit=commit, comment=comment)
if result.get('filtered'):
result['warnings'].append('Some configuration commands were '
'removed, please see the filtered key')
result['changed'] = True
if module._diff:
result['diff'] = {'prepared': diff}
def main():
backup_spec = dict(
filename=dict(),
dir_path=dict(type='path')
)
argument_spec = dict(
src=dict(type='path'),
lines=dict(type='list'),
match=dict(default='line', choices=['line', 'none']),
comment=dict(default=DEFAULT_COMMENT),
config=dict(),
backup=dict(type='bool', default=False),
backup_options=dict(type='dict', options=backup_spec),
save=dict(type='bool', default=False),
)
argument_spec.update(vyos_argument_spec)
mutually_exclusive = [('lines', 'src')]
module = AnsibleModule(
argument_spec=argument_spec,
mutually_exclusive=mutually_exclusive,
supports_check_mode=True
)
warnings = list()
result = dict(changed=False, warnings=warnings)
if module.params['backup']:
result['__backup__'] = get_config(module=module)
if any((module.params['src'], module.params['lines'])):
run(module, result)
if module.params['save']:
diff = run_commands(module, commands=['configure', 'compare saved'])[1]
if diff != '[edit]':
run_commands(module, commands=['save'])
result['changed'] = True
run_commands(module, commands=['exit'])
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,110 |
vyos_config: set parameter hardcoded
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
It looks like the `vyos_config` module hardcodes the `set` parameter.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vyos_config
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.2
config file = None
configured module search path = ['/home/itev.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/itev/.local/lib/python3.7/site-packages/ansible
executable location = /home/itev/.local/bin/ansible
python version = 3.7.3 (default, Apr 3 2019, 05:39:12) [GCC 8.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
none
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Target VyOS instance runs VyOS 1.2 (codename "Crux")
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
task file:
```yaml
---
- name: Ensuring interface configuration is up-to-date
vyos_config:
save: yes
src: "interface.j2"
when: vyos_interfaces is defined
```
interface.j2:
```jinja2
{% for interface in vyos_interfaces %}
set interfaces ethernet {{ interface.name }} address {{ interface.address }}
set interfaces ethernet {{ interface.name }} description '{{ interface.description | default('Interface configured by Ansible.') }}'
set interfaces ethernet {{ interface.name }} speed {{ interface.speed }}
set interfaces ethernet {{ interface.name }} duplex {{ interface.duplex }}
{% if interface.disabled is defined and interface.disabled == "yes" %}
set interface ethernet {{ interface.name }} disable
{% endif %}
{% endfor %}
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
Ansible executes the commands in the jinja2 template without prepending an extra parameter.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Ansible adds an extra "set" parameter:
<!--- Paste verbatim command output between quotes -->
```paste belowfatal: [vyos01]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"backup": false,
"backup_options": null,
"comment": "configured by vyos_config",
"config": null,
"host": null,
"lines": null,
"match": "line",
"password": null,
"port": null,
"provider": null,
"save": true,
"src": " set interfaces ethernet eth0 address dhcp\n set interfaces ethernet eth0 description 'WAN Interface'\n set interfaces ethernet eth0 speed 1000\n set interfaces ethernet eth0 duplex full\n set interfaces ethernet eth1 address 172.16.1.1/24\n set interfaces ethernet eth1 description 'LAN Interface'\n set interfaces ethernet eth1 speed 1000\n set interfaces ethernet eth1 duplex full\n \n",
"ssh_keyfile": null,
"timeout": null,
"username": null
}
},
"msg": "set set interfaces ethernet eth0 address dhcp\r\n\r\n Configuration path: [set] is not valid\r\n Set failed\r\n\r\n[edit]\r\r\nvyos@vyos# "
}
```
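The failure comes from the templated `src` lines that begin with whitespace, which end up re-prefixed with `set`. A minimal sketch of a fix (under the assumption that stripping the candidate lines is sufficient; not necessarily the exact change merged in the linked PR) is:

```python
def format_commands(commands):
    # strip leading/trailing whitespace so indented template output is
    # treated as a plain 'set'/'delete' command instead of being re-prefixed
    return [line.strip() for line in commands if len(line.strip()) > 0]
```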
|
https://github.com/ansible/ansible/issues/66110
|
https://github.com/ansible/ansible/pull/66122
|
5493ea590cc7cd6b090732bde1df5da1c99aa59e
|
2f996fc6e222eeaf6c3391ad437dadf8ad642b36
| 2019-12-29T16:07:32Z |
python
| 2020-01-03T12:05:37Z |
test/integration/targets/vyos_config/tests/cli/config.cfg
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,110 |
vyos_config: set parameter hardcoded
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
It looks like the `vyos_config` module hardcodes the `set` parameter.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vyos_config
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.2
config file = None
configured module search path = ['/home/itev.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/itev/.local/lib/python3.7/site-packages/ansible
executable location = /home/itev/.local/bin/ansible
python version = 3.7.3 (default, Apr 3 2019, 05:39:12) [GCC 8.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
none
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Target VyOS instance runs VyOS 1.2 (codename "Crux")
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
task file:
```yaml
---
- name: Ensuring interface configuration is up-to-date
vyos_config:
save: yes
src: "interface.j2"
when: vyos_interfaces is defined
```
interface.j2:
```jinja2
{% for interface in vyos_interfaces %}
set interfaces ethernet {{ interface.name }} address {{ interface.address }}
set interfaces ethernet {{ interface.name }} description '{{ interface.description | default('Interface configured by Ansible.') }}'
set interfaces ethernet {{ interface.name }} speed {{ interface.speed }}
set interfaces ethernet {{ interface.name }} duplex {{ interface.duplex }}
{% if interface.disabled is defined and interface.disabled == "yes" %}
set interface ethernet {{ interface.name }} disable
{% endif %}
{% endfor %}
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
Ansible executes the commands in the jinja2 template without prepending an extra parameter.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Ansible adds an extra "set" parameter:
<!--- Paste verbatim command output between quotes -->
```paste belowfatal: [vyos01]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"backup": false,
"backup_options": null,
"comment": "configured by vyos_config",
"config": null,
"host": null,
"lines": null,
"match": "line",
"password": null,
"port": null,
"provider": null,
"save": true,
"src": " set interfaces ethernet eth0 address dhcp\n set interfaces ethernet eth0 description 'WAN Interface'\n set interfaces ethernet eth0 speed 1000\n set interfaces ethernet eth0 duplex full\n set interfaces ethernet eth1 address 172.16.1.1/24\n set interfaces ethernet eth1 description 'LAN Interface'\n set interfaces ethernet eth1 speed 1000\n set interfaces ethernet eth1 duplex full\n \n",
"ssh_keyfile": null,
"timeout": null,
"username": null
}
},
"msg": "set set interfaces ethernet eth0 address dhcp\r\n\r\n Configuration path: [set] is not valid\r\n Set failed\r\n\r\n[edit]\r\r\nvyos@vyos# "
}
```
|
https://github.com/ansible/ansible/issues/66110
|
https://github.com/ansible/ansible/pull/66122
|
5493ea590cc7cd6b090732bde1df5da1c99aa59e
|
2f996fc6e222eeaf6c3391ad437dadf8ad642b36
| 2019-12-29T16:07:32Z |
python
| 2020-01-03T12:05:37Z |
test/integration/targets/vyos_config/tests/cli/simple.yaml
|
---
- debug: msg="START cli/simple.yaml on connection={{ ansible_connection }}"
- name: setup
vyos_config:
lines: set system host-name {{ inventory_hostname_short }}
match: none
- name: configure simple config command
vyos_config:
lines: set system host-name foo
register: result
- assert:
that:
- "result.changed == true"
- "'set system host-name foo' in result.commands"
- name: check simple config command idempotent
vyos_config:
lines: set system host-name foo
register: result
- assert:
that:
- "result.changed == false"
- name: teardown
vyos_config:
lines: set system host-name {{ inventory_hostname_short }}
match: none
- debug: msg="END cli/simple.yaml on connection={{ ansible_connection }}"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,110 |
vyos_config: set parameter hardcoded
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
It looks like the `vyos_config` module hardcodes the `set` parameter.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vyos_config
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.2
config file = None
configured module search path = ['/home/itev.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/itev/.local/lib/python3.7/site-packages/ansible
executable location = /home/itev/.local/bin/ansible
python version = 3.7.3 (default, Apr 3 2019, 05:39:12) [GCC 8.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
none
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Target VyOS instance runs VyOS 1.2 (codename "Crux")
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
task file:
```yaml
---
- name: Ensuring interface configuration is up-to-date
vyos_config:
save: yes
src: "interface.j2"
when: vyos_interfaces is defined
```
interface.j2:
```jinja2
{% for interface in vyos_interfaces %}
set interfaces ethernet {{ interface.name }} address {{ interface.address }}
set interfaces ethernet {{ interface.name }} description '{{ interface.description | default('Interface configured by Ansible.') }}'
set interfaces ethernet {{ interface.name }} speed {{ interface.speed }}
set interfaces ethernet {{ interface.name }} duplex {{ interface.duplex }}
{% if interface.disabled is defined and interface.disabled == "yes" %}
set interface ethernet {{ interface.name }} disable
{% endif %}
{% endfor %}
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
Ansible executes the commands in the jinja2 template without prepending an extra parameter.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Ansible adds an extra "set" parameter:
<!--- Paste verbatim command output between quotes -->
```paste belowfatal: [vyos01]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"backup": false,
"backup_options": null,
"comment": "configured by vyos_config",
"config": null,
"host": null,
"lines": null,
"match": "line",
"password": null,
"port": null,
"provider": null,
"save": true,
"src": " set interfaces ethernet eth0 address dhcp\n set interfaces ethernet eth0 description 'WAN Interface'\n set interfaces ethernet eth0 speed 1000\n set interfaces ethernet eth0 duplex full\n set interfaces ethernet eth1 address 172.16.1.1/24\n set interfaces ethernet eth1 description 'LAN Interface'\n set interfaces ethernet eth1 speed 1000\n set interfaces ethernet eth1 duplex full\n \n",
"ssh_keyfile": null,
"timeout": null,
"username": null
}
},
"msg": "set set interfaces ethernet eth0 address dhcp\r\n\r\n Configuration path: [set] is not valid\r\n Set failed\r\n\r\n[edit]\r\r\nvyos@vyos# "
}
```
|
https://github.com/ansible/ansible/issues/66110
|
https://github.com/ansible/ansible/pull/66122
|
5493ea590cc7cd6b090732bde1df5da1c99aa59e
|
2f996fc6e222eeaf6c3391ad437dadf8ad642b36
| 2019-12-29T16:07:32Z |
python
| 2020-01-03T12:05:37Z |
test/units/modules/network/vyos/fixtures/vyos_config_src.cfg
|
set system host-name foo
delete interfaces ethernet eth0 address
set interfaces ethernet eth1 address '6.7.8.9/24'
set interfaces ethernet eth1 description 'test string'
set interfaces ethernet eth1 disable
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,189 |
Error using hostname module on OSMC distribution
|
##### SUMMARY
Applying the HOSTNAME module against a box running OSMC (https://osmc.tv/), the following error appears:
```
fatal: [raspitest.thucke.de]: FAILED! => {"changed": false, "msg": "hostname module cannot be used on platform Linux (Osmc)"}
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
Module: hostname
##### ANSIBLE VERSION
```
ansible 2.9.2
config file = /home/ansible/ansible/ansible.cfg
configured module search path = ['/home/ansible/ansible/ansible_modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.5 (default, Oct 17 2019, 12:09:47) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
```
DEFAULT_HOST_LIST(/home/ansible/ansible/ansible.cfg) = ['/home/ansible/ansible/inventory.yml']
DEFAULT_MODULE_PATH(/home/ansible/ansible/ansible.cfg) = ['/home/ansible/ansible/ansible_modules']
DEFAULT_ROLES_PATH(/home/ansible/ansible/ansible.cfg) = ['/home/ansible/ansible/roles', '/home/ansible/ansible/roles']
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Target OS: OSMC on Raspberry Pi 2B+ (http://download.osmc.tv/installers/diskimages/OSMC_TGT_rbp2_20191118.img.gz)
##### STEPS TO REPRODUCE
Just apply a playbook with the module against such a machine.
```
- name: HOSTNAME | Setting real hostname
hostname:
name: "{{ inetHostname.stdout }}"
```
##### EXPECTED RESULTS
Module ends without errors
##### ACTUAL RESULTS
```
fatal: [raspitest.thucke.de]: FAILED! => {"changed": false, "msg": "hostname module cannot be used on platform Linux (Osmc)"}
```
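The hostname module only maps known platform/distribution pairs to a strategy class, and OSMC is not among them. Since OSMC is systemd-based, a minimal sketch of a fix (following the module's existing subclass convention; not necessarily the exact merged change) is:

```python
class OsmcHostname(Hostname):
    platform = 'Linux'
    distribution = 'Osmc'
    strategy_class = SystemdStrategy
```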
|
https://github.com/ansible/ansible/issues/66189
|
https://github.com/ansible/ansible/pull/66190
|
193879d462013433c5a60e7b16ef724723457f3c
|
d56d0f97e30d728b6ea54149001364afb39c5386
| 2020-01-04T17:02:40Z |
python
| 2020-01-05T07:07:39Z |
changelogs/fragments/66189-hostname-osmc.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,189 |
Error using hostname module on OSMC distribution
|
##### SUMMARY
Applying the HOSTNAME module against a box running OSMC (https://osmc.tv/), the following error appears:
```
fatal: [raspitest.thucke.de]: FAILED! => {"changed": false, "msg": "hostname module cannot be used on platform Linux (Osmc)"}
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
Module: hostname
##### ANSIBLE VERSION
```
ansible 2.9.2
config file = /home/ansible/ansible/ansible.cfg
configured module search path = ['/home/ansible/ansible/ansible_modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.5 (default, Oct 17 2019, 12:09:47) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
```
DEFAULT_HOST_LIST(/home/ansible/ansible/ansible.cfg) = ['/home/ansible/ansible/inventory.yml']
DEFAULT_MODULE_PATH(/home/ansible/ansible/ansible.cfg) = ['/home/ansible/ansible/ansible_modules']
DEFAULT_ROLES_PATH(/home/ansible/ansible/ansible.cfg) = ['/home/ansible/ansible/roles', '/home/ansible/ansible/roles']
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Target OS: OSMC on Raspberry Pi 2B+ (http://download.osmc.tv/installers/diskimages/OSMC_TGT_rbp2_20191118.img.gz)
##### STEPS TO REPRODUCE
Just apply a playbook with the module against such a machine.
```
- name: HOSTNAME | Setting real hostname
hostname:
name: "{{ inetHostname.stdout }}"
```
##### EXPECTED RESULTS
Module ends without errors
##### ACTUAL RESULTS
```
fatal: [raspitest.thucke.de]: FAILED! => {"changed": false, "msg": "hostname module cannot be used on platform Linux (Osmc)"}
```
|
https://github.com/ansible/ansible/issues/66189
|
https://github.com/ansible/ansible/pull/66190
|
193879d462013433c5a60e7b16ef724723457f3c
|
d56d0f97e30d728b6ea54149001364afb39c5386
| 2020-01-04T17:02:40Z |
python
| 2020-01-05T07:07:39Z |
lib/ansible/modules/system/hostname.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2013, Hiroaki Nakamura <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: hostname
author:
- Adrian Likins (@alikins)
- Hideki Saito (@saito-hideki)
version_added: "1.4"
short_description: Manage hostname
requirements: [ hostname ]
description:
- Set system's hostname, supports most OSs/Distributions, including those using systemd.
- Note, this module does *NOT* modify C(/etc/hosts). You need to modify it yourself using other modules like template or replace.
- Windows, HP-UX and AIX are not currently supported.
options:
name:
description:
- Name of the host
required: true
use:
description:
- Which strategy to use to update the hostname.
- If not set we try to autodetect, but this can be problematic, especially with containers as they can present misleading information.
choices: ['generic', 'debian','sles', 'redhat', 'alpine', 'systemd', 'openrc', 'openbsd', 'solaris', 'freebsd']
version_added: '2.9'
'''
EXAMPLES = '''
- hostname:
name: web01
'''
import os
import platform
import socket
import traceback
from ansible.module_utils.basic import (
AnsibleModule,
get_distribution,
get_distribution_version,
)
from ansible.module_utils.common.sys_info import get_platform_subclass
from ansible.module_utils.facts.system.service_mgr import ServiceMgrFactCollector
from ansible.module_utils._text import to_native
STRATS = {'generic': 'Generic', 'debian': 'Debian', 'sles': 'SLES', 'redhat': 'RedHat', 'alpine': 'Alpine',
'systemd': 'Systemd', 'openrc': 'OpenRC', 'openbsd': 'OpenBSD', 'solaris': 'Solaris', 'freebsd': 'FreeBSD'}
class UnimplementedStrategy(object):
def __init__(self, module):
self.module = module
def update_current_and_permanent_hostname(self):
self.unimplemented_error()
def update_current_hostname(self):
self.unimplemented_error()
def update_permanent_hostname(self):
self.unimplemented_error()
def get_current_hostname(self):
self.unimplemented_error()
def set_current_hostname(self, name):
self.unimplemented_error()
def get_permanent_hostname(self):
self.unimplemented_error()
def set_permanent_hostname(self, name):
self.unimplemented_error()
def unimplemented_error(self):
system = platform.system()
distribution = get_distribution()
if distribution is not None:
msg_platform = '%s (%s)' % (system, distribution)
else:
msg_platform = system
self.module.fail_json(
msg='hostname module cannot be used on platform %s' % msg_platform)
class Hostname(object):
"""
This is a generic Hostname manipulation class that is subclassed
based on platform.
A subclass may wish to set different strategy instance to self.strategy.
All subclasses MUST define platform and distribution (which may be None).
"""
platform = 'Generic'
distribution = None
strategy_class = UnimplementedStrategy
def __new__(cls, *args, **kwargs):
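# Dispatch to the platform/distribution-specific subclass selected by get_platform_subclass; unknown platforms/distributions fall back to this base class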
new_cls = get_platform_subclass(Hostname)
return super(cls, new_cls).__new__(new_cls)
def __init__(self, module):
self.module = module
self.name = module.params['name']
self.use = module.params['use']
if self.use is not None:
strat = globals()['%sStrategy' % STRATS[self.use]]
self.strategy = strat(module)
elif self.platform == 'Linux' and ServiceMgrFactCollector.is_systemd_managed(module):
self.strategy = SystemdStrategy(module)
else:
self.strategy = self.strategy_class(module)
def update_current_and_permanent_hostname(self):
return self.strategy.update_current_and_permanent_hostname()
def get_current_hostname(self):
return self.strategy.get_current_hostname()
def set_current_hostname(self, name):
self.strategy.set_current_hostname(name)
def get_permanent_hostname(self):
return self.strategy.get_permanent_hostname()
def set_permanent_hostname(self, name):
self.strategy.set_permanent_hostname(name)
class GenericStrategy(object):
"""
This is a generic Hostname manipulation strategy class.
A subclass may wish to override some or all of these methods.
- get_current_hostname()
- get_permanent_hostname()
- set_current_hostname(name)
- set_permanent_hostname(name)
"""
def __init__(self, module):
self.module = module
self.changed = False
self.hostname_cmd = self.module.get_bin_path('hostnamectl', False)
if not self.hostname_cmd:
self.hostname_cmd = self.module.get_bin_path('hostname', True)
def update_current_and_permanent_hostname(self):
self.update_current_hostname()
self.update_permanent_hostname()
return self.changed
def update_current_hostname(self):
name = self.module.params['name']
current_name = self.get_current_hostname()
if current_name != name:
if not self.module.check_mode:
self.set_current_hostname(name)
self.changed = True
def update_permanent_hostname(self):
name = self.module.params['name']
permanent_name = self.get_permanent_hostname()
if permanent_name != name:
if not self.module.check_mode:
self.set_permanent_hostname(name)
self.changed = True
def get_current_hostname(self):
cmd = [self.hostname_cmd]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_current_hostname(self, name):
cmd = [self.hostname_cmd, name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
def get_permanent_hostname(self):
return 'UNKNOWN'
def set_permanent_hostname(self, name):
pass
class DebianStrategy(GenericStrategy):
"""
This is a Debian family Hostname manipulation strategy class - it edits
the /etc/hostname file.
"""
HOSTNAME_FILE = '/etc/hostname'
def get_permanent_hostname(self):
if not os.path.isfile(self.HOSTNAME_FILE):
try:
open(self.HOSTNAME_FILE, "a").write("")
except IOError as e:
self.module.fail_json(msg="failed to write file: %s" %
to_native(e), exception=traceback.format_exc())
try:
f = open(self.HOSTNAME_FILE)
try:
return f.read().strip()
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to read hostname: %s" %
to_native(e), exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
f = open(self.HOSTNAME_FILE, 'w+')
try:
f.write("%s\n" % name)
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to update hostname: %s" %
to_native(e), exception=traceback.format_exc())
class SLESStrategy(GenericStrategy):
"""
This is a SLES Hostname strategy class - it edits the
/etc/HOSTNAME file.
"""
HOSTNAME_FILE = '/etc/HOSTNAME'
def get_permanent_hostname(self):
if not os.path.isfile(self.HOSTNAME_FILE):
try:
open(self.HOSTNAME_FILE, "a").write("")
except IOError as e:
self.module.fail_json(msg="failed to write file: %s" %
to_native(e), exception=traceback.format_exc())
try:
f = open(self.HOSTNAME_FILE)
try:
return f.read().strip()
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to read hostname: %s" %
to_native(e), exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
f = open(self.HOSTNAME_FILE, 'w+')
try:
f.write("%s\n" % name)
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to update hostname: %s" %
to_native(e), exception=traceback.format_exc())
class RedHatStrategy(GenericStrategy):
"""
This is a Redhat Hostname strategy class - it edits the
/etc/sysconfig/network file.
"""
NETWORK_FILE = '/etc/sysconfig/network'
def get_permanent_hostname(self):
try:
f = open(self.NETWORK_FILE, 'rb')
try:
for line in f.readlines():
if line.startswith('HOSTNAME'):
k, v = line.split('=')
return v.strip()
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to read hostname: %s" %
to_native(e), exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
lines = []
found = False
f = open(self.NETWORK_FILE, 'rb')
try:
for line in f.readlines():
if line.startswith('HOSTNAME'):
lines.append("HOSTNAME=%s\n" % name)
found = True
else:
lines.append(line)
finally:
f.close()
if not found:
lines.append("HOSTNAME=%s\n" % name)
f = open(self.NETWORK_FILE, 'w+')
try:
f.writelines(lines)
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to update hostname: %s" %
to_native(e), exception=traceback.format_exc())
class AlpineStrategy(GenericStrategy):
"""
This is an Alpine Linux Hostname manipulation strategy class - it edits
the /etc/hostname file then runs hostname -F /etc/hostname.
"""
HOSTNAME_FILE = '/etc/hostname'
def update_current_and_permanent_hostname(self):
self.update_permanent_hostname()
self.update_current_hostname()
return self.changed
def get_permanent_hostname(self):
if not os.path.isfile(self.HOSTNAME_FILE):
try:
open(self.HOSTNAME_FILE, "a").write("")
except IOError as e:
self.module.fail_json(msg="failed to write file: %s" %
to_native(e), exception=traceback.format_exc())
try:
f = open(self.HOSTNAME_FILE)
try:
return f.read().strip()
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to read hostname: %s" %
to_native(e), exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
f = open(self.HOSTNAME_FILE, 'w+')
try:
f.write("%s\n" % name)
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to update hostname: %s" %
to_native(e), exception=traceback.format_exc())
def set_current_hostname(self, name):
cmd = [self.hostname_cmd, '-F', self.HOSTNAME_FILE]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
class SystemdStrategy(GenericStrategy):
"""
This is a Systemd hostname manipulation strategy class - it uses
the hostnamectl command.
"""
def get_current_hostname(self):
cmd = [self.hostname_cmd, '--transient', 'status']
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_current_hostname(self, name):
if len(name) > 64:
self.module.fail_json(msg="name cannot be longer than 64 characters on systemd servers, try a shorter name")
cmd = [self.hostname_cmd, '--transient', 'set-hostname', name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
def get_permanent_hostname(self):
cmd = [self.hostname_cmd, '--static', 'status']
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_permanent_hostname(self, name):
if len(name) > 64:
self.module.fail_json(msg="name cannot be longer than 64 characters on systemd servers, try a shorter name")
cmd = [self.hostname_cmd, '--pretty', 'set-hostname', name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
cmd = [self.hostname_cmd, '--static', 'set-hostname', name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
class OpenRCStrategy(GenericStrategy):
"""
This is a Gentoo (OpenRC) Hostname manipulation strategy class - it edits
the /etc/conf.d/hostname file.
"""
HOSTNAME_FILE = '/etc/conf.d/hostname'
def get_permanent_hostname(self):
name = 'UNKNOWN'
try:
try:
f = open(self.HOSTNAME_FILE, 'r')
for line in f:
line = line.strip()
if line.startswith('hostname='):
name = line[10:].strip('"')
break
except Exception as e:
self.module.fail_json(msg="failed to read hostname: %s" %
to_native(e), exception=traceback.format_exc())
finally:
f.close()
return name
def set_permanent_hostname(self, name):
try:
try:
f = open(self.HOSTNAME_FILE, 'r')
lines = [x.strip() for x in f]
for i, line in enumerate(lines):
if line.startswith('hostname='):
lines[i] = 'hostname="%s"' % name
break
f.close()
f = open(self.HOSTNAME_FILE, 'w')
f.write('\n'.join(lines) + '\n')
except Exception as e:
self.module.fail_json(msg="failed to update hostname: %s" %
to_native(e), exception=traceback.format_exc())
finally:
f.close()
class OpenBSDStrategy(GenericStrategy):
"""
This is an OpenBSD family Hostname manipulation strategy class - it edits
the /etc/myname file.
"""
HOSTNAME_FILE = '/etc/myname'
def get_permanent_hostname(self):
if not os.path.isfile(self.HOSTNAME_FILE):
try:
open(self.HOSTNAME_FILE, "a").write("")
except IOError as e:
self.module.fail_json(msg="failed to write file: %s" %
to_native(e), exception=traceback.format_exc())
try:
f = open(self.HOSTNAME_FILE)
try:
return f.read().strip()
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to read hostname: %s" %
to_native(e), exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
f = open(self.HOSTNAME_FILE, 'w+')
try:
f.write("%s\n" % name)
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to update hostname: %s" %
to_native(e), exception=traceback.format_exc())
class SolarisStrategy(GenericStrategy):
"""
This is a Solaris 11 or later Hostname manipulation strategy class - it
executes the hostname command.
"""
def set_current_hostname(self, name):
cmd_option = '-t'
cmd = [self.hostname_cmd, cmd_option, name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
def get_permanent_hostname(self):
fmri = 'svc:/system/identity:node'
pattern = 'config/nodename'
cmd = '/usr/sbin/svccfg -s %s listprop -o value %s' % (fmri, pattern)
rc, out, err = self.module.run_command(cmd, use_unsafe_shell=True)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_permanent_hostname(self, name):
cmd = [self.hostname_cmd, name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
class FreeBSDStrategy(GenericStrategy):
"""
This is a FreeBSD hostname manipulation strategy class - it edits
the /etc/rc.conf.d/hostname file.
"""
HOSTNAME_FILE = '/etc/rc.conf.d/hostname'
def get_permanent_hostname(self):
name = 'UNKNOWN'
if not os.path.isfile(self.HOSTNAME_FILE):
try:
open(self.HOSTNAME_FILE, "a").write("hostname=temporarystub\n")
except IOError as e:
self.module.fail_json(msg="failed to write file: %s" %
to_native(e), exception=traceback.format_exc())
try:
try:
f = open(self.HOSTNAME_FILE, 'r')
for line in f:
line = line.strip()
if line.startswith('hostname='):
name = line[10:].strip('"')
break
except Exception as e:
self.module.fail_json(msg="failed to read hostname: %s" %
to_native(e), exception=traceback.format_exc())
finally:
f.close()
return name
def set_permanent_hostname(self, name):
try:
try:
f = open(self.HOSTNAME_FILE, 'r')
lines = [x.strip() for x in f]
for i, line in enumerate(lines):
if line.startswith('hostname='):
lines[i] = 'hostname="%s"' % name
break
f.close()
f = open(self.HOSTNAME_FILE, 'w')
f.write('\n'.join(lines) + '\n')
except Exception as e:
self.module.fail_json(msg="failed to update hostname: %s" %
to_native(e), exception=traceback.format_exc())
finally:
f.close()
class FedoraHostname(Hostname):
platform = 'Linux'
distribution = 'Fedora'
strategy_class = SystemdStrategy
class SLESHostname(Hostname):
platform = 'Linux'
distribution = 'Sles'
try:
distribution_version = get_distribution_version()
# cast to float may raise ValueError on non SLES, we use float for a little more safety over int
if distribution_version and 10 <= float(distribution_version) <= 12:
strategy_class = SLESStrategy
else:
raise ValueError()
except ValueError:
strategy_class = UnimplementedStrategy
class OpenSUSEHostname(Hostname):
platform = 'Linux'
distribution = 'Opensuse'
strategy_class = SystemdStrategy
class OpenSUSELeapHostname(Hostname):
platform = 'Linux'
distribution = 'Opensuse-leap'
strategy_class = SystemdStrategy
class OpenSUSETumbleweedHostname(Hostname):
platform = 'Linux'
distribution = 'Opensuse-tumbleweed'
strategy_class = SystemdStrategy
class AsteraHostname(Hostname):
platform = 'Linux'
distribution = '"astralinuxce"'
strategy_class = SystemdStrategy
class ArchHostname(Hostname):
platform = 'Linux'
distribution = 'Arch'
strategy_class = SystemdStrategy
class ArchARMHostname(Hostname):
platform = 'Linux'
distribution = 'Archarm'
strategy_class = SystemdStrategy
class ManjaroHostname(Hostname):
platform = 'Linux'
distribution = 'Manjaro'
strategy_class = SystemdStrategy
class RHELHostname(Hostname):
platform = 'Linux'
distribution = 'Redhat'
strategy_class = RedHatStrategy
class CentOSHostname(Hostname):
platform = 'Linux'
distribution = 'Centos'
strategy_class = RedHatStrategy
class ClearLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Clear-linux-os'
strategy_class = SystemdStrategy
class CloudlinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Cloudlinux'
strategy_class = RedHatStrategy
class CoreosHostname(Hostname):
platform = 'Linux'
distribution = 'Coreos'
strategy_class = SystemdStrategy
class ScientificHostname(Hostname):
platform = 'Linux'
distribution = 'Scientific'
strategy_class = RedHatStrategy
class OracleLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Oracle'
strategy_class = RedHatStrategy
class VirtuozzoLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Virtuozzo'
strategy_class = RedHatStrategy
class AmazonLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Amazon'
strategy_class = RedHatStrategy
class DebianHostname(Hostname):
platform = 'Linux'
distribution = 'Debian'
strategy_class = DebianStrategy
class KylinHostname(Hostname):
platform = 'Linux'
distribution = 'Kylin'
strategy_class = DebianStrategy
class CumulusHostname(Hostname):
platform = 'Linux'
distribution = 'Cumulus-linux'
strategy_class = DebianStrategy
class KaliHostname(Hostname):
platform = 'Linux'
distribution = 'Kali'
strategy_class = DebianStrategy
class UbuntuHostname(Hostname):
platform = 'Linux'
distribution = 'Ubuntu'
strategy_class = DebianStrategy
class LinuxmintHostname(Hostname):
platform = 'Linux'
distribution = 'Linuxmint'
strategy_class = DebianStrategy
class LinaroHostname(Hostname):
platform = 'Linux'
distribution = 'Linaro'
strategy_class = DebianStrategy
class DevuanHostname(Hostname):
platform = 'Linux'
distribution = 'Devuan'
strategy_class = DebianStrategy
class RaspbianHostname(Hostname):
platform = 'Linux'
distribution = 'Raspbian'
strategy_class = DebianStrategy
class GentooHostname(Hostname):
platform = 'Linux'
distribution = 'Gentoo'
strategy_class = OpenRCStrategy
class ALTLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Altlinux'
strategy_class = RedHatStrategy
class AlpineLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Alpine'
strategy_class = AlpineStrategy
class OpenBSDHostname(Hostname):
platform = 'OpenBSD'
distribution = None
strategy_class = OpenBSDStrategy
class SolarisHostname(Hostname):
platform = 'SunOS'
distribution = None
strategy_class = SolarisStrategy
class FreeBSDHostname(Hostname):
platform = 'FreeBSD'
distribution = None
strategy_class = FreeBSDStrategy
class NetBSDHostname(Hostname):
platform = 'NetBSD'
distribution = None
strategy_class = FreeBSDStrategy
class NeonHostname(Hostname):
platform = 'Linux'
distribution = 'Neon'
strategy_class = DebianStrategy
def main():
module = AnsibleModule(
argument_spec=dict(
name=dict(type='str', required=True),
use=dict(type='str', choices=STRATS.keys())
),
supports_check_mode=True,
)
hostname = Hostname(module)
name = module.params['name']
current_hostname = hostname.get_current_hostname()
permanent_hostname = hostname.get_permanent_hostname()
changed = hostname.update_current_and_permanent_hostname()
if name != current_hostname:
name_before = current_hostname
elif name != permanent_hostname:
name_before = permanent_hostname
kw = dict(changed=changed, name=name,
ansible_facts=dict(ansible_hostname=name.split('.')[0],
ansible_nodename=name,
ansible_fqdn=socket.getfqdn(),
ansible_domain='.'.join(socket.getfqdn().split('.')[1:])))
if changed:
kw['diff'] = {'after': 'hostname = ' + name + '\n',
'before': 'hostname = ' + name_before + '\n'}
module.exit_json(**kw)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,185 |
vyos_config: Testcase is failing
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The vyos_config testcase is failing - https://app.shippable.com/github/ansible/ansible/runs/154844/60/console.
This change was recently added via https://github.com/ansible/ansible/pull/66122.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
test/integration/targets/vyos_config/tests/cli/simple.yaml
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
devel
```
##### CONFIGURATION
NA
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
https://app.shippable.com/github/ansible/ansible/runs/154844/60/console
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Test should pass
##### ACTUAL RESULTS
Test is failing
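The failing assertions check how a `src` file whose commands start with whitespace is handled. The `config.cfg` referenced by the test is not shown here, but based on the assertions it would contain something like (hypothetical content):

```
  set service lldp
  set protocols static
```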
|
https://github.com/ansible/ansible/issues/66185
|
https://github.com/ansible/ansible/pull/66188
|
6b1dff64d0958e80c9b63b1e971dd67d00aa8c1d
|
8ae5e34127216bf56544ccfd54cc48979741ad48
| 2020-01-04T07:00:01Z |
python
| 2020-01-06T10:50:36Z |
test/integration/targets/vyos_config/tests/cli/simple.yaml
|
---
- debug: msg="START cli/simple.yaml on connection={{ ansible_connection }}"
- name: setup
vyos_config:
lines: set system host-name {{ inventory_hostname_short }}
match: none
- name: configure simple config command
vyos_config:
lines: set system host-name foo
register: result
- assert:
that:
- "result.changed == true"
- "'set system host-name foo' in result.commands"
- name: check simple config command idempotent
vyos_config:
lines: set system host-name foo
register: result
- assert:
that:
- "result.changed == false"
- name: Configuring when commands start with whitespace
vyos_config:
src: "{{ role_path }}/tests/cli/config.cfg"
register: result
- assert:
that:
- "result.changed == true"
- '"set service lldp" in result.commands'
- '"set protocols static" in result.commands'
- name: Delete description
vyos_config:
lines:
- delete service lldp
- delete protocols static
register: result
- assert:
that:
- "result.changed == true"
- '"delete service lldp" in result.commands'
- '"delete protocols static" in result.commands'
- name: teardown
vyos_config:
lines: set system host-name {{ inventory_hostname_short }}
match: none
- debug: msg="END cli/simple.yaml on connection={{ ansible_connection }}"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,127 |
set_options method not called for collections callback
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
It seems that the `set_options` method is not called when a callback is from a collection.
In `task_queue_manager.py`, in `load_callbacks`, the loop handling collection callbacks does not call `callback_obj.set_options()`, although this is done for callbacks from the other sources.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
task_queue_manager.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.2
config file = /Users/remirey/dev/ansible_collections/community/grafana/tmp/ansible.cfg
configured module search path = [u'/Users/remirey/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /Users/remirey/.venv/grafana/lib/python2.7/site-packages/ansible
executable location = /Users/remirey/.venv/grafana/bin/ansible
python version = 2.7.13 (default, Apr 4 2017, 08:47:57) [GCC 4.2.1 Compatible Apple LLVM 8.1.0 (clang-802.0.38)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_CALLBACK_WHITELIST(/Users/remirey/dev/ansible_collections/community/grafana/tmp/ansible.cfg) = [u'community.grafana.grafana_annotations']
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
N/A
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
See https://github.com/ansible-collections/grafana/issues/30
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
No error/warning when executing the `grafana_annotations` callback from a collection.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
[ERROR]: Could not submit message to Grafana: 'CallbackModule' object has no attribute 'grafana_url'
PLAY [localhost] ********************************************************************************************************************************************************************
TASK [Gathering Facts] **************************************************************************************************************************************************************
ok: [localhost]
TASK [debug] ************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "nothing to do"
}
PLAY RECAP **************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
[WARNING]: Failure using method (v2_playbook_on_stats) in callback plugin (<ansible_collections.community.grafana.plugins.callback.grafana_annotations.CallbackModule object at
0x108d6ef90>): 'CallbackModule' object has no attribute 'dashboard_id'
```
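A minimal sketch of the fix described in the summary, mirroring what the non-collection callback loop already does (not necessarily the exact merged change):

```python
for callback_plugin_name in (c for c in C.DEFAULT_CALLBACK_WHITELIST if AnsibleCollectionRef.is_valid_fqcr(c)):
    callback_obj = callback_loader.get(callback_plugin_name)
    callback_obj.set_options()  # give FQCR callbacks their options, as is done for other sources
    self._callback_plugins.append(callback_obj)
```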
|
https://github.com/ansible/ansible/issues/66127
|
https://github.com/ansible/ansible/pull/66128
|
80dff8743afe478f443ecbd69ed0bee512aaf04c
|
7888eafb820e22672df3a0051e2a1b48af8adf6d
| 2019-12-30T11:25:54Z |
python
| 2020-01-06T16:32:36Z |
changelogs/fragments/66128-fix-callback-set-options.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,127 |
set_options method not called for collections callback
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
It seems that the `set_options` method is not called when a callback is from a collection.
In `task_queue_manager.py`, in `load_callbacks`, the loop handling collection callbacks does not call `callback_obj.set_options()`, although this is done for callbacks from the other sources.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
task_queue_manager.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.2
config file = /Users/remirey/dev/ansible_collections/community/grafana/tmp/ansible.cfg
configured module search path = [u'/Users/remirey/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /Users/remirey/.venv/grafana/lib/python2.7/site-packages/ansible
executable location = /Users/remirey/.venv/grafana/bin/ansible
python version = 2.7.13 (default, Apr 4 2017, 08:47:57) [GCC 4.2.1 Compatible Apple LLVM 8.1.0 (clang-802.0.38)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_CALLBACK_WHITELIST(/Users/remirey/dev/ansible_collections/community/grafana/tmp/ansible.cfg) = [u'community.grafana.grafana_annotations']
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
N/A
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
See https://github.com/ansible-collections/grafana/issues/30
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
No error/warning when executing the `grafana_annotations` callback from a collection.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
[ERROR]: Could not submit message to Grafana: 'CallbackModule' object has no attribute 'grafana_url'
PLAY [localhost] ********************************************************************************************************************************************************************
TASK [Gathering Facts] **************************************************************************************************************************************************************
ok: [localhost]
TASK [debug] ************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "nothing to do"
}
PLAY RECAP **************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
[WARNING]: Failure using method (v2_playbook_on_stats) in callback plugin (<ansible_collections.community.grafana.plugins.callback.grafana_annotations.CallbackModule object at
0x108d6ef90>): 'CallbackModule' object has no attribute 'dashboard_id'
```
|
https://github.com/ansible/ansible/issues/66127
|
https://github.com/ansible/ansible/pull/66128
|
80dff8743afe478f443ecbd69ed0bee512aaf04c
|
7888eafb820e22672df3a0051e2a1b48af8adf6d
| 2019-12-30T11:25:54Z |
python
| 2020-01-06T16:32:36Z |
lib/ansible/executor/task_queue_manager.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import tempfile
from ansible import constants as C
from ansible import context
from ansible.errors import AnsibleError
from ansible.executor.play_iterator import PlayIterator
from ansible.executor.stats import AggregateStats
from ansible.executor.task_result import TaskResult
from ansible.module_utils.six import string_types
from ansible.module_utils._text import to_text, to_native
from ansible.playbook.block import Block
from ansible.playbook.play_context import PlayContext
from ansible.plugins.loader import callback_loader, strategy_loader, module_loader
from ansible.plugins.callback import CallbackBase
from ansible.template import Templar
from ansible.utils.collection_loader import AnsibleCollectionRef
from ansible.utils.helpers import pct_to_int
from ansible.vars.hostvars import HostVars
from ansible.vars.reserved import warn_if_reserved
from ansible.utils.display import Display
from ansible.utils.multiprocessing import context as multiprocessing_context
__all__ = ['TaskQueueManager']
display = Display()
class TaskQueueManager:
'''
This class handles the multiprocessing requirements of Ansible by
creating a pool of worker forks, a result handler fork, and a
manager object with shared datastructures/queues for coordinating
work between all processes.
The queue manager is responsible for loading the play strategy plugin,
which dispatches the Play's tasks to hosts.
'''
RUN_OK = 0
RUN_ERROR = 1
RUN_FAILED_HOSTS = 2
RUN_UNREACHABLE_HOSTS = 4
RUN_FAILED_BREAK_PLAY = 8
RUN_UNKNOWN_ERROR = 255
def __init__(self, inventory, variable_manager, loader, passwords, stdout_callback=None, run_additional_callbacks=True, run_tree=False, forks=None):
self._inventory = inventory
self._variable_manager = variable_manager
self._loader = loader
self._stats = AggregateStats()
self.passwords = passwords
self._stdout_callback = stdout_callback
self._run_additional_callbacks = run_additional_callbacks
self._run_tree = run_tree
self._forks = forks or 5
self._callbacks_loaded = False
self._callback_plugins = []
self._start_at_done = False
# make sure any module paths (if specified) are added to the module_loader
if context.CLIARGS.get('module_path', False):
for path in context.CLIARGS['module_path']:
if path:
module_loader.add_directory(path)
# a special flag to help us exit cleanly
self._terminated = False
# dictionaries to keep track of failed/unreachable hosts
self._failed_hosts = dict()
self._unreachable_hosts = dict()
try:
self._final_q = multiprocessing_context.Queue()
except OSError as e:
raise AnsibleError("Unable to use multiprocessing, this is normally caused by lack of access to /dev/shm: %s" % to_native(e))
# A temporary file (opened pre-fork) used by connection
# plugins for inter-process locking.
self._connection_lockfile = tempfile.TemporaryFile()
def _initialize_processes(self, num):
self._workers = []
for i in range(num):
self._workers.append(None)
def load_callbacks(self):
'''
Loads all available callbacks, with the exception of those which
utilize the CALLBACK_TYPE option. When CALLBACK_TYPE is set to 'stdout',
only one such callback plugin will be loaded.
'''
if self._callbacks_loaded:
return
stdout_callback_loaded = False
if self._stdout_callback is None:
self._stdout_callback = C.DEFAULT_STDOUT_CALLBACK
if isinstance(self._stdout_callback, CallbackBase):
stdout_callback_loaded = True
elif isinstance(self._stdout_callback, string_types):
if self._stdout_callback not in callback_loader:
raise AnsibleError("Invalid callback for stdout specified: %s" % self._stdout_callback)
else:
self._stdout_callback = callback_loader.get(self._stdout_callback)
self._stdout_callback.set_options()
stdout_callback_loaded = True
else:
raise AnsibleError("callback must be an instance of CallbackBase or the name of a callback plugin")
for callback_plugin in callback_loader.all(class_only=True):
callback_type = getattr(callback_plugin, 'CALLBACK_TYPE', '')
callback_needs_whitelist = getattr(callback_plugin, 'CALLBACK_NEEDS_WHITELIST', False)
(callback_name, _) = os.path.splitext(os.path.basename(callback_plugin._original_path))
if callback_type == 'stdout':
# we only allow one callback of type 'stdout' to be loaded,
if callback_name != self._stdout_callback or stdout_callback_loaded:
continue
stdout_callback_loaded = True
elif callback_name == 'tree' and self._run_tree:
# special case for ansible cli option
pass
elif not self._run_additional_callbacks or (callback_needs_whitelist and (
C.DEFAULT_CALLBACK_WHITELIST is None or callback_name not in C.DEFAULT_CALLBACK_WHITELIST)):
# 2.x plugins shipped with ansible should require whitelisting, older or non shipped should load automatically
continue
callback_obj = callback_plugin()
callback_obj.set_options()
self._callback_plugins.append(callback_obj)
for callback_plugin_name in (c for c in C.DEFAULT_CALLBACK_WHITELIST if AnsibleCollectionRef.is_valid_fqcr(c)):
# TODO: need to extend/duplicate the stdout callback check here (and possible move this ahead of the old way
callback_obj = callback_loader.get(callback_plugin_name)
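# NOTE: set_options() is never called on these FQCR callbacks -- the bug reported in this issue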
self._callback_plugins.append(callback_obj)
self._callbacks_loaded = True
def run(self, play):
'''
Iterates over the roles/tasks in a play, using the given (or default)
strategy for queueing tasks. The default is the linear strategy, which
operates like classic Ansible by keeping all hosts in lock-step with
a given task (meaning no hosts move on to the next task until all hosts
are done with the current task).
'''
if not self._callbacks_loaded:
self.load_callbacks()
all_vars = self._variable_manager.get_vars(play=play)
warn_if_reserved(all_vars)
templar = Templar(loader=self._loader, variables=all_vars)
new_play = play.copy()
new_play.post_validate(templar)
new_play.handlers = new_play.compile_roles_handlers() + new_play.handlers
self.hostvars = HostVars(
inventory=self._inventory,
variable_manager=self._variable_manager,
loader=self._loader,
)
play_context = PlayContext(new_play, self.passwords, self._connection_lockfile.fileno())
if (self._stdout_callback and
hasattr(self._stdout_callback, 'set_play_context')):
self._stdout_callback.set_play_context(play_context)
for callback_plugin in self._callback_plugins:
if hasattr(callback_plugin, 'set_play_context'):
callback_plugin.set_play_context(play_context)
self.send_callback('v2_playbook_on_play_start', new_play)
# build the iterator
iterator = PlayIterator(
inventory=self._inventory,
play=new_play,
play_context=play_context,
variable_manager=self._variable_manager,
all_vars=all_vars,
start_at_done=self._start_at_done,
)
# adjust to # of workers to configured forks or size of batch, whatever is lower
self._initialize_processes(min(self._forks, iterator.batch_size))
# load the specified strategy (or the default linear one)
strategy = strategy_loader.get(new_play.strategy, self)
if strategy is None:
raise AnsibleError("Invalid play strategy specified: %s" % new_play.strategy, obj=play._ds)
# Because the TQM may survive multiple play runs, we start by marking
# any hosts as failed in the iterator here which may have been marked
# as failed in previous runs. Then we clear the internal list of failed
# hosts so we know what failed this round.
for host_name in self._failed_hosts.keys():
host = self._inventory.get_host(host_name)
iterator.mark_host_failed(host)
self.clear_failed_hosts()
# during initialization, the PlayContext will clear the start_at_task
# field to signal that a matching task was found, so check that here
# and remember it so we don't try to skip tasks on future plays
if context.CLIARGS.get('start_at_task') is not None and play_context.start_at_task is None:
self._start_at_done = True
# and run the play using the strategy and cleanup on way out
play_return = strategy.run(iterator, play_context)
# now re-save the hosts that failed from the iterator to our internal list
for host_name in iterator.get_failed_hosts():
self._failed_hosts[host_name] = True
strategy.cleanup()
self._cleanup_processes()
return play_return
def cleanup(self):
display.debug("RUNNING CLEANUP")
self.terminate()
self._final_q.close()
self._cleanup_processes()
def _cleanup_processes(self):
if hasattr(self, '_workers'):
for worker_prc in self._workers:
if worker_prc and worker_prc.is_alive():
try:
worker_prc.terminate()
except AttributeError:
pass
def clear_failed_hosts(self):
self._failed_hosts = dict()
def get_inventory(self):
return self._inventory
def get_variable_manager(self):
return self._variable_manager
def get_loader(self):
return self._loader
def get_workers(self):
return self._workers[:]
def terminate(self):
self._terminated = True
def has_dead_workers(self):
# examples of dead workers:
# [<WorkerProcess(WorkerProcess-2, stopped[SIGKILL])>,
#  <WorkerProcess(WorkerProcess-2, stopped[SIGTERM])>]
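# A worker's exitcode is None while it is still running; a truthy exitcode
# (a non-zero return code, or a negative value for a signal-killed process)
# therefore marks the pool as having dead workers.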
defunct = False
for x in self._workers:
if getattr(x, 'exitcode', None):
defunct = True
return defunct
def send_callback(self, method_name, *args, **kwargs):
for callback_plugin in [self._stdout_callback] + self._callback_plugins:
# a plugin that set self.disabled to True will not be called
# see osx_say.py example for such a plugin
if getattr(callback_plugin, 'disabled', False):
continue
# try to find v2 method, fallback to v1 method, ignore callback if no method found
methods = []
for possible in [method_name, 'v2_on_any']:
gotit = getattr(callback_plugin, possible, None)
if gotit is None:
gotit = getattr(callback_plugin, possible.replace('v2_', ''), None)
if gotit is not None:
methods.append(gotit)
# send clean copies
new_args = []
for arg in args:
# FIXME: add play/task cleaners
if isinstance(arg, TaskResult):
new_args.append(arg.clean_copy())
# elif isinstance(arg, Play):
# elif isinstance(arg, Task):
else:
new_args.append(arg)
for method in methods:
try:
method(*new_args, **kwargs)
except Exception as e:
# TODO: add config toggle to make this fatal or not?
display.warning(u"Failure using method (%s) in callback plugin (%s): %s" % (to_text(method_name), to_text(callback_plugin), to_text(e)))
from traceback import format_tb
from sys import exc_info
display.vvv('Callback Exception: \n' + ' '.join(format_tb(exc_info()[2])))
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,993 |
docker_container: Always recreates the container when restart_policy is changed
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
The [docker_container](https://docs.ansible.com/ansible/latest/modules/docker_container_module.html) module recreates a container when the `restart_policy` is changed. This can be changed using the `docker update` command.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
[docker_container](https://docs.ansible.com/ansible/latest/modules/docker_container_module.html)
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```shell
$ ansible --version
ansible 2.9.2
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/ingmar/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.5 (default, Oct 17 2019, 12:09:47) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
$ ansible-config dump --only-changed
<empty>
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
```shell
$ lsb_release -a
LSB Version: :core-4.1-amd64:core-4.1-noarch
Distributor ID: Fedora
Description: Fedora release 30 (Thirty)
Release: 30
Codename: Thirty
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
1. Run the following playbook:
```sh
$ ansible-playbook docker-container-update.yml
```
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Container update restart_policy
hosts: localhost
tasks:
- name: Container with restart_policy
docker_container:
name: restart_policy_example
image: alpine
state: present
restart_policy: unless-stopped
entrypoint: top
```
2. Change the `restart_policy` to `always` and run it again.
```sh
$ ansible-playbook docker-container-update.yml
```
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Container update restart_policy
hosts: localhost
tasks:
- name: Container with restart_policy
docker_container:
name: restart_policy_example
image: alpine
state: present
restart_policy: always
entrypoint: top
```
3. Notice by using the `docker ps` command that the container is recreated instead of updated.
##### EXPECTED RESULTS
I expect the container to be updated with the new `restart_policy` by using the `docker update` command:
```sh
$ docker update --restart=<new policy> <container>
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The container is recreated instead of updated.
As a workaround, I currently run a `command` task before the container task, which updates the restart policy.
```yml
# Update docker restart policy
- name: Update docker container restart policy
command: "docker update --restart={{ (restart_policy == false) | ternary('no', restart_policy) }} {{ container }}"
ignore_errors: true
```
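For reference, this in-place update is also exposed by the Docker SDK for Python that the module builds on. A minimal sketch (the container name and policy are just the illustrative values from this report):
```python
# Sketch: change the restart policy in place via the Docker SDK for Python,
# which is the same operation `docker update --restart=...` performs.
import docker

client = docker.from_env()
container = client.containers.get("restart_policy_example")
# MaximumRetryCount is only meaningful for the "on-failure" policy.
container.update(restart_policy={"Name": "always", "MaximumRetryCount": 0})
```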
|
https://github.com/ansible/ansible/issues/65993
|
https://github.com/ansible/ansible/pull/66192
|
ec34235e2ec30e74d6ee2967ea5ac8330bbb7d97
|
02c126f5ee451a7dc1b442d15a883bc3f86505d1
| 2019-12-20T10:11:48Z |
python
| 2020-01-06T19:49:48Z |
changelogs/fragments/65993-restart-docker_container-on-restart-policy-updates.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,993 |
docker_container: Always recreates the container when restart_policy is changed
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
The [docker_container](https://docs.ansible.com/ansible/latest/modules/docker_container_module.html) module recreates a container when the `restart_policy` is changed. This can be changed using the `docker update` command.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
[docker_container](https://docs.ansible.com/ansible/latest/modules/docker_container_module.html)
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```shell
$ ansible --version
ansible 2.9.2
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/ingmar/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.5 (default, Oct 17 2019, 12:09:47) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
$ ansible-config dump --only-changed
<empty>
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
```shell
$ lsb_release -a
LSB Version: :core-4.1-amd64:core-4.1-noarch
Distributor ID: Fedora
Description: Fedora release 30 (Thirty)
Release: 30
Codename: Thirty
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
1. Run the following playbook:
```sh
$ ansible-playbook docker-container-update.yml
```
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Container update restart_policy
hosts: localhost
tasks:
- name: Container with restart_policy
docker_container:
name: restart_policy_example
image: alpine
state: present
restart_policy: unless-stopped
entrypoint: top
```
2. Change the `restart_policy` to `always` and run it again.
```sh
$ ansible-playbook docker-container-update.yml
```
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Container update restart_policy
hosts: localhost
tasks:
- name: Container with restart_policy
docker_container:
name: restart_policy_example
image: alpine
state: present
restart_policy: always
entrypoint: top
```
3. Notice by using the `docker ps` command that the container is recreated instead of updated.
##### EXPECTED RESULTS
I expect the container to be updated with the new `restart_policy` by using the `docker update` command:
```sh
$ docker update --restart=<new policy> <container>
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The container is recreated instead of updated.
As a workaround, I currently run a `command` task before the container task, which updates the restart policy.
```yml
# Update docker restart policy
- name: Update docker container restart policy
command: "docker update --restart={{ (restart_policy == false) | ternary('no', restart_policy) }} {{ container }}"
ignore_errors: true
```
|
https://github.com/ansible/ansible/issues/65993
|
https://github.com/ansible/ansible/pull/66192
|
ec34235e2ec30e74d6ee2967ea5ac8330bbb7d97
|
02c126f5ee451a7dc1b442d15a883bc3f86505d1
| 2019-12-20T10:11:48Z |
python
| 2020-01-06T19:49:48Z |
lib/ansible/modules/cloud/docker/docker_container.py
|
#!/usr/bin/python
#
# Copyright 2016 Red Hat | Ansible
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: docker_container
short_description: manage docker containers
description:
- Manage the life cycle of docker containers.
- Supports check mode. Run with C(--check) and C(--diff) to view config difference and list of actions to be taken.
version_added: "2.1"
notes:
- For most config changes, the container needs to be recreated, i.e. the existing container has to be destroyed and
a new one created. This can cause unexpected data loss and downtime. You can use the I(comparisons) option to
prevent this.
- If the module needs to recreate the container, it will only use the options provided to the module to create the
new container (except I(image)). Therefore, always specify *all* options relevant to the container.
- When I(restart) is set to C(true), the module will only restart the container if no config changes are detected.
Please note that several options have default values; if the container to be restarted uses different values for
these options, it will be recreated instead. The options with default values which can cause this are I(auto_remove),
I(detach), I(init), I(interactive), I(memory), I(paused), I(privileged), I(read_only) and I(tty). This behavior
can be changed by setting I(container_default_behavior) to C(no_defaults), which will be the default value from
Ansible 2.14 on.
options:
auto_remove:
description:
- Enable auto-removal of the container on daemon side when the container's process exits.
- If I(container_default_behavior) is set to C(compatibility) (the default value), this
option has a default of C(no).
type: bool
version_added: "2.4"
blkio_weight:
description:
- Block IO (relative weight), between 10 and 1000.
type: int
capabilities:
description:
- List of capabilities to add to the container.
type: list
elements: str
cap_drop:
description:
- List of capabilities to drop from the container.
type: list
elements: str
version_added: "2.7"
cleanup:
description:
- Use with I(detach=false) to remove the container after successful execution.
type: bool
default: no
version_added: "2.2"
command:
description:
- Command to execute when the container starts. A command may be either a string or a list.
- Prior to version 2.4, strings were split on commas.
type: raw
comparisons:
description:
- Allows specifying how properties of existing containers are compared with
module options to decide whether the container should be recreated / updated
or not.
- Only options which correspond to the state of a container as handled by the
Docker daemon can be specified, as well as C(networks).
- Must be a dictionary specifying for an option one of the keys C(strict), C(ignore)
and C(allow_more_present).
- If C(strict) is specified, values are tested for equality, and changes always
result in updating or restarting. If C(ignore) is specified, changes are ignored.
- C(allow_more_present) is allowed only for lists, sets and dicts. If it is
specified for lists or sets, the container will only be updated or restarted if
the module option contains a value which is not present in the container's
options. If the option is specified for a dict, the container will only be updated
or restarted if the module option contains a key which isn't present in the
container's option, or if the value of a key present differs.
- The wildcard option C(*) can be used to set one of the default values C(strict)
or C(ignore) to *all* comparisons which are not explicitly set to other values.
- See the examples for details.
type: dict
version_added: "2.8"
container_default_behavior:
description:
- Various module options used to have default values. This causes problems with
containers which use different values for these options.
- The default value is C(compatibility), which will ensure that the default values
are used when the values are not explicitly specified by the user.
- From Ansible 2.14 on, the default value will switch to C(no_defaults). To avoid
deprecation warnings, please set I(container_default_behavior) to an explicit
value.
- This affects the I(auto_remove), I(detach), I(init), I(interactive), I(memory),
I(paused), I(privileged), I(read_only) and I(tty) options.
type: str
choices:
- compatibility
- no_defaults
version_added: "2.10"
cpu_period:
description:
- Limit CPU CFS (Completely Fair Scheduler) period.
- See I(cpus) for an easier to use alternative.
type: int
cpu_quota:
description:
- Limit CPU CFS (Completely Fair Scheduler) quota.
- See I(cpus) for an easier to use alternative.
type: int
cpus:
description:
- Specify how much of the available CPU resources a container can use.
- A value of C(1.5) means that at most one and a half CPU (core) will be used.
type: float
version_added: '2.10'
cpuset_cpus:
description:
- CPUs in which to allow execution C(1,3) or C(1-3).
type: str
cpuset_mems:
description:
- Memory nodes (MEMs) in which to allow execution C(0-3) or C(0,1).
type: str
cpu_shares:
description:
- CPU shares (relative weight).
type: int
detach:
description:
- Enable detached mode to leave the container running in background.
- If disabled, the task will reflect the status of the container run (failed if the command failed).
- If I(container_default_behavior) is set to C(compatibility) (the default value), this
option has a default of C(yes).
type: bool
devices:
description:
- List of host device bindings to add to the container.
- "Each binding is a mapping expressed in the format C(<path_on_host>:<path_in_container>:<cgroup_permissions>)."
type: list
elements: str
device_read_bps:
description:
- "List of device path and read rate (bytes per second) from device."
type: list
elements: dict
suboptions:
path:
description:
- Device path in the container.
type: str
required: yes
rate:
description:
- "Device read limit in format C(<number>[<unit>])."
- "Number is a positive integer. Unit can be one of C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- "Omitting the unit defaults to bytes."
type: str
required: yes
version_added: "2.8"
device_write_bps:
description:
- "List of device and write rate (bytes per second) to device."
type: list
elements: dict
suboptions:
path:
description:
- Device path in the container.
type: str
required: yes
rate:
description:
- "Device read limit in format C(<number>[<unit>])."
- "Number is a positive integer. Unit can be one of C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- "Omitting the unit defaults to bytes."
type: str
required: yes
version_added: "2.8"
device_read_iops:
description:
- "List of device and read rate (IO per second) from device."
type: list
elements: dict
suboptions:
path:
description:
- Device path in the container.
type: str
required: yes
rate:
description:
- "Device read limit."
- "Must be a positive integer."
type: int
required: yes
version_added: "2.8"
device_write_iops:
description:
- "List of device and write rate (IO per second) to device."
type: list
elements: dict
suboptions:
path:
description:
- Device path in the container.
type: str
required: yes
rate:
description:
- "Device read limit."
- "Must be a positive integer."
type: int
required: yes
version_added: "2.8"
dns_opts:
description:
- List of DNS options.
type: list
elements: str
dns_servers:
description:
- List of custom DNS servers.
type: list
elements: str
dns_search_domains:
description:
- List of custom DNS search domains.
type: list
elements: str
domainname:
description:
- Container domainname.
type: str
version_added: "2.5"
env:
description:
- Dictionary of key,value pairs.
- Values which might be parsed as numbers, booleans or other types by the YAML parser must be quoted (e.g. C("true")) in order to avoid data loss.
type: dict
env_file:
description:
- Path to a file, present on the target, containing environment variables I(FOO=BAR).
- If variable also present in I(env), then the I(env) value will override.
type: path
version_added: "2.2"
entrypoint:
description:
- Command that overwrites the default C(ENTRYPOINT) of the image.
type: list
elements: str
etc_hosts:
description:
- Dict of host-to-IP mappings, where each host name is a key in the dictionary.
Each host name will be added to the container's C(/etc/hosts) file.
type: dict
exposed_ports:
description:
- List of additional container ports which informs Docker that the container
listens on the specified network ports at runtime.
- If the port is already exposed using C(EXPOSE) in a Dockerfile, it does not
need to be exposed again.
type: list
elements: str
aliases:
- exposed
- expose
force_kill:
description:
- Use the kill command when stopping a running container.
type: bool
default: no
aliases:
- forcekill
groups:
description:
- List of additional group names and/or IDs that the container process will run as.
type: list
elements: str
healthcheck:
description:
- Configure a check that is run to determine whether or not containers for this service are "healthy".
- "See the docs for the L(HEALTHCHECK Dockerfile instruction,https://docs.docker.com/engine/reference/builder/#healthcheck)
for details on how healthchecks work."
- "I(interval), I(timeout) and I(start_period) are specified as durations. They accept duration as a string in a format
that look like: C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
type: dict
suboptions:
test:
description:
- Command to run to check health.
- Must be either a string or a list. If it is a list, the first item must be one of C(NONE), C(CMD) or C(CMD-SHELL).
type: raw
interval:
description:
- Time between running the check.
- The default used by the Docker daemon is C(30s).
type: str
timeout:
description:
- Maximum time to allow one check to run.
- The default used by the Docker daemon is C(30s).
type: str
retries:
description:
- Consecutive number of failures needed to report unhealthy.
- The default used by the Docker daemon is C(3).
type: int
start_period:
description:
- Start period for the container to initialize before starting health-retries countdown.
- The default used by the Docker daemon is C(0s).
type: str
version_added: "2.8"
hostname:
description:
- The container's hostname.
type: str
ignore_image:
description:
- When I(state) is C(present) or C(started), the module compares the configuration of an existing
container to requested configuration. The evaluation includes the image version. If the image
version in the registry does not match the container, the container will be recreated. You can
stop this behavior by setting I(ignore_image) to C(True).
- "*Warning:* This option is ignored if C(image: ignore) or C(*: ignore) is specified in the
I(comparisons) option."
type: bool
default: no
version_added: "2.2"
image:
description:
- Repository path and tag used to create the container. If an image is not found or pull is true, the image
will be pulled from the registry. If no tag is included, C(latest) will be used.
- Can also be an image ID. If this is the case, the image is assumed to be available locally.
The I(pull) option is ignored for this case.
type: str
init:
description:
- Run an init inside the container that forwards signals and reaps processes.
- This option requires Docker API >= 1.25.
- If I(container_default_behavior) is set to C(compatibility) (the default value), this
option has a default of C(no).
type: bool
version_added: "2.6"
interactive:
description:
- Keep stdin open after a container is launched, even if not attached.
- If I(container_default_behavior) is set to C(compatibility) (the default value), this
option has a default of C(no).
type: bool
ipc_mode:
description:
- Set the IPC mode for the container.
- Can be one of C(container:<name|id>) to reuse another container's IPC namespace or C(host) to use
the host's IPC namespace within the container.
type: str
keep_volumes:
description:
- Retain volumes associated with a removed container.
type: bool
default: yes
kill_signal:
description:
- Override default signal used to kill a running container.
type: str
kernel_memory:
description:
- "Kernel memory limit in format C(<number>[<unit>]). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte). Minimum is C(4M)."
- Omitting the unit defaults to bytes.
type: str
labels:
description:
- Dictionary of key value pairs.
type: dict
links:
description:
- List of name aliases for linked containers in the format C(container_name:alias).
- Setting this will force container to be restarted.
type: list
elements: str
log_driver:
description:
- Specify the logging driver. Docker uses C(json-file) by default.
- See L(here,https://docs.docker.com/config/containers/logging/configure/) for possible choices.
type: str
log_options:
description:
- Dictionary of options specific to the chosen I(log_driver).
- See U(https://docs.docker.com/engine/admin/logging/overview/) for details.
type: dict
aliases:
- log_opt
mac_address:
description:
- Container MAC address (e.g. 92:d0:c6:0a:29:33).
type: str
memory:
description:
- "Memory limit in format C(<number>[<unit>]). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- Omitting the unit defaults to bytes.
- If I(container_default_behavior) is set to C(compatibility) (the default value), this
option has a default of C("0").
type: str
memory_reservation:
description:
- "Memory soft limit in format C(<number>[<unit>]). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- Omitting the unit defaults to bytes.
type: str
memory_swap:
description:
- "Total memory limit (memory + swap) in format C(<number>[<unit>]).
Number is a positive integer. Unit can be C(B) (byte), C(K) (kibibyte, 1024B),
C(M) (mebibyte), C(G) (gibibyte), C(T) (tebibyte), or C(P) (pebibyte)."
- Omitting the unit defaults to bytes.
type: str
memory_swappiness:
description:
- Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100.
- If not set, the value will remain the same if the container exists and will be inherited
from the host machine if it is (re-)created.
type: int
mounts:
version_added: "2.9"
type: list
elements: dict
description:
- Specification for mounts to be added to the container. More powerful alternative to I(volumes).
suboptions:
target:
description:
- Path inside the container.
type: str
required: true
source:
description:
- Mount source (e.g. a volume name or a host path).
type: str
type:
description:
- The mount type.
- Note that C(npipe) is only supported by Docker for Windows.
type: str
choices:
- bind
- npipe
- tmpfs
- volume
default: volume
read_only:
description:
- Whether the mount should be read-only.
type: bool
consistency:
description:
- The consistency requirement for the mount.
type: str
choices:
- cached
- consistent
- default
- delegated
propagation:
description:
- Propagation mode. Only valid for the C(bind) type.
type: str
choices:
- private
- rprivate
- shared
- rshared
- slave
- rslave
no_copy:
description:
- False if the volume should be populated with the data from the target. Only valid for the C(volume) type.
- The default value is C(false).
type: bool
labels:
description:
- User-defined name and labels for the volume. Only valid for the C(volume) type.
type: dict
volume_driver:
description:
- Specify the volume driver. Only valid for the C(volume) type.
- See L(here,https://docs.docker.com/storage/volumes/#use-a-volume-driver) for details.
type: str
volume_options:
description:
- Dictionary of options specific to the chosen volume_driver. See
L(here,https://docs.docker.com/storage/volumes/#use-a-volume-driver) for details.
type: dict
tmpfs_size:
description:
- "The size for the tmpfs mount in bytes in format <number>[<unit>]."
- "Number is a positive integer. Unit can be one of C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- "Omitting the unit defaults to bytes."
type: str
tmpfs_mode:
description:
- The permission mode for the tmpfs mount.
type: str
name:
description:
- Assign a name to a new container or match an existing container.
- When identifying an existing container, I(name) may be a name or a long or short container ID.
type: str
required: yes
network_mode:
description:
- Connect the container to a network. Choices are C(bridge), C(host), C(none), C(container:<name|id>), C(<network_name>) or C(default).
- "*Note* that from Ansible 2.14 on, if I(networks_cli_compatible) is C(true) and I(networks) contains at least one network,
the default value for I(network_mode) will be the name of the first network in the I(networks) list. You can prevent this
by explicitly specifying a value for I(network_mode), like the default value C(default) which will be used by Docker if
I(network_mode) is not specified."
type: str
userns_mode:
description:
- Set the user namespace mode for the container. Currently, the only valid values are C(host) and the empty string.
type: str
version_added: "2.5"
networks:
description:
- List of networks the container belongs to.
- For examples of the data structure and usage see EXAMPLES below.
- To remove a container from one or more networks, use the I(purge_networks) option.
- Note that as opposed to C(docker run ...), M(docker_container) does not remove the default
network if I(networks) is specified. You need to explicitly use I(purge_networks) to enforce
the removal of the default network (and all other networks not explicitly mentioned in I(networks)).
Alternatively, use the I(networks_cli_compatible) option, which will be enabled by default from Ansible 2.12 on.
type: list
elements: dict
suboptions:
name:
description:
- The network's name.
type: str
required: yes
ipv4_address:
description:
- The container's IPv4 address in this network.
type: str
ipv6_address:
description:
- The container's IPv6 address in this network.
type: str
links:
description:
- A list of containers to link to.
type: list
elements: str
aliases:
description:
- List of aliases for this container in this network. These names
can be used in the network to reach this container.
type: list
elements: str
version_added: "2.2"
networks_cli_compatible:
description:
- "When networks are provided to the module via the I(networks) option, the module
behaves differently than C(docker run --network): C(docker run --network other)
will create a container with network C(other) attached, but the default network
not attached. This module with I(networks: {name: other}) will create a container
with both C(default) and C(other) attached. If I(purge_networks) is set to C(yes),
the C(default) network will be removed afterwards."
- "If I(networks_cli_compatible) is set to C(yes), this module will behave as
C(docker run --network) and will *not* add the default network if I(networks) is
specified. If I(networks) is not specified, the default network will be attached."
- "*Note* that docker CLI also sets I(network_mode) to the name of the first network
added if C(--network) is specified. For more compatibility with docker CLI, you
explicitly have to set I(network_mode) to the name of the first network you're
adding. This behavior will change for Ansible 2.14: then I(network_mode) will
automatically be set to the first network name in I(networks) if I(network_mode)
is not specified, I(networks) has at least one entry and I(networks_cli_compatible)
is C(true)."
- The current default is C(no). A new default of C(yes) will be set in Ansible 2.12.
type: bool
version_added: "2.8"
oom_killer:
description:
- Whether or not to disable OOM Killer for the container.
type: bool
oom_score_adj:
description:
- An integer value containing the score given to the container in order to tune
OOM killer preferences.
type: int
version_added: "2.2"
output_logs:
description:
- If set to true, output of the container command will be printed.
- Only effective when I(log_driver) is set to C(json-file) or C(journald).
type: bool
default: no
version_added: "2.7"
paused:
description:
- Use with the started state to pause running processes inside the container.
- If I(container_default_behavior) is set to C(compatibility) (the default value), this
option has a default of C(no).
type: bool
pid_mode:
description:
- Set the PID namespace mode for the container.
- Note that Docker SDK for Python < 2.0 only supports C(host). Newer versions of the
Docker SDK for Python (docker) allow all values supported by the Docker daemon.
type: str
pids_limit:
description:
- Set PIDs limit for the container. It accepts an integer value.
- Set C(-1) for unlimited PIDs.
type: int
version_added: "2.8"
privileged:
description:
- Give extended privileges to the container.
- If I(container_default_behavior) is set to C(compatibility) (the default value), this
option has a default of C(no).
type: bool
published_ports:
description:
- List of ports to publish from the container to the host.
- "Use docker CLI syntax: C(8000), C(9000:8000), or C(0.0.0.0:9000:8000), where 8000 is a
container port, 9000 is a host port, and 0.0.0.0 is a host interface."
- Port ranges can be used for source and destination ports. If two ranges with
different lengths are specified, the shorter range will be used.
- "Bind addresses must be either IPv4 or IPv6 addresses. Hostnames are *not* allowed. This
is different from the C(docker) command line utility. Use the L(dig lookup,../lookup/dig.html)
to resolve hostnames."
- A value of C(all) will publish all exposed container ports to random host ports, ignoring
any other mappings.
- If I(networks) parameter is provided, will inspect each network to see if there exists
a bridge network with optional parameter C(com.docker.network.bridge.host_binding_ipv4).
If such a network is found, then published ports where no host IP address is specified
will be bound to the host IP pointed to by C(com.docker.network.bridge.host_binding_ipv4).
Note that the first bridge network with a C(com.docker.network.bridge.host_binding_ipv4)
value encountered in the list of I(networks) is the one that will be used.
type: list
elements: str
aliases:
- ports
pull:
description:
- If true, always pull the latest version of an image. Otherwise, will only pull an image
when missing.
- "*Note:* images are only pulled when specified by name. If the image is specified
as an image ID (hash), it cannot be pulled."
type: bool
default: no
purge_networks:
description:
- Remove the container from ALL networks not included in I(networks) parameter.
- Any default networks such as C(bridge), if not found in I(networks), will be removed as well.
type: bool
default: no
version_added: "2.2"
read_only:
description:
- Mount the container's root file system as read-only.
- If I(container_default_behavior) is set to C(compatibility) (the default value), this
option has a default of C(no).
type: bool
recreate:
description:
- Use with present and started states to force the re-creation of an existing container.
type: bool
default: no
removal_wait_timeout:
description:
- When removing an existing container, the docker daemon API call returns after the container
is scheduled for removal. Removal usually is very fast, but it can happen that during high I/O
load, removal can take longer. By default, the module will wait until the container has been
removed before trying to (re-)create it, however long this takes.
- By setting this option, the module will wait at most this many seconds for the container to be
removed. If the container is still in the removal phase after this many seconds, the module will
fail.
type: float
version_added: "2.10"
restart:
description:
- Use with started state to force a matching container to be stopped and restarted.
type: bool
default: no
restart_policy:
description:
- Container restart policy.
- Place quotes around C(no) option.
type: str
choices:
- 'no'
- 'on-failure'
- 'always'
- 'unless-stopped'
restart_retries:
description:
- Use with restart policy to control maximum number of restart attempts.
type: int
runtime:
description:
- Runtime to use for the container.
type: str
version_added: "2.8"
shm_size:
description:
- "Size of C(/dev/shm) in format C(<number>[<unit>]). Number is positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- Omitting the unit defaults to bytes. If you omit the size entirely, Docker daemon uses C(64M).
type: str
security_opts:
description:
- List of security options in the form of C("label:user:User").
type: list
elements: str
state:
description:
- 'C(absent) - A container matching the specified name will be stopped and removed. Use I(force_kill) to kill the container
rather than stopping it. Use I(keep_volumes) to retain volumes associated with the removed container.'
- 'C(present) - Asserts the existence of a container matching the name and any provided configuration parameters. If no
container matches the name, a container will be created. If a container matches the name but the provided configuration
does not match, the container will be updated, if it can be. If it cannot be updated, it will be removed and re-created
with the requested config.'
- 'C(started) - Asserts that the container is first C(present), and then if the container is not running moves it to a running
state. Use I(restart) to force a matching container to be stopped and restarted.'
- 'C(stopped) - Asserts that the container is first C(present), and then if the container is running moves it to a stopped
state.'
- To control what will be taken into account when comparing configuration, see the I(comparisons) option. To avoid that the
image version will be taken into account, you can also use the I(ignore_image) option.
- Use the I(recreate) option to always force re-creation of a matching container, even if it is running.
- If the container should be killed instead of stopped in case it needs to be stopped for recreation, or because I(state) is
C(stopped), please use the I(force_kill) option. Use I(keep_volumes) to retain volumes associated with a removed container.
type: str
default: started
choices:
- absent
- present
- stopped
- started
stop_signal:
description:
- Override default signal used to stop the container.
type: str
stop_timeout:
description:
- Number of seconds to wait for the container to stop before sending C(SIGKILL).
When the container is created by this module, its C(StopTimeout) configuration
will be set to this value.
- When the container is stopped, will be used as a timeout for stopping the
container. In case the container has a custom C(StopTimeout) configuration,
the behavior depends on the version of the docker daemon. New versions of
the docker daemon will always use the container's configured C(StopTimeout)
value if it has been configured.
type: int
trust_image_content:
description:
- If C(yes), skip image verification.
- The option has never been used by the module. It will be removed in Ansible 2.14.
type: bool
default: no
tmpfs:
description:
- Mount a tmpfs directory.
type: list
elements: str
version_added: "2.4"
tty:
description:
- Allocate a pseudo-TTY.
- If I(container_default_behavior) is set to C(compatibility) (the default value), this
option has a default of C(no).
type: bool
ulimits:
description:
- "List of ulimit options. A ulimit is specified as C(nofile:262144:262144)."
type: list
elements: str
sysctls:
description:
- Dictionary of key,value pairs.
type: dict
version_added: "2.4"
user:
description:
- Sets the username or UID used and optionally the groupname or GID for the specified command.
- "Can be of the forms C(user), C(user:group), C(uid), C(uid:gid), C(user:gid) or C(uid:group)."
type: str
uts:
description:
- Set the UTS namespace mode for the container.
type: str
volumes:
description:
- List of volumes to mount within the container.
- "Use docker CLI-style syntax: C(/host:/container[:mode])"
- "Mount modes can be a comma-separated list of various modes such as C(ro), C(rw), C(consistent),
C(delegated), C(cached), C(rprivate), C(private), C(rshared), C(shared), C(rslave), C(slave), and
C(nocopy). Note that the docker daemon might not support all modes and combinations of such modes."
- SELinux hosts can additionally use C(z) or C(Z) to use a shared or private label for the volume.
- "Note that Ansible 2.7 and earlier only supported one mode, which had to be one of C(ro), C(rw),
C(z), and C(Z)."
type: list
elements: str
volume_driver:
description:
- The container volume driver.
type: str
volumes_from:
description:
- List of container names or IDs to get volumes from.
type: list
elements: str
working_dir:
description:
- Path to the working directory.
type: str
version_added: "2.4"
extends_documentation_fragment:
- docker
- docker.docker_py_1_documentation
author:
- "Cove Schneider (@cove)"
- "Joshua Conner (@joshuaconner)"
- "Pavel Antonov (@softzilla)"
- "Thomas Steinbach (@ThomasSteinbach)"
- "Philippe Jandot (@zfil)"
- "Daan Oosterveld (@dusdanig)"
- "Chris Houseknecht (@chouseknecht)"
- "Kassian Sun (@kassiansun)"
- "Felix Fontein (@felixfontein)"
requirements:
- "L(Docker SDK for Python,https://docker-py.readthedocs.io/en/stable/) >= 1.8.0 (use L(docker-py,https://pypi.org/project/docker-py/) for Python 2.6)"
- "Docker API >= 1.20"
'''
EXAMPLES = '''
- name: Create a data container
docker_container:
name: mydata
image: busybox
volumes:
- /data
- name: Re-create a redis container
docker_container:
name: myredis
image: redis
command: redis-server --appendonly yes
state: present
recreate: yes
exposed_ports:
- 6379
volumes_from:
- mydata
- name: Restart a container
docker_container:
name: myapplication
image: someuser/appimage
state: started
restart: yes
links:
- "myredis:aliasedredis"
devices:
- "/dev/sda:/dev/xvda:rwm"
ports:
- "8080:9000"
- "127.0.0.1:8081:9001/udp"
env:
SECRET_KEY: "ssssh"
# Values which might be parsed as numbers, booleans or other types by the YAML parser need to be quoted
BOOLEAN_KEY: "yes"
- name: Container present
docker_container:
name: mycontainer
state: present
image: ubuntu:14.04
command: sleep infinity
- name: Stop a container
docker_container:
name: mycontainer
state: stopped
- name: Start 4 load-balanced containers
docker_container:
name: "container{{ item }}"
recreate: yes
image: someuser/anotherappimage
command: sleep 1d
with_sequence: count=4
- name: Remove container
docker_container:
name: ohno
state: absent
- name: Syslogging output
docker_container:
name: myservice
image: busybox
log_driver: syslog
log_options:
syslog-address: tcp://my-syslog-server:514
syslog-facility: daemon
# NOTE: in Docker 1.13+ the "syslog-tag" option was renamed to "tag"; for
# older docker installs, use "syslog-tag" instead
tag: myservice
- name: Create db container and connect to network
docker_container:
name: db_test
image: "postgres:latest"
networks:
- name: "{{ docker_network_name }}"
- name: Start container, connect to network and link
docker_container:
name: sleeper
image: ubuntu:14.04
networks:
- name: TestingNet
ipv4_address: "172.1.1.100"
aliases:
- sleepyzz
links:
- db_test:db
- name: TestingNet2
- name: Start a container with a command
docker_container:
name: sleepy
image: ubuntu:14.04
command: ["sleep", "infinity"]
- name: Add container to networks
docker_container:
name: sleepy
networks:
- name: TestingNet
ipv4_address: 172.1.1.18
links:
- sleeper
- name: TestingNet2
ipv4_address: 172.1.10.20
- name: Update network with aliases
docker_container:
name: sleepy
networks:
- name: TestingNet
aliases:
- sleepyz
- zzzz
- name: Remove container from one network
docker_container:
name: sleepy
networks:
- name: TestingNet2
purge_networks: yes
- name: Remove container from all networks
docker_container:
name: sleepy
purge_networks: yes
- name: Start a container and use an env file
docker_container:
name: agent
image: jenkinsci/ssh-slave
env_file: /var/tmp/jenkins/agent.env
- name: Create a container with limited capabilities
docker_container:
name: sleepy
image: ubuntu:16.04
command: sleep infinity
capabilities:
- sys_time
cap_drop:
- all
- name: Finer container restart/update control
docker_container:
name: test
image: ubuntu:18.04
env:
arg1: "true"
arg2: "whatever"
volumes:
- /tmp:/tmp
comparisons:
image: ignore # don't restart containers with older versions of the image
env: strict # we want precisely this environment
volumes: allow_more_present # if there are more volumes, that's ok, as long as `/tmp:/tmp` is there
- name: Finer container restart/update control II
docker_container:
name: test
image: ubuntu:18.04
env:
arg1: "true"
arg2: "whatever"
comparisons:
'*': ignore # by default, ignore *all* options (including image)
env: strict # except for environment variables; there, we want to be strict
- name: Start container with healthstatus
docker_container:
name: nginx-proxy
image: nginx:1.13
state: started
healthcheck:
# Check if nginx server is healthy by curl'ing the server.
# If this fails or timeouts, the healthcheck fails.
test: ["CMD", "curl", "--fail", "http://nginx.host.com"]
interval: 1m30s
timeout: 10s
retries: 3
start_period: 30s
- name: Remove healthcheck from container
docker_container:
name: nginx-proxy
image: nginx:1.13
state: started
healthcheck:
# The "NONE" check needs to be specified
test: ["NONE"]
- name: Start container with block device read limit
docker_container:
name: test
image: ubuntu:18.04
state: started
device_read_bps:
# Limit read rate for /dev/sda to 20 mebibytes per second
- path: /dev/sda
rate: 20M
device_read_iops:
# Limit read rate for /dev/sdb to 300 IO per second
- path: /dev/sdb
rate: 300
'''
RETURN = '''
container:
description:
- Facts representing the current state of the container. Matches the docker inspection output.
- Note that facts are part of the registered vars since Ansible 2.8. For compatibility reasons, the facts
are also accessible directly as C(docker_container). Note that the returned fact will be removed in Ansible 2.12.
- Before 2.3 this was C(ansible_docker_container) but was renamed in 2.3 to C(docker_container) due to
conflicts with the connection plugin.
- Empty if I(state) is C(absent)
- If I(detach) is C(false), will include C(Output) attribute containing any output from the container run.
returned: always
type: dict
sample: '{
"AppArmorProfile": "",
"Args": [],
"Config": {
"AttachStderr": false,
"AttachStdin": false,
"AttachStdout": false,
"Cmd": [
"/usr/bin/supervisord"
],
"Domainname": "",
"Entrypoint": null,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"ExposedPorts": {
"443/tcp": {},
"80/tcp": {}
},
"Hostname": "8e47bf643eb9",
"Image": "lnmp_nginx:v1",
"Labels": {},
"OnBuild": null,
"OpenStdin": false,
"StdinOnce": false,
"Tty": false,
"User": "",
"Volumes": {
"/tmp/lnmp/nginx-sites/logs/": {}
},
...
}'
'''
import os
import re
import shlex
import traceback
from distutils.version import LooseVersion
from time import sleep
from ansible.module_utils.common.text.formatters import human_to_bytes
from ansible.module_utils.docker.common import (
AnsibleDockerClient,
DifferenceTracker,
DockerBaseClass,
compare_generic,
is_image_name_id,
sanitize_result,
clean_dict_booleans_for_docker_api,
omit_none_from_dict,
parse_healthcheck,
DOCKER_COMMON_ARGS,
RequestException,
)
from ansible.module_utils.six import string_types
try:
from docker import utils
from ansible.module_utils.docker.common import docker_version
if LooseVersion(docker_version) >= LooseVersion('1.10.0'):
from docker.types import Ulimit, LogConfig
from docker import types as docker_types
else:
from docker.utils.types import Ulimit, LogConfig
from docker.errors import DockerException, APIError, NotFound
except Exception:
# missing Docker SDK for Python handled in ansible.module_utils.docker.common
pass
REQUIRES_CONVERSION_TO_BYTES = [
'kernel_memory',
'memory',
'memory_reservation',
'memory_swap',
'shm_size'
]
def is_volume_permissions(mode):
for part in mode.split(','):
if part not in ('rw', 'ro', 'z', 'Z', 'consistent', 'delegated', 'cached', 'rprivate', 'private', 'rshared', 'shared', 'rslave', 'slave', 'nocopy'):
return False
return True
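# Illustrative usage: is_volume_permissions('ro,z') -> True, while an
# unsupported mode such as is_volume_permissions('rwx') -> False.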
def parse_port_range(range_or_port, client):
'''
Parses a string containing either a single port or a range of ports.
Returns a list of integers, one for each port in the range.
'''
if '-' in range_or_port:
try:
start, end = [int(port) for port in range_or_port.split('-')]
except Exception:
client.fail('Invalid port range: "{0}"'.format(range_or_port))
if end < start:
client.fail('Invalid port range: "{0}"'.format(range_or_port))
return list(range(start, end + 1))
else:
try:
return [int(range_or_port)]
except Exception:
client.fail('Invalid port: "{0}"'.format(range_or_port))
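# Illustrative usage: parse_port_range('8000-8002', client) -> [8000, 8001, 8002],
# and parse_port_range('8080', client) -> [8080].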
def split_colon_ipv6(text, client):
'''
Split string by ':', while keeping IPv6 addresses in square brackets in one component.
'''
if '[' not in text:
return text.split(':')
start = 0
result = []
while start < len(text):
i = text.find('[', start)
if i < 0:
result.extend(text[start:].split(':'))
break
j = text.find(']', i)
if j < 0:
client.fail('Cannot find closing "]" in input "{0}" for opening "[" at index {1}!'.format(text, i + 1))
result.extend(text[start:i].split(':'))
k = text.find(':', j)
if k < 0:
result[-1] += text[i:]
start = len(text)
else:
result[-1] += text[i:k]
if k == len(text):
result.append('')
break
start = k + 1
return result
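# Illustrative usage: split_colon_ipv6('[::1]:8080:80', client) returns
# ['[::1]', '8080', '80'], keeping the bracketed IPv6 address intact, while
# split_colon_ipv6('0.0.0.0:9000:8000', client) returns ['0.0.0.0', '9000', '8000'].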
class TaskParameters(DockerBaseClass):
'''
Access and parse module parameters
'''
def __init__(self, client):
super(TaskParameters, self).__init__()
self.client = client
self.auto_remove = None
self.blkio_weight = None
self.capabilities = None
self.cap_drop = None
self.cleanup = None
self.command = None
self.cpu_period = None
self.cpu_quota = None
self.cpus = None
self.cpuset_cpus = None
self.cpuset_mems = None
self.cpu_shares = None
self.detach = None
self.debug = None
self.devices = None
self.device_read_bps = None
self.device_write_bps = None
self.device_read_iops = None
self.device_write_iops = None
self.dns_servers = None
self.dns_opts = None
self.dns_search_domains = None
self.domainname = None
self.env = None
self.env_file = None
self.entrypoint = None
self.etc_hosts = None
self.exposed_ports = None
self.force_kill = None
self.groups = None
self.healthcheck = None
self.hostname = None
self.ignore_image = None
self.image = None
self.init = None
self.interactive = None
self.ipc_mode = None
self.keep_volumes = None
self.kernel_memory = None
self.kill_signal = None
self.labels = None
self.links = None
self.log_driver = None
self.output_logs = None
self.log_options = None
self.mac_address = None
self.memory = None
self.memory_reservation = None
self.memory_swap = None
self.memory_swappiness = None
self.mounts = None
self.name = None
self.network_mode = None
self.userns_mode = None
self.networks = None
self.networks_cli_compatible = None
self.oom_killer = None
self.oom_score_adj = None
self.paused = None
self.pid_mode = None
self.pids_limit = None
self.privileged = None
self.purge_networks = None
self.pull = None
self.read_only = None
self.recreate = None
self.removal_wait_timeout = None
self.restart = None
self.restart_retries = None
self.restart_policy = None
self.runtime = None
self.shm_size = None
self.security_opts = None
self.state = None
self.stop_signal = None
self.stop_timeout = None
self.tmpfs = None
self.trust_image_content = None
self.tty = None
self.user = None
self.uts = None
self.volumes = None
self.volume_binds = dict()
self.volumes_from = None
self.volume_driver = None
self.working_dir = None
for key, value in client.module.params.items():
setattr(self, key, value)
self.comparisons = client.comparisons
# If state is 'absent', parameters do not have to be parsed or interpreted.
# Only the container's name is needed.
if self.state == 'absent':
return
if self.cpus is not None:
self.cpus = int(round(self.cpus * 1E9))
if self.groups:
# In case integers are passed as groups, we need to convert them to
# strings as docker internally treats them as strings.
self.groups = [str(g) for g in self.groups]
for param_name in REQUIRES_CONVERSION_TO_BYTES:
if client.module.params.get(param_name):
try:
setattr(self, param_name, human_to_bytes(client.module.params.get(param_name)))
except ValueError as exc:
self.fail("Failed to convert %s to bytes: %s" % (param_name, exc))
self.publish_all_ports = False
self.published_ports = self._parse_publish_ports()
if self.published_ports in ('all', 'ALL'):
self.publish_all_ports = True
self.published_ports = None
self.ports = self._parse_exposed_ports(self.published_ports)
self.log("expose ports:")
self.log(self.ports, pretty_print=True)
self.links = self._parse_links(self.links)
if self.volumes:
self.volumes = self._expand_host_paths()
self.tmpfs = self._parse_tmpfs()
self.env = self._get_environment()
self.ulimits = self._parse_ulimits()
self.sysctls = self._parse_sysctls()
self.log_config = self._parse_log_config()
try:
self.healthcheck, self.disable_healthcheck = parse_healthcheck(self.healthcheck)
except ValueError as e:
self.fail(str(e))
self.exp_links = None
self.volume_binds = self._get_volume_binds(self.volumes)
self.pid_mode = self._replace_container_names(self.pid_mode)
self.ipc_mode = self._replace_container_names(self.ipc_mode)
self.network_mode = self._replace_container_names(self.network_mode)
self.log("volumes:")
self.log(self.volumes, pretty_print=True)
self.log("volume binds:")
self.log(self.volume_binds, pretty_print=True)
if self.networks:
for network in self.networks:
network['id'] = self._get_network_id(network['name'])
if not network['id']:
self.fail("Parameter error: network named %s could not be found. Does it exist?" % network['name'])
if network.get('links'):
network['links'] = self._parse_links(network['links'])
if self.mac_address:
# Ensure the MAC address uses colons instead of hyphens for later comparison
self.mac_address = self.mac_address.replace('-', ':')
if self.entrypoint:
# convert from list to str.
self.entrypoint = ' '.join([str(x) for x in self.entrypoint])
if self.command:
# convert from list to str
if isinstance(self.command, list):
self.command = ' '.join([str(x) for x in self.command])
self.mounts_opt, self.expected_mounts = self._process_mounts()
self._check_mount_target_collisions()
for param_name in ["device_read_bps", "device_write_bps"]:
if client.module.params.get(param_name):
self._process_rate_bps(option=param_name)
for param_name in ["device_read_iops", "device_write_iops"]:
if client.module.params.get(param_name):
self._process_rate_iops(option=param_name)
def fail(self, msg):
self.client.fail(msg)
@property
def update_parameters(self):
'''
Returns parameters used to update a container
'''
update_parameters = dict(
blkio_weight='blkio_weight',
cpu_period='cpu_period',
cpu_quota='cpu_quota',
cpu_shares='cpu_shares',
cpuset_cpus='cpuset_cpus',
cpuset_mems='cpuset_mems',
mem_limit='memory',
mem_reservation='memory_reservation',
memswap_limit='memory_swap',
kernel_memory='kernel_memory',
)
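# Note: restart_policy is not part of the update_parameters mapping above,
# so (in this version of the module) a changed restart policy cannot be
# applied through the container update API and instead forces the container
# to be recreated - the behavior described in the issue.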
result = dict()
for key, value in update_parameters.items():
if getattr(self, value, None) is not None:
if self.client.option_minimal_versions[value]['supported']:
result[key] = getattr(self, value)
return result
@property
def create_parameters(self):
'''
Returns parameters used to create a container
'''
create_params = dict(
command='command',
domainname='domainname',
hostname='hostname',
user='user',
detach='detach',
stdin_open='interactive',
tty='tty',
ports='ports',
environment='env',
name='name',
entrypoint='entrypoint',
mac_address='mac_address',
labels='labels',
stop_signal='stop_signal',
working_dir='working_dir',
stop_timeout='stop_timeout',
healthcheck='healthcheck',
)
if self.client.docker_py_version < LooseVersion('3.0'):
# cpu_shares and volume_driver moved to create_host_config in > 3
create_params['cpu_shares'] = 'cpu_shares'
create_params['volume_driver'] = 'volume_driver'
result = dict(
host_config=self._host_config(),
volumes=self._get_mounts(),
)
for key, value in create_params.items():
if getattr(self, value, None) is not None:
if self.client.option_minimal_versions[value]['supported']:
result[key] = getattr(self, value)
if self.networks_cli_compatible and self.networks:
network = self.networks[0]
params = dict()
for para in ('ipv4_address', 'ipv6_address', 'links', 'aliases'):
if network.get(para):
params[para] = network[para]
network_config = dict()
network_config[network['name']] = self.client.create_endpoint_config(**params)
result['networking_config'] = self.client.create_networking_config(network_config)
return result
def _expand_host_paths(self):
new_vols = []
for vol in self.volumes:
if ':' in vol:
if len(vol.split(':')) == 3:
host, container, mode = vol.split(':')
if not is_volume_permissions(mode):
self.fail('Found invalid volumes mode: {0}'.format(mode))
if re.match(r'[.~]', host):
host = os.path.abspath(os.path.expanduser(host))
new_vols.append("%s:%s:%s" % (host, container, mode))
continue
elif len(vol.split(':')) == 2:
parts = vol.split(':')
if not is_volume_permissions(parts[1]) and re.match(r'[.~]', parts[0]):
host = os.path.abspath(os.path.expanduser(parts[0]))
new_vols.append("%s:%s:rw" % (host, parts[1]))
continue
new_vols.append(vol)
return new_vols
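# Illustrative expansion: a playbook entry './data:/data' becomes
# '<abspath of ./data>:/data:rw', while absolute paths such as '/tmp:/tmp'
# are passed through unchanged.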
def _get_mounts(self):
'''
Return a list of container mounts.
:return:
'''
result = []
if self.volumes:
for vol in self.volumes:
if ':' in vol:
if len(vol.split(':')) == 3:
dummy, container, dummy = vol.split(':')
result.append(container)
continue
if len(vol.split(':')) == 2:
parts = vol.split(':')
if not is_volume_permissions(parts[1]):
result.append(parts[1])
continue
result.append(vol)
self.log("mounts:")
self.log(result, pretty_print=True)
return result
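# Illustrative result: volumes=['/host:/cont:ro', 'named_vol:/data'] yields
# ['/cont', '/data'] - only the container-side paths are kept.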
def _host_config(self):
'''
Returns parameters used to create a HostConfig object
'''
host_config_params = dict(
port_bindings='published_ports',
publish_all_ports='publish_all_ports',
links='links',
privileged='privileged',
dns='dns_servers',
dns_opt='dns_opts',
dns_search='dns_search_domains',
binds='volume_binds',
volumes_from='volumes_from',
network_mode='network_mode',
userns_mode='userns_mode',
cap_add='capabilities',
cap_drop='cap_drop',
extra_hosts='etc_hosts',
read_only='read_only',
ipc_mode='ipc_mode',
security_opt='security_opts',
ulimits='ulimits',
sysctls='sysctls',
log_config='log_config',
mem_limit='memory',
memswap_limit='memory_swap',
mem_swappiness='memory_swappiness',
oom_score_adj='oom_score_adj',
oom_kill_disable='oom_killer',
shm_size='shm_size',
group_add='groups',
devices='devices',
pid_mode='pid_mode',
tmpfs='tmpfs',
init='init',
uts_mode='uts',
runtime='runtime',
auto_remove='auto_remove',
device_read_bps='device_read_bps',
device_write_bps='device_write_bps',
device_read_iops='device_read_iops',
device_write_iops='device_write_iops',
pids_limit='pids_limit',
mounts='mounts',
nano_cpus='cpus',
)
if self.client.docker_py_version >= LooseVersion('1.9') and self.client.docker_api_version >= LooseVersion('1.22'):
# blkio_weight can always be updated, but can only be set on creation
# when Docker SDK for Python and Docker API are new enough
host_config_params['blkio_weight'] = 'blkio_weight'
if self.client.docker_py_version >= LooseVersion('3.0'):
# cpu_shares and volume_driver moved to create_host_config in > 3
host_config_params['cpu_shares'] = 'cpu_shares'
host_config_params['volume_driver'] = 'volume_driver'
params = dict()
for key, value in host_config_params.items():
if getattr(self, value, None) is not None:
if self.client.option_minimal_versions[value]['supported']:
params[key] = getattr(self, value)
if self.restart_policy:
params['restart_policy'] = dict(Name=self.restart_policy,
MaximumRetryCount=self.restart_retries)
if 'mounts' in params:
params['mounts'] = self.mounts_opt
return self.client.create_host_config(**params)
@property
def default_host_ip(self):
ip = '0.0.0.0'
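# Custom bridge networks may define a default bind address via the
# com.docker.network.bridge.host_binding_ipv4 driver option; if the first
# such network sets it, use that instead of 0.0.0.0.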
if not self.networks:
return ip
for net in self.networks:
if net.get('name'):
try:
network = self.client.inspect_network(net['name'])
if network.get('Driver') == 'bridge' and \
network.get('Options', {}).get('com.docker.network.bridge.host_binding_ipv4'):
ip = network['Options']['com.docker.network.bridge.host_binding_ipv4']
break
except NotFound as nfe:
self.client.fail(
"Cannot inspect the network '{0}' to determine the default IP: {1}".format(net['name'], nfe),
exception=traceback.format_exc()
)
return ip
def _parse_publish_ports(self):
'''
Parse ports from docker CLI syntax
'''
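# Accepted forms mirror `docker run -p`, for example:
#   "8080"                  - container port 8080 on the default host IP
#   "8080:80"               - host port 8080 to container port 80
#   "127.0.0.1:8080:80/udp" - explicit bind address and protocol
#   "[::1]:8080:80"         - IPv6 bind addresses must be bracketed
#   "8080-8082:80-82"       - port ranges are expanded pairwise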
if self.published_ports is None:
return None
if 'all' in self.published_ports:
return 'all'
default_ip = self.default_host_ip
binds = {}
for port in self.published_ports:
parts = split_colon_ipv6(str(port), self.client)
container_port = parts[-1]
protocol = ''
if '/' in container_port:
container_port, protocol = parts[-1].split('/')
container_ports = parse_port_range(container_port, self.client)
p_len = len(parts)
if p_len == 1:
port_binds = len(container_ports) * [(default_ip,)]
elif p_len == 2:
port_binds = [(default_ip, port) for port in parse_port_range(parts[0], self.client)]
elif p_len == 3:
# We only allow IPv4 and IPv6 addresses for the bind address
ipaddr = parts[0]
if not re.match(r'^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$', parts[0]) and not re.match(r'^\[[0-9a-fA-F:]+\]$', ipaddr):
self.fail(('Bind addresses for published ports must be IPv4 or IPv6 addresses, not hostnames. '
'Use the dig lookup to resolve hostnames. (Found hostname: {0})').format(ipaddr))
if re.match(r'^\[[0-9a-fA-F:]+\]$', ipaddr):
ipaddr = ipaddr[1:-1]
if parts[1]:
port_binds = [(ipaddr, port) for port in parse_port_range(parts[1], self.client)]
else:
port_binds = len(container_ports) * [(ipaddr,)]
for bind, container_port in zip(port_binds, container_ports):
idx = '{0}/{1}'.format(container_port, protocol) if protocol else container_port
if idx in binds:
old_bind = binds[idx]
if isinstance(old_bind, list):
old_bind.append(bind)
else:
binds[idx] = [old_bind, bind]
else:
binds[idx] = bind
return binds
def _get_volume_binds(self, volumes):
'''
Extract host bindings, if any, from list of volume mapping strings.
:return: dictionary of bind mappings
'''
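# For example, "/data:/mnt/data:ro" yields {'/data': {'bind': '/mnt/data', 'mode': 'ro'}};
# container-only specs such as "/mnt/data" produce no bind entry.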
result = dict()
if volumes:
for vol in volumes:
host = None
if ':' in vol:
parts = vol.split(':')
if len(parts) == 3:
host, container, mode = parts
if not is_volume_permissions(mode):
self.fail('Found invalid volumes mode: {0}'.format(mode))
elif len(parts) == 2:
if not is_volume_permissions(parts[1]):
host, container, mode = (vol.split(':') + ['rw'])
if host is not None:
result[host] = dict(
bind=container,
mode=mode
)
return result
def _parse_exposed_ports(self, published_ports):
'''
Parse exposed ports from docker CLI-style ports syntax.
'''
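# Entries may be plain ports ("9000"), carry a protocol ("53/udp"), or be
# ranges ("9000-9010"); any published port is implicitly exposed as well.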
exposed = []
if self.exposed_ports:
for port in self.exposed_ports:
port = str(port).strip()
protocol = 'tcp'
match = re.search(r'(/.+$)', port)
if match:
protocol = match.group(1).replace('/', '')
port = re.sub(r'/.+$', '', port)
exposed.append((port, protocol))
if published_ports:
# Any published port should also be exposed
for publish_port in published_ports:
match = False
if isinstance(publish_port, string_types) and '/' in publish_port:
port, protocol = publish_port.split('/')
port = int(port)
else:
protocol = 'tcp'
port = int(publish_port)
for exposed_port in exposed:
if exposed_port[1] != protocol:
continue
if isinstance(exposed_port[0], string_types) and '-' in exposed_port[0]:
start_port, end_port = exposed_port[0].split('-')
if int(start_port) <= port <= int(end_port):
match = True
elif exposed_port[0] == port:
match = True
if not match:
exposed.append((port, protocol))
return exposed
@staticmethod
def _parse_links(links):
'''
Turn links into a list of (name, alias) tuples
'''
if links is None:
return None
result = []
for link in links:
parsed_link = link.split(':', 1)
if len(parsed_link) == 2:
result.append((parsed_link[0], parsed_link[1]))
else:
result.append((parsed_link[0], parsed_link[0]))
return result
def _parse_ulimits(self):
'''
Turn ulimits into an array of Ulimit objects
'''
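# e.g. "nofile:1024:2048" -> Ulimit(name='nofile', soft=1024, hard=2048);
# with a single value ("nofile:1024") both soft and hard are set to it.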
if self.ulimits is None:
return None
results = []
for limit in self.ulimits:
limits = dict()
pieces = limit.split(':')
if len(pieces) >= 2:
limits['name'] = pieces[0]
limits['soft'] = int(pieces[1])
limits['hard'] = int(pieces[1])
if len(pieces) == 3:
limits['hard'] = int(pieces[2])
try:
results.append(Ulimit(**limits))
except ValueError as exc:
self.fail("Error parsing ulimits value %s - %s" % (limit, exc))
return results
def _parse_sysctls(self):
'''
Return the sysctls dict unchanged; the Docker API accepts a plain mapping
'''
return self.sysctls
def _parse_log_config(self):
'''
Create a LogConfig object
'''
if self.log_driver is None:
return None
options = dict(
Type=self.log_driver,
Config=dict()
)
if self.log_options is not None:
options['Config'] = dict()
for k, v in self.log_options.items():
if not isinstance(v, string_types):
self.client.module.warn(
"Non-string value found for log_options option '%s'. The value is automatically converted to '%s'. "
"If this is not correct, or you want to avoid such warnings, please quote the value." % (k, str(v))
)
v = str(v)
self.log_options[k] = v
options['Config'][k] = v
try:
return LogConfig(**options)
except ValueError as exc:
self.fail('Error parsing logging options - %s' % (exc))
def _parse_tmpfs(self):
'''
Turn the tmpfs list into a dict mapping mount points to their options
'''
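# e.g. "/run:rw,size=64m" -> {'/run': 'rw,size=64m'}; "/tmp" -> {'/tmp': ''}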
result = dict()
if self.tmpfs is None:
return result
for tmpfs_spec in self.tmpfs:
split_spec = tmpfs_spec.split(":", 1)
if len(split_spec) > 1:
result[split_spec[0]] = split_spec[1]
else:
result[split_spec[0]] = ""
return result
def _get_environment(self):
"""
If environment file is combined with explicit environment variables, the explicit environment variables
take precedence.
"""
final_env = {}
if self.env_file:
parsed_env_file = utils.parse_env_file(self.env_file)
for name, value in parsed_env_file.items():
final_env[name] = str(value)
if self.env:
for name, value in self.env.items():
if not isinstance(value, string_types):
self.fail("Non-string value found for env option. Ambiguous env options must be "
"wrapped in quotes to avoid them being interpreted. Key: %s" % (name, ))
final_env[name] = str(value)
return final_env
def _get_network_id(self, network_name):
network_id = None
try:
for network in self.client.networks(names=[network_name]):
if network['Name'] == network_name:
network_id = network['Id']
break
except Exception as exc:
self.fail("Error getting network id for %s - %s" % (network_name, str(exc)))
return network_id
def _process_mounts(self):
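'''
Validate the mounts option, convert each entry into a docker_types.Mount,
and build the normalized dicts later used for idempotency comparison.
'''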
if self.mounts is None:
return None, None
mounts_list = []
mounts_expected = []
for mount in self.mounts:
target = mount['target']
datatype = mount['type']
mount_dict = dict(mount)
# Sanity checks (so we don't wait for docker-py to barf on input)
if mount_dict.get('source') is None and datatype != 'tmpfs':
self.client.fail('source must be specified for mount "{0}" of type "{1}"'.format(target, datatype))
mount_option_types = dict(
volume_driver='volume',
volume_options='volume',
propagation='bind',
no_copy='volume',
labels='volume',
tmpfs_size='tmpfs',
tmpfs_mode='tmpfs',
)
for option, req_datatype in mount_option_types.items():
if mount_dict.get(option) is not None and datatype != req_datatype:
self.client.fail('{0} cannot be specified for mount "{1}" of type "{2}" (needs type "{3}")'.format(option, target, datatype, req_datatype))
# Handle volume_driver and volume_options
volume_driver = mount_dict.pop('volume_driver')
volume_options = mount_dict.pop('volume_options')
if volume_driver:
if volume_options:
volume_options = clean_dict_booleans_for_docker_api(volume_options)
mount_dict['driver_config'] = docker_types.DriverConfig(name=volume_driver, options=volume_options)
if mount_dict['labels']:
mount_dict['labels'] = clean_dict_booleans_for_docker_api(mount_dict['labels'])
if mount_dict.get('tmpfs_size') is not None:
try:
mount_dict['tmpfs_size'] = human_to_bytes(mount_dict['tmpfs_size'])
except ValueError as exc:
self.fail('Failed to convert tmpfs_size of mount "{0}" to bytes: {1}'.format(target, exc))
if mount_dict.get('tmpfs_mode') is not None:
try:
mount_dict['tmpfs_mode'] = int(mount_dict['tmpfs_mode'], 8)
except Exception as dummy:
self.client.fail('tmpfs_mode of mount "{0}" is not an octal string!'.format(target))
# Fill expected mount dict
mount_expected = dict(mount)
mount_expected['tmpfs_size'] = mount_dict['tmpfs_size']
mount_expected['tmpfs_mode'] = mount_dict['tmpfs_mode']
# Add result to lists
mounts_list.append(docker_types.Mount(**mount_dict))
mounts_expected.append(omit_none_from_dict(mount_expected))
return mounts_list, mounts_expected
def _process_rate_bps(self, option):
"""
Format device_read_bps and device_write_bps option
"""
devices_list = []
for v in getattr(self, option):
device_dict = dict((x.title(), y) for x, y in v.items())
device_dict['Rate'] = human_to_bytes(device_dict['Rate'])
devices_list.append(device_dict)
setattr(self, option, devices_list)
def _process_rate_iops(self, option):
"""
Format device_read_iops and device_write_iops option
"""
devices_list = []
for v in getattr(self, option):
device_dict = dict((x.title(), y) for x, y in v.items())
devices_list.append(device_dict)
setattr(self, option, devices_list)
def _replace_container_names(self, mode):
"""
Parse IPC and PID modes. If they contain a container name, replace
with the container's ID.
"""
if mode is None or not mode.startswith('container:'):
return mode
container_name = mode[len('container:'):]
# Try to inspect container to see whether this is an ID or a
# name (and in the latter case, retrieve its ID)
container = self.client.get_container(container_name)
if container is None:
# If we can't find the container, issue a warning and continue with
# what the user specified.
self.client.module.warn('Cannot find a container with name or ID "{0}"'.format(container_name))
return mode
return 'container:{0}'.format(container['Id'])
def _check_mount_target_collisions(self):
last = dict()
def f(t, name):
if t in last:
if name == last[t]:
self.client.fail('The mount point "{0}" appears twice in the {1} option'.format(t, name))
else:
self.client.fail('The mount point "{0}" appears both in the {1} and {2} option'.format(t, name, last[t]))
last[t] = name
if self.expected_mounts:
for t in [m['target'] for m in self.expected_mounts]:
f(t, 'mounts')
if self.volumes:
for v in self.volumes:
vs = v.split(':')
f(vs[0 if len(vs) == 1 else 1], 'volumes')
class Container(DockerBaseClass):
def __init__(self, container, parameters):
super(Container, self).__init__()
self.raw = container
self.Id = None
self.container = container
if container:
self.Id = container['Id']
self.Image = container['Image']
self.log(self.container, pretty_print=True)
self.parameters = parameters
self.parameters.expected_links = None
self.parameters.expected_ports = None
self.parameters.expected_exposed = None
self.parameters.expected_volumes = None
self.parameters.expected_ulimits = None
self.parameters.expected_sysctls = None
self.parameters.expected_etc_hosts = None
self.parameters.expected_env = None
self.parameters_map = dict()
self.parameters_map['expected_links'] = 'links'
self.parameters_map['expected_ports'] = 'expected_ports'
self.parameters_map['expected_exposed'] = 'exposed_ports'
self.parameters_map['expected_volumes'] = 'volumes'
self.parameters_map['expected_ulimits'] = 'ulimits'
self.parameters_map['expected_sysctls'] = 'sysctls'
self.parameters_map['expected_etc_hosts'] = 'etc_hosts'
self.parameters_map['expected_env'] = 'env'
self.parameters_map['expected_entrypoint'] = 'entrypoint'
self.parameters_map['expected_binds'] = 'volumes'
self.parameters_map['expected_cmd'] = 'command'
self.parameters_map['expected_devices'] = 'devices'
self.parameters_map['expected_healthcheck'] = 'healthcheck'
self.parameters_map['expected_mounts'] = 'mounts'
def fail(self, msg):
self.parameters.client.fail(msg)
@property
def exists(self):
return True if self.container else False
@property
def removing(self):
if self.container and self.container.get('State'):
return self.container['State'].get('Status') == 'removing'
return False
@property
def running(self):
if self.container and self.container.get('State'):
if self.container['State'].get('Running') and not self.container['State'].get('Ghost', False):
return True
return False
@property
def paused(self):
if self.container and self.container.get('State'):
return self.container['State'].get('Paused', False)
return False
def _compare(self, a, b, compare):
'''
Compare values a and b as described in compare.
'''
return compare_generic(a, b, compare['comparison'], compare['type'])
def _decode_mounts(self, mounts):
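'''
Normalize the API's HostConfig.Mounts entries back into the module's
mount option format so they can be compared against expected_mounts.
'''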
if not mounts:
return mounts
result = []
empty_dict = dict()
for mount in mounts:
res = dict()
res['type'] = mount.get('Type')
res['source'] = mount.get('Source')
res['target'] = mount.get('Target')
res['read_only'] = mount.get('ReadOnly', False) # golang's omitempty for bool returns None for False
res['consistency'] = mount.get('Consistency')
res['propagation'] = mount.get('BindOptions', empty_dict).get('Propagation')
res['no_copy'] = mount.get('VolumeOptions', empty_dict).get('NoCopy', False)
res['labels'] = mount.get('VolumeOptions', empty_dict).get('Labels', empty_dict)
res['volume_driver'] = mount.get('VolumeOptions', empty_dict).get('DriverConfig', empty_dict).get('Name')
res['volume_options'] = mount.get('VolumeOptions', empty_dict).get('DriverConfig', empty_dict).get('Options', empty_dict)
res['tmpfs_size'] = mount.get('TmpfsOptions', empty_dict).get('SizeBytes')
res['tmpfs_mode'] = mount.get('TmpfsOptions', empty_dict).get('Mode')
result.append(res)
return result
def has_different_configuration(self, image):
'''
Diff parameters vs existing container config. Returns tuple: (True | False, List of differences)
'''
self.log('Starting has_different_configuration')
self.parameters.expected_entrypoint = self._get_expected_entrypoint()
self.parameters.expected_links = self._get_expected_links()
self.parameters.expected_ports = self._get_expected_ports()
self.parameters.expected_exposed = self._get_expected_exposed(image)
self.parameters.expected_volumes = self._get_expected_volumes(image)
self.parameters.expected_binds = self._get_expected_binds(image)
self.parameters.expected_ulimits = self._get_expected_ulimits(self.parameters.ulimits)
self.parameters.expected_sysctls = self._get_expected_sysctls(self.parameters.sysctls)
self.parameters.expected_etc_hosts = self._convert_simple_dict_to_list('etc_hosts')
self.parameters.expected_env = self._get_expected_env(image)
self.parameters.expected_cmd = self._get_expected_cmd()
self.parameters.expected_devices = self._get_expected_devices()
self.parameters.expected_healthcheck = self._get_expected_healthcheck()
if not self.container.get('HostConfig'):
self.fail("has_config_diff: Error parsing container properties. HostConfig missing.")
if not self.container.get('Config'):
self.fail("has_config_diff: Error parsing container properties. Config missing.")
if not self.container.get('NetworkSettings'):
self.fail("has_config_diff: Error parsing container properties. NetworkSettings missing.")
host_config = self.container['HostConfig']
log_config = host_config.get('LogConfig', dict())
restart_policy = host_config.get('RestartPolicy', dict())
config = self.container['Config']
network = self.container['NetworkSettings']
# The previous version of the docker module ignored the detach state by
# assuming if the container was running, it must have been detached.
detach = not (config.get('AttachStderr') and config.get('AttachStdout'))
# "ExposedPorts": null returns None type & causes AttributeError - PR #5517
if config.get('ExposedPorts') is not None:
expected_exposed = [self._normalize_port(p) for p in config.get('ExposedPorts', dict()).keys()]
else:
expected_exposed = []
# Map parameters to container inspect results
config_mapping = dict(
expected_cmd=config.get('Cmd'),
domainname=config.get('Domainname'),
hostname=config.get('Hostname'),
user=config.get('User'),
detach=detach,
init=host_config.get('Init'),
interactive=config.get('OpenStdin'),
capabilities=host_config.get('CapAdd'),
cap_drop=host_config.get('CapDrop'),
expected_devices=host_config.get('Devices'),
dns_servers=host_config.get('Dns'),
dns_opts=host_config.get('DnsOptions'),
dns_search_domains=host_config.get('DnsSearch'),
expected_env=(config.get('Env') or []),
expected_entrypoint=config.get('Entrypoint'),
expected_etc_hosts=host_config['ExtraHosts'],
expected_exposed=expected_exposed,
groups=host_config.get('GroupAdd'),
ipc_mode=host_config.get("IpcMode"),
labels=config.get('Labels'),
expected_links=host_config.get('Links'),
mac_address=network.get('MacAddress'),
memory_swappiness=host_config.get('MemorySwappiness'),
network_mode=host_config.get('NetworkMode'),
userns_mode=host_config.get('UsernsMode'),
oom_killer=host_config.get('OomKillDisable'),
oom_score_adj=host_config.get('OomScoreAdj'),
pid_mode=host_config.get('PidMode'),
privileged=host_config.get('Privileged'),
expected_ports=host_config.get('PortBindings'),
read_only=host_config.get('ReadonlyRootfs'),
restart_policy=restart_policy.get('Name'),
runtime=host_config.get('Runtime'),
shm_size=host_config.get('ShmSize'),
security_opts=host_config.get("SecurityOpt"),
stop_signal=config.get("StopSignal"),
tmpfs=host_config.get('Tmpfs'),
tty=config.get('Tty'),
expected_ulimits=host_config.get('Ulimits'),
expected_sysctls=host_config.get('Sysctls'),
uts=host_config.get('UTSMode'),
expected_volumes=config.get('Volumes'),
expected_binds=host_config.get('Binds'),
volume_driver=host_config.get('VolumeDriver'),
volumes_from=host_config.get('VolumesFrom'),
working_dir=config.get('WorkingDir'),
publish_all_ports=host_config.get('PublishAllPorts'),
expected_healthcheck=config.get('Healthcheck'),
disable_healthcheck=(not config.get('Healthcheck') or config.get('Healthcheck').get('Test') == ['NONE']),
device_read_bps=host_config.get('BlkioDeviceReadBps'),
device_write_bps=host_config.get('BlkioDeviceWriteBps'),
device_read_iops=host_config.get('BlkioDeviceReadIOps'),
device_write_iops=host_config.get('BlkioDeviceWriteIOps'),
pids_limit=host_config.get('PidsLimit'),
# According to https://github.com/moby/moby/, support for HostConfig.Mounts
# has been included at least since v17.03.0-ce, which has API version 1.26.
# The previous tag, v1.9.1, has API version 1.21 and does not have
# HostConfig.Mounts. Whether API versions between 1.21 and 1.26 (such as 1.25) support it is unclear.
expected_mounts=self._decode_mounts(host_config.get('Mounts')),
cpus=host_config.get('NanoCpus'),
)
# Options which don't make sense without their accompanying option
if self.parameters.restart_policy:
config_mapping['restart_retries'] = restart_policy.get('MaximumRetryCount')
if self.parameters.log_driver:
config_mapping['log_driver'] = log_config.get('Type')
config_mapping['log_options'] = log_config.get('Config')
if self.parameters.client.option_minimal_versions['auto_remove']['supported']:
# auto_remove is only supported in Docker SDK for Python >= 2.0.0; unfortunately
# it has a default value, that's why we have to jump through the hoops here
config_mapping['auto_remove'] = host_config.get('AutoRemove')
if self.parameters.client.option_minimal_versions['stop_timeout']['supported']:
# stop_timeout is only supported in Docker SDK for Python >= 2.1. Note that
# stop_timeout has a hybrid role, in that it used to be something only used
# for stopping containers, and is now also used as a container property.
# That's why it needs special handling here.
config_mapping['stop_timeout'] = config.get('StopTimeout')
if self.parameters.client.docker_api_version < LooseVersion('1.22'):
# For docker API < 1.22, update_container() is not supported. Thus
# we need to handle all limits which are usually handled by
# update_container() as configuration changes which require a container
# restart.
config_mapping.update(dict(
blkio_weight=host_config.get('BlkioWeight'),
cpu_period=host_config.get('CpuPeriod'),
cpu_quota=host_config.get('CpuQuota'),
cpu_shares=host_config.get('CpuShares'),
cpuset_cpus=host_config.get('CpusetCpus'),
cpuset_mems=host_config.get('CpusetMems'),
kernel_memory=host_config.get("KernelMemory"),
memory=host_config.get('Memory'),
memory_reservation=host_config.get('MemoryReservation'),
memory_swap=host_config.get('MemorySwap'),
))
differences = DifferenceTracker()
for key, value in config_mapping.items():
minimal_version = self.parameters.client.option_minimal_versions.get(key, {})
if not minimal_version.get('supported', True):
continue
compare = self.parameters.client.comparisons[self.parameters_map.get(key, key)]
self.log('check differences %s %s vs %s (%s)' % (key, getattr(self.parameters, key), str(value), compare))
if getattr(self.parameters, key, None) is not None:
match = self._compare(getattr(self.parameters, key), value, compare)
if not match:
# no match. record the differences
p = getattr(self.parameters, key)
c = value
if compare['type'] == 'set':
# Since the order does not matter, sort so that the diff output is better.
if p is not None:
p = sorted(p)
if c is not None:
c = sorted(c)
elif compare['type'] == 'set(dict)':
# Since the order does not matter, sort so that the diff output is better.
if key == 'expected_mounts':
# For selected values, use one entry as key
def sort_key_fn(x):
return x['target']
else:
# We sort the list of dictionaries by using the sorted items of a dict as its key.
def sort_key_fn(x):
return sorted((a, str(b)) for a, b in x.items())
if p is not None:
p = sorted(p, key=sort_key_fn)
if c is not None:
c = sorted(c, key=sort_key_fn)
differences.add(key, parameter=p, active=c)
has_differences = not differences.empty
return has_differences, differences
def has_different_resource_limits(self):
'''
Diff parameters and container resource limits
'''
if not self.container.get('HostConfig'):
self.fail("limits_differ_from_container: Error parsing container properties. HostConfig missing.")
if self.parameters.client.docker_api_version < LooseVersion('1.22'):
# update_container() call not supported
return False, []
host_config = self.container['HostConfig']
config_mapping = dict(
blkio_weight=host_config.get('BlkioWeight'),
cpu_period=host_config.get('CpuPeriod'),
cpu_quota=host_config.get('CpuQuota'),
cpu_shares=host_config.get('CpuShares'),
cpuset_cpus=host_config.get('CpusetCpus'),
cpuset_mems=host_config.get('CpusetMems'),
kernel_memory=host_config.get("KernelMemory"),
memory=host_config.get('Memory'),
memory_reservation=host_config.get('MemoryReservation'),
memory_swap=host_config.get('MemorySwap'),
)
differences = DifferenceTracker()
for key, value in config_mapping.items():
if getattr(self.parameters, key, None):
compare = self.parameters.client.comparisons[self.parameters_map.get(key, key)]
match = self._compare(getattr(self.parameters, key), value, compare)
if not match:
# no match. record the differences
differences.add(key, parameter=getattr(self.parameters, key), active=value)
different = not differences.empty
return different, differences
def has_network_differences(self):
'''
Check if the container is connected to requested networks with expected options: links, aliases, ipv4, ipv6
'''
different = False
differences = []
if not self.parameters.networks:
return different, differences
if not self.container.get('NetworkSettings'):
self.fail("has_missing_networks: Error parsing container properties. NetworkSettings missing.")
connected_networks = self.container['NetworkSettings']['Networks']
for network in self.parameters.networks:
network_info = connected_networks.get(network['name'])
if network_info is None:
different = True
differences.append(dict(
parameter=network,
container=None
))
else:
diff = False
network_info_ipam = network_info.get('IPAMConfig') or {}
if network.get('ipv4_address') and network['ipv4_address'] != network_info_ipam.get('IPv4Address'):
diff = True
if network.get('ipv6_address') and network['ipv6_address'] != network_info_ipam.get('IPv6Address'):
diff = True
if network.get('aliases'):
if not compare_generic(network['aliases'], network_info.get('Aliases'), 'allow_more_present', 'set'):
diff = True
if network.get('links'):
expected_links = []
for link, alias in network['links']:
expected_links.append("%s:%s" % (link, alias))
if not compare_generic(expected_links, network_info.get('Links'), 'allow_more_present', 'set'):
diff = True
if diff:
different = True
differences.append(dict(
parameter=network,
container=dict(
name=network['name'],
ipv4_address=network_info_ipam.get('IPv4Address'),
ipv6_address=network_info_ipam.get('IPv6Address'),
aliases=network_info.get('Aliases'),
links=network_info.get('Links')
)
))
return different, differences
def has_extra_networks(self):
'''
Check if the container is connected to non-requested networks
'''
extra_networks = []
extra = False
if not self.container.get('NetworkSettings'):
self.fail("has_extra_networks: Error parsing container properties. NetworkSettings missing.")
connected_networks = self.container['NetworkSettings'].get('Networks')
if connected_networks:
for network, network_config in connected_networks.items():
keep = False
if self.parameters.networks:
for expected_network in self.parameters.networks:
if expected_network['name'] == network:
keep = True
if not keep:
extra = True
extra_networks.append(dict(name=network, id=network_config['NetworkID']))
return extra, extra_networks
def _get_expected_devices(self):
if not self.parameters.devices:
return None
expected_devices = []
for device in self.parameters.devices:
parts = device.split(':')
if len(parts) == 1:
expected_devices.append(
dict(
CgroupPermissions='rwm',
PathInContainer=parts[0],
PathOnHost=parts[0]
))
elif len(parts) == 2:
parts = device.split(':')
expected_devices.append(
dict(
CgroupPermissions='rwm',
PathInContainer=parts[1],
PathOnHost=parts[0]
)
)
else:
expected_devices.append(
dict(
CgroupPermissions=parts[2],
PathInContainer=parts[1],
PathOnHost=parts[0]
))
return expected_devices
def _get_expected_entrypoint(self):
if not self.parameters.entrypoint:
return None
return shlex.split(self.parameters.entrypoint)
def _get_expected_ports(self):
if not self.parameters.published_ports:
return None
expected_bound_ports = {}
for container_port, config in self.parameters.published_ports.items():
if isinstance(container_port, int):
container_port = "%s/tcp" % container_port
if len(config) == 1:
if isinstance(config[0], int):
expected_bound_ports[container_port] = [{'HostIp': "0.0.0.0", 'HostPort': config[0]}]
else:
expected_bound_ports[container_port] = [{'HostIp': config[0], 'HostPort': ""}]
elif isinstance(config[0], tuple):
expected_bound_ports[container_port] = []
for host_ip, host_port in config:
expected_bound_ports[container_port].append({'HostIp': host_ip, 'HostPort': str(host_port)})
else:
expected_bound_ports[container_port] = [{'HostIp': config[0], 'HostPort': str(config[1])}]
return expected_bound_ports
def _get_expected_links(self):
if self.parameters.links is None:
return None
self.log('parameter links:')
self.log(self.parameters.links, pretty_print=True)
exp_links = []
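# The API reports links as "/<linked container>:/<this container>/<alias>",
# e.g. ('db', 'db_alias') with container name 'web' -> '/db:/web/db_alias'.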
for link, alias in self.parameters.links:
exp_links.append("/%s:%s/%s" % (link, ('/' + self.parameters.name), alias))
return exp_links
def _get_expected_binds(self, image):
self.log('_get_expected_binds')
image_vols = []
if image:
image_vols = self._get_image_binds(image[self.parameters.client.image_inspect_source].get('Volumes'))
param_vols = []
if self.parameters.volumes:
for vol in self.parameters.volumes:
host = None
if ':' in vol:
if len(vol.split(':')) == 3:
host, container, mode = vol.split(':')
if not is_volume_permissions(mode):
self.fail('Found invalid volumes mode: {0}'.format(mode))
if len(vol.split(':')) == 2:
parts = vol.split(':')
if not is_volume_permissions(parts[1]):
host, container, mode = vol.split(':') + ['rw']
if host:
param_vols.append("%s:%s:%s" % (host, container, mode))
result = list(set(image_vols + param_vols))
self.log("expected_binds:")
self.log(result, pretty_print=True)
return result
def _get_image_binds(self, volumes):
'''
Convert array of binds to array of strings with format host_path:container_path:mode
:param volumes: array of bind dicts
:return: array of strings
'''
results = []
if isinstance(volumes, dict):
results += self._get_bind_from_dict(volumes)
elif isinstance(volumes, list):
for vol in volumes:
results += self._get_bind_from_dict(vol)
return results
@staticmethod
def _get_bind_from_dict(volume_dict):
results = []
if volume_dict:
for host_path, config in volume_dict.items():
if isinstance(config, dict) and config.get('bind'):
container_path = config.get('bind')
mode = config.get('mode', 'rw')
results.append("%s:%s:%s" % (host_path, container_path, mode))
return results
def _get_expected_volumes(self, image):
self.log('_get_expected_volumes')
expected_vols = dict()
if image and image[self.parameters.client.image_inspect_source].get('Volumes'):
expected_vols.update(image[self.parameters.client.image_inspect_source].get('Volumes'))
if self.parameters.volumes:
for vol in self.parameters.volumes:
container = None
if ':' in vol:
if len(vol.split(':')) == 3:
dummy, container, mode = vol.split(':')
if not is_volume_permissions(mode):
self.fail('Found invalid volumes mode: {0}'.format(mode))
if len(vol.split(':')) == 2:
parts = vol.split(':')
if not is_volume_permissions(parts[1]):
dummy, container, mode = vol.split(':') + ['rw']
new_vol = dict()
if container:
new_vol[container] = dict()
else:
new_vol[vol] = dict()
expected_vols.update(new_vol)
if not expected_vols:
expected_vols = None
self.log("expected_volumes:")
self.log(expected_vols, pretty_print=True)
return expected_vols
def _get_expected_env(self, image):
self.log('_get_expected_env')
expected_env = dict()
if image and image[self.parameters.client.image_inspect_source].get('Env'):
for env_var in image[self.parameters.client.image_inspect_source]['Env']:
parts = env_var.split('=', 1)
expected_env[parts[0]] = parts[1]
if self.parameters.env:
expected_env.update(self.parameters.env)
param_env = []
for key, value in expected_env.items():
param_env.append("%s=%s" % (key, value))
return param_env
def _get_expected_exposed(self, image):
self.log('_get_expected_exposed')
image_ports = []
if image:
image_exposed_ports = image[self.parameters.client.image_inspect_source].get('ExposedPorts') or {}
image_ports = [self._normalize_port(p) for p in image_exposed_ports.keys()]
param_ports = []
if self.parameters.ports:
param_ports = [str(p[0]) + '/' + p[1] for p in self.parameters.ports]
result = list(set(image_ports + param_ports))
self.log(result, pretty_print=True)
return result
def _get_expected_ulimits(self, config_ulimits):
self.log('_get_expected_ulimits')
if config_ulimits is None:
return None
results = []
for limit in config_ulimits:
results.append(dict(
Name=limit.name,
Soft=limit.soft,
Hard=limit.hard
))
return results
def _get_expected_sysctls(self, config_sysctls):
self.log('_get_expected_sysctls')
if config_sysctls is None:
return None
result = dict()
for key, value in config_sysctls.items():
result[key] = str(value)
return result
def _get_expected_cmd(self):
self.log('_get_expected_cmd')
if not self.parameters.command:
return None
return shlex.split(self.parameters.command)
def _convert_simple_dict_to_list(self, param_name, join_with=':'):
if getattr(self.parameters, param_name, None) is None:
return None
results = []
for key, value in getattr(self.parameters, param_name).items():
results.append("%s%s%s" % (key, join_with, value))
return results
def _normalize_port(self, port):
if '/' not in port:
return port + '/tcp'
return port
def _get_expected_healthcheck(self):
self.log('_get_expected_healthcheck')
expected_healthcheck = dict()
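# Map module option keys to the API's CamelCase form,
# e.g. 'start_period' -> 'StartPeriod'.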
if self.parameters.healthcheck:
expected_healthcheck.update([(k.title().replace("_", ""), v)
for k, v in self.parameters.healthcheck.items()])
return expected_healthcheck
class ContainerManager(DockerBaseClass):
'''
Perform container management tasks
'''
def __init__(self, client):
super(ContainerManager, self).__init__()
if client.module.params.get('log_options') and not client.module.params.get('log_driver'):
client.module.warn('log_options is ignored when log_driver is not specified')
if client.module.params.get('healthcheck') and not client.module.params.get('healthcheck').get('test'):
client.module.warn('healthcheck is ignored when test is not specified')
if client.module.params.get('restart_retries') is not None and not client.module.params.get('restart_policy'):
client.module.warn('restart_retries is ignored when restart_policy is not specified')
self.client = client
self.parameters = TaskParameters(client)
self.check_mode = self.client.check_mode
self.results = {'changed': False, 'actions': []}
self.diff = {}
self.diff_tracker = DifferenceTracker()
self.facts = {}
state = self.parameters.state
if state in ('stopped', 'started', 'present'):
self.present(state)
elif state == 'absent':
self.absent()
if not self.check_mode and not self.parameters.debug:
self.results.pop('actions')
if self.client.module._diff or self.parameters.debug:
self.diff['before'], self.diff['after'] = self.diff_tracker.get_before_after()
self.results['diff'] = self.diff
if self.facts:
self.results['ansible_facts'] = {'docker_container': self.facts}
self.results['container'] = self.facts
def wait_for_state(self, container_id, complete_states=None, wait_states=None, accept_removal=False, max_wait=None):
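'''
Poll the container until it reaches one of complete_states, leaves
wait_states, or (if accept_removal) disappears; fail once max_wait
seconds have elapsed.
'''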
delay = 1.0
total_wait = 0
while True:
# Inspect container
result = self.client.get_container_by_id(container_id)
if result is None:
if accept_removal:
return
msg = 'Encountered vanished container while waiting for container "{0}"'
self.fail(msg.format(container_id))
# Check container state
state = result.get('State', {}).get('Status')
if complete_states is not None and state in complete_states:
return
if wait_states is not None and state not in wait_states:
msg = 'Encountered unexpected state "{1}" while waiting for container "{0}"'
self.fail(msg.format(container_id, state))
# Wait
if max_wait is not None:
if total_wait > max_wait:
msg = 'Timeout of {1} seconds exceeded while waiting for container "{0}"'
self.fail(msg.format(container_id, max_wait))
if total_wait + delay > max_wait:
delay = max_wait - total_wait
sleep(delay)
total_wait += delay
# Exponential backoff, but never wait longer than 10 seconds
# (1.1**24 < 10, 1.1**25 > 10, so it will take 25 iterations
# until the maximal 10 seconds delay is reached. By then, the
# code will have slept for ~1.5 minutes.)
delay = min(delay * 1.1, 10)
def present(self, state):
container = self._get_container(self.parameters.name)
was_running = container.running
was_paused = container.paused
container_created = False
# If the image parameter was passed then we need to deal with the image
# version comparison. Otherwise we handle this depending on whether
# the container already runs or not; in the former case, in case the
# container needs to be restarted, we use the existing container's
# image ID.
image = self._get_image()
self.log(image, pretty_print=True)
if not container.exists or container.removing:
# New container
if container.removing:
self.log('Found container in removal phase')
else:
self.log('No container found')
if not self.parameters.image:
self.fail('Cannot create container when image is not specified!')
self.diff_tracker.add('exists', parameter=True, active=False)
if container.removing and not self.check_mode:
# Wait for container to be removed before trying to create it
self.wait_for_state(
container.Id, wait_states=['removing'], accept_removal=True, max_wait=self.parameters.removal_wait_timeout)
new_container = self.container_create(self.parameters.image, self.parameters.create_parameters)
if new_container:
container = new_container
container_created = True
else:
# Existing container
different, differences = container.has_different_configuration(image)
image_different = False
if self.parameters.comparisons['image']['comparison'] == 'strict':
image_different = self._image_is_different(image, container)
if image_different or different or self.parameters.recreate:
self.diff_tracker.merge(differences)
self.diff['differences'] = differences.get_legacy_docker_container_diffs()
if image_different:
self.diff['image_different'] = True
self.log("differences")
self.log(differences.get_legacy_docker_container_diffs(), pretty_print=True)
image_to_use = self.parameters.image
if not image_to_use and container and container.Image:
image_to_use = container.Image
if not image_to_use:
self.fail('Cannot recreate container when image is not specified or cannot be extracted from current container!')
if container.running:
self.container_stop(container.Id)
self.container_remove(container.Id)
if not self.check_mode:
self.wait_for_state(
container.Id, wait_states=['removing'], accept_removal=True, max_wait=self.parameters.removal_wait_timeout)
new_container = self.container_create(image_to_use, self.parameters.create_parameters)
if new_container:
container = new_container
container_created = True
if container and container.exists:
container = self.update_limits(container)
container = self.update_networks(container, container_created)
if state == 'started' and not container.running:
self.diff_tracker.add('running', parameter=True, active=was_running)
container = self.container_start(container.Id)
elif state == 'started' and self.parameters.restart:
self.diff_tracker.add('running', parameter=True, active=was_running)
self.diff_tracker.add('restarted', parameter=True, active=False)
container = self.container_restart(container.Id)
elif state == 'stopped' and container.running:
self.diff_tracker.add('running', parameter=False, active=was_running)
self.container_stop(container.Id)
container = self._get_container(container.Id)
if state == 'started' and container.paused is not None and container.paused != self.parameters.paused:
self.diff_tracker.add('paused', parameter=self.parameters.paused, active=was_paused)
if not self.check_mode:
try:
if self.parameters.paused:
self.client.pause(container=container.Id)
else:
self.client.unpause(container=container.Id)
except Exception as exc:
self.fail("Error %s container %s: %s" % (
"pausing" if self.parameters.paused else "unpausing", container.Id, str(exc)
))
container = self._get_container(container.Id)
self.results['changed'] = True
self.results['actions'].append(dict(set_paused=self.parameters.paused))
self.facts = container.raw
def absent(self):
container = self._get_container(self.parameters.name)
if container.exists:
if container.running:
self.diff_tracker.add('running', parameter=False, active=True)
self.container_stop(container.Id)
self.diff_tracker.add('exists', parameter=False, active=True)
self.container_remove(container.Id)
def fail(self, msg, **kwargs):
self.client.fail(msg, **kwargs)
def _output_logs(self, msg):
self.client.module.log(msg=msg)
def _get_container(self, container):
'''
Expects container ID or Name. Returns a container object
'''
return Container(self.client.get_container(container), self.parameters)
def _get_image(self):
if not self.parameters.image:
self.log('No image specified')
return None
if is_image_name_id(self.parameters.image):
image = self.client.find_image_by_id(self.parameters.image)
else:
repository, tag = utils.parse_repository_tag(self.parameters.image)
if not tag:
tag = "latest"
image = self.client.find_image(repository, tag)
if not image or self.parameters.pull:
if not self.check_mode:
self.log("Pull the image.")
image, already_to_latest = self.client.pull_image(repository, tag)
if already_to_latest:
self.results['changed'] = False
else:
self.results['changed'] = True
self.results['actions'].append(dict(pulled_image="%s:%s" % (repository, tag)))
elif not image:
# If the image isn't there, claim we'll pull.
# (Implicitly: if the image is there, claim it already was latest.)
self.results['changed'] = True
self.results['actions'].append(dict(pulled_image="%s:%s" % (repository, tag)))
self.log("image")
self.log(image, pretty_print=True)
return image
def _image_is_different(self, image, container):
if image and image.get('Id'):
if container and container.Image:
if image.get('Id') != container.Image:
self.diff_tracker.add('image', parameter=image.get('Id'), active=container.Image)
return True
return False
def update_limits(self, container):
limits_differ, different_limits = container.has_different_resource_limits()
if limits_differ:
self.log("limit differences:")
self.log(different_limits.get_legacy_docker_container_diffs(), pretty_print=True)
self.diff_tracker.merge(different_limits)
if limits_differ and not self.check_mode:
self.container_update(container.Id, self.parameters.update_parameters)
return self._get_container(container.Id)
return container
def update_networks(self, container, container_created):
updated_container = container
if self.parameters.comparisons['networks']['comparison'] != 'ignore' or container_created:
has_network_differences, network_differences = container.has_network_differences()
if has_network_differences:
if self.diff.get('differences'):
self.diff['differences'].append(dict(network_differences=network_differences))
else:
self.diff['differences'] = [dict(network_differences=network_differences)]
for netdiff in network_differences:
self.diff_tracker.add(
'network.{0}'.format(netdiff['parameter']['name']),
parameter=netdiff['parameter'],
active=netdiff['container']
)
self.results['changed'] = True
updated_container = self._add_networks(container, network_differences)
if (self.parameters.comparisons['networks']['comparison'] == 'strict' and self.parameters.networks is not None) or self.parameters.purge_networks:
has_extra_networks, extra_networks = container.has_extra_networks()
if has_extra_networks:
if self.diff.get('differences'):
self.diff['differences'].append(dict(purge_networks=extra_networks))
else:
self.diff['differences'] = [dict(purge_networks=extra_networks)]
for extra_network in extra_networks:
self.diff_tracker.add(
'network.{0}'.format(extra_network['name']),
active=extra_network
)
self.results['changed'] = True
updated_container = self._purge_networks(container, extra_networks)
return updated_container
def _add_networks(self, container, differences):
for diff in differences:
# remove the container from the network, if connected
if diff.get('container'):
self.results['actions'].append(dict(removed_from_network=diff['parameter']['name']))
if not self.check_mode:
try:
self.client.disconnect_container_from_network(container.Id, diff['parameter']['id'])
except Exception as exc:
self.fail("Error disconnecting container from network %s - %s" % (diff['parameter']['name'],
str(exc)))
# connect to the network
params = dict()
for para in ('ipv4_address', 'ipv6_address', 'links', 'aliases'):
if diff['parameter'].get(para):
params[para] = diff['parameter'][para]
self.results['actions'].append(dict(added_to_network=diff['parameter']['name'], network_parameters=params))
if not self.check_mode:
try:
self.log("Connecting container to network %s" % diff['parameter']['id'])
self.log(params, pretty_print=True)
self.client.connect_container_to_network(container.Id, diff['parameter']['id'], **params)
except Exception as exc:
self.fail("Error connecting container to network %s - %s" % (diff['parameter']['name'], str(exc)))
return self._get_container(container.Id)
def _purge_networks(self, container, networks):
for network in networks:
self.results['actions'].append(dict(removed_from_network=network['name']))
if not self.check_mode:
try:
self.client.disconnect_container_from_network(container.Id, network['name'])
except Exception as exc:
self.fail("Error disconnecting container from network %s - %s" % (network['name'],
str(exc)))
return self._get_container(container.Id)
def container_create(self, image, create_parameters):
self.log("create container")
self.log("image: %s parameters:" % image)
self.log(create_parameters, pretty_print=True)
self.results['actions'].append(dict(created="Created container", create_parameters=create_parameters))
self.results['changed'] = True
new_container = None
if not self.check_mode:
try:
new_container = self.client.create_container(image, **create_parameters)
self.client.report_warnings(new_container)
except Exception as exc:
self.fail("Error creating container: %s" % str(exc))
return self._get_container(new_container['Id'])
return new_container
def container_start(self, container_id):
self.log("start container %s" % (container_id))
self.results['actions'].append(dict(started=container_id))
self.results['changed'] = True
if not self.check_mode:
try:
self.client.start(container=container_id)
except Exception as exc:
self.fail("Error starting container %s: %s" % (container_id, str(exc)))
if self.parameters.detach is False:
if self.client.docker_py_version >= LooseVersion('3.0'):
status = self.client.wait(container_id)['StatusCode']
else:
status = self.client.wait(container_id)
if self.parameters.auto_remove:
output = "Cannot retrieve result as auto_remove is enabled"
if self.parameters.output_logs:
self.client.module.warn('Cannot output_logs if auto_remove is enabled!')
else:
config = self.client.inspect_container(container_id)
logging_driver = config['HostConfig']['LogConfig']['Type']
if logging_driver in ('json-file', 'journald'):
output = self.client.logs(container_id, stdout=True, stderr=True, stream=False, timestamps=False)
if self.parameters.output_logs:
self._output_logs(msg=output)
else:
output = "Result logged using `%s` driver" % logging_driver
if status != 0:
self.fail(output, status=status)
if self.parameters.cleanup:
self.container_remove(container_id, force=True)
insp = self._get_container(container_id)
if insp.raw:
insp.raw['Output'] = output
else:
insp.raw = dict(Output=output)
return insp
return self._get_container(container_id)
def container_remove(self, container_id, link=False, force=False):
volume_state = (not self.parameters.keep_volumes)
self.log("remove container container:%s v:%s link:%s force%s" % (container_id, volume_state, link, force))
self.results['actions'].append(dict(removed=container_id, volume_state=volume_state, link=link, force=force))
self.results['changed'] = True
response = None
if not self.check_mode:
count = 0
while True:
try:
response = self.client.remove_container(container_id, v=volume_state, link=link, force=force)
except NotFound as dummy:
pass
except APIError as exc:
if 'Unpause the container before stopping or killing' in exc.explanation:
# New docker daemon versions do not allow containers to be removed
# if they are paused. Make sure we don't end up in an infinite loop.
if count == 3:
self.fail("Error removing container %s (tried to unpause three times): %s" % (container_id, str(exc)))
count += 1
# Unpause
try:
self.client.unpause(container=container_id)
except Exception as exc2:
self.fail("Error unpausing container %s for removal: %s" % (container_id, str(exc2)))
# Now try again
continue
if 'removal of container ' in exc.explanation and ' is already in progress' in exc.explanation:
pass
else:
self.fail("Error removing container %s: %s" % (container_id, str(exc)))
except Exception as exc:
self.fail("Error removing container %s: %s" % (container_id, str(exc)))
# We only loop when explicitly requested by 'continue'
break
return response
def container_update(self, container_id, update_parameters):
if update_parameters:
self.log("update container %s" % (container_id))
self.log(update_parameters, pretty_print=True)
self.results['actions'].append(dict(updated=container_id, update_parameters=update_parameters))
self.results['changed'] = True
if not self.check_mode and callable(getattr(self.client, 'update_container')):
try:
result = self.client.update_container(container_id, **update_parameters)
self.client.report_warnings(result)
except Exception as exc:
self.fail("Error updating container %s: %s" % (container_id, str(exc)))
return self._get_container(container_id)
def container_kill(self, container_id):
self.results['actions'].append(dict(killed=container_id, signal=self.parameters.kill_signal))
self.results['changed'] = True
response = None
if not self.check_mode:
try:
if self.parameters.kill_signal:
response = self.client.kill(container_id, signal=self.parameters.kill_signal)
else:
response = self.client.kill(container_id)
except Exception as exc:
self.fail("Error killing container %s: %s" % (container_id, exc))
return response
def container_restart(self, container_id):
self.results['actions'].append(dict(restarted=container_id, timeout=self.parameters.stop_timeout))
self.results['changed'] = True
if not self.check_mode:
try:
if self.parameters.stop_timeout:
dummy = self.client.restart(container_id, timeout=self.parameters.stop_timeout)
else:
dummy = self.client.restart(container_id)
except Exception as exc:
self.fail("Error restarting container %s: %s" % (container_id, str(exc)))
return self._get_container(container_id)
def container_stop(self, container_id):
if self.parameters.force_kill:
self.container_kill(container_id)
return
self.results['actions'].append(dict(stopped=container_id, timeout=self.parameters.stop_timeout))
self.results['changed'] = True
response = None
if not self.check_mode:
count = 0
while True:
try:
if self.parameters.stop_timeout:
response = self.client.stop(container_id, timeout=self.parameters.stop_timeout)
else:
response = self.client.stop(container_id)
except APIError as exc:
if 'Unpause the container before stopping or killing' in exc.explanation:
# New docker daemon versions do not allow containers to be removed
# if they are paused. Make sure we don't end up in an infinite loop.
if count == 3:
self.fail("Error removing container %s (tried to unpause three times): %s" % (container_id, str(exc)))
count += 1
# Unpause
try:
self.client.unpause(container=container_id)
except Exception as exc2:
self.fail("Error unpausing container %s for removal: %s" % (container_id, str(exc2)))
# Now try again
continue
self.fail("Error stopping container %s: %s" % (container_id, str(exc)))
except Exception as exc:
self.fail("Error stopping container %s: %s" % (container_id, str(exc)))
# We only loop when explicitly requested by 'continue'
break
return response
def detect_ipvX_address_usage(client):
'''
Helper function to detect whether any specified network uses ipv4_address or ipv6_address
'''
for network in client.module.params.get("networks") or []:
if network.get('ipv4_address') is not None or network.get('ipv6_address') is not None:
return True
return False
class AnsibleDockerClientContainer(AnsibleDockerClient):
# A list of module options which are not docker container properties
__NON_CONTAINER_PROPERTY_OPTIONS = tuple([
'env_file', 'force_kill', 'keep_volumes', 'ignore_image', 'name', 'pull', 'purge_networks',
'recreate', 'restart', 'state', 'trust_image_content', 'networks', 'cleanup', 'kill_signal',
'output_logs', 'paused', 'removal_wait_timeout'
] + list(DOCKER_COMMON_ARGS.keys()))
def _parse_comparisons(self):
comparisons = {}
comp_aliases = {}
# Put in defaults
explicit_types = dict(
command='list',
devices='set(dict)',
dns_search_domains='list',
dns_servers='list',
env='set',
entrypoint='list',
etc_hosts='set',
mounts='set(dict)',
networks='set(dict)',
ulimits='set(dict)',
device_read_bps='set(dict)',
device_write_bps='set(dict)',
device_read_iops='set(dict)',
device_write_iops='set(dict)',
)
all_options = set() # this is for improving user feedback when a wrong option was specified for comparison
default_values = dict(
stop_timeout='ignore',
)
for option, data in self.module.argument_spec.items():
all_options.add(option)
for alias in data.get('aliases', []):
all_options.add(alias)
# Ignore options which aren't used as container properties
if option in self.__NON_CONTAINER_PROPERTY_OPTIONS and option != 'networks':
continue
# Determine option type
if option in explicit_types:
datatype = explicit_types[option]
elif data['type'] == 'list':
datatype = 'set'
elif data['type'] == 'dict':
datatype = 'dict'
else:
datatype = 'value'
# Determine comparison type
if option in default_values:
comparison = default_values[option]
elif datatype in ('list', 'value'):
comparison = 'strict'
else:
comparison = 'allow_more_present'
comparisons[option] = dict(type=datatype, comparison=comparison, name=option)
# Keep track of aliases
comp_aliases[option] = option
for alias in data.get('aliases', []):
comp_aliases[alias] = option
# Process legacy ignore options
if self.module.params['ignore_image']:
comparisons['image']['comparison'] = 'ignore'
if self.module.params['purge_networks']:
comparisons['networks']['comparison'] = 'strict'
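# A typical user-supplied value looks like
# comparisons={'*': 'ignore', 'env': 'strict', 'networks': 'allow_more_present'}.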
# Process options
if self.module.params.get('comparisons'):
# If '*' appears in comparisons, process it first
if '*' in self.module.params['comparisons']:
value = self.module.params['comparisons']['*']
if value not in ('strict', 'ignore'):
self.fail("The wildcard can only be used with comparison modes 'strict' and 'ignore'!")
for option, v in comparisons.items():
if option == 'networks':
# `networks` is special: only update if
# some value is actually specified
if self.module.params['networks'] is None:
continue
v['comparison'] = value
# Now process all other comparisons.
comp_aliases_used = {}
for key, value in self.module.params['comparisons'].items():
if key == '*':
continue
# Find main key
key_main = comp_aliases.get(key)
if key_main is None:
if key in all_options:
self.fail("The module option '%s' cannot be specified in the comparisons dict, "
"since it does not correspond to the container's state!" % key)
self.fail("Unknown module option '%s' in comparisons dict!" % key)
if key_main in comp_aliases_used:
self.fail("Both '%s' and '%s' (aliases of %s) are specified in comparisons dict!" % (key, comp_aliases_used[key_main], key_main))
comp_aliases_used[key_main] = key
# Check value and update accordingly
if value in ('strict', 'ignore'):
comparisons[key_main]['comparison'] = value
elif value == 'allow_more_present':
if comparisons[key_main]['type'] == 'value':
self.fail("Option '%s' is a value and not a set/list/dict, so its comparison cannot be %s" % (key, value))
comparisons[key_main]['comparison'] = value
else:
self.fail("Unknown comparison mode '%s'!" % value)
# Add implicit options
comparisons['publish_all_ports'] = dict(type='value', comparison='strict', name='published_ports')
comparisons['expected_ports'] = dict(type='dict', comparison=comparisons['published_ports']['comparison'], name='expected_ports')
comparisons['disable_healthcheck'] = dict(type='value',
comparison='ignore' if comparisons['healthcheck']['comparison'] == 'ignore' else 'strict',
name='disable_healthcheck')
# Check legacy values
if self.module.params['ignore_image'] and comparisons['image']['comparison'] != 'ignore':
self.module.warn('The ignore_image option has been overridden by the comparisons option!')
if self.module.params['purge_networks'] and comparisons['networks']['comparison'] != 'strict':
self.module.warn('The purge_networks option has been overridden by the comparisons option!')
self.comparisons = comparisons
def _get_additional_minimal_versions(self):
stop_timeout_supported = self.docker_api_version >= LooseVersion('1.25')
stop_timeout_needed_for_update = self.module.params.get("stop_timeout") is not None and self.module.params.get('state') != 'absent'
if stop_timeout_supported:
stop_timeout_supported = self.docker_py_version >= LooseVersion('2.1')
if stop_timeout_needed_for_update and not stop_timeout_supported:
# We warn (instead of fail) since in older versions, stop_timeout was not used
# to update the container's configuration, but only when stopping a container.
self.module.warn("Docker SDK for Python's version is %s. Minimum version required is 2.1 to update "
"the container's stop_timeout configuration. "
"If you use the 'docker-py' module, you have to switch to the 'docker' Python package." % (docker_version,))
else:
if stop_timeout_needed_for_update and not stop_timeout_supported:
# We warn (instead of fail) since in older versions, stop_timeout was not used
# to update the container's configuration, but only when stopping a container.
self.module.warn("Docker API version is %s. Minimum version required is 1.25 to set or "
"update the container's stop_timeout configuration." % (self.docker_api_version_str,))
self.option_minimal_versions['stop_timeout']['supported'] = stop_timeout_supported
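# Example (assumed environment): with Docker SDK for Python 2.0.0 against
# API 1.25 and stop_timeout set for a non-absent state, the SDK check fails,
# a warning is emitted, and stop_timeout is then only used when stopping the
# container, not for updating its configuration.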
def __init__(self, **kwargs):
option_minimal_versions = dict(
# internal options
log_config=dict(),
publish_all_ports=dict(),
ports=dict(),
volume_binds=dict(),
name=dict(),
# normal options
device_read_bps=dict(docker_py_version='1.9.0', docker_api_version='1.22'),
device_read_iops=dict(docker_py_version='1.9.0', docker_api_version='1.22'),
device_write_bps=dict(docker_py_version='1.9.0', docker_api_version='1.22'),
device_write_iops=dict(docker_py_version='1.9.0', docker_api_version='1.22'),
dns_opts=dict(docker_api_version='1.21', docker_py_version='1.10.0'),
ipc_mode=dict(docker_api_version='1.25'),
mac_address=dict(docker_api_version='1.25'),
oom_score_adj=dict(docker_api_version='1.22'),
shm_size=dict(docker_api_version='1.22'),
stop_signal=dict(docker_api_version='1.21'),
tmpfs=dict(docker_api_version='1.22'),
volume_driver=dict(docker_api_version='1.21'),
memory_reservation=dict(docker_api_version='1.21'),
kernel_memory=dict(docker_api_version='1.21'),
auto_remove=dict(docker_py_version='2.1.0', docker_api_version='1.25'),
healthcheck=dict(docker_py_version='2.0.0', docker_api_version='1.24'),
init=dict(docker_py_version='2.2.0', docker_api_version='1.25'),
runtime=dict(docker_py_version='2.4.0', docker_api_version='1.25'),
sysctls=dict(docker_py_version='1.10.0', docker_api_version='1.24'),
userns_mode=dict(docker_py_version='1.10.0', docker_api_version='1.23'),
uts=dict(docker_py_version='3.5.0', docker_api_version='1.25'),
pids_limit=dict(docker_py_version='1.10.0', docker_api_version='1.23'),
mounts=dict(docker_py_version='2.6.0', docker_api_version='1.25'),
cpus=dict(docker_py_version='2.3.0', docker_api_version='1.25'),
# specials
ipvX_address_supported=dict(docker_py_version='1.9.0', docker_api_version='1.22',
detect_usage=detect_ipvX_address_usage,
usage_msg='ipv4_address or ipv6_address in networks'),
stop_timeout=dict(), # see _get_additional_minimal_versions()
)
super(AnsibleDockerClientContainer, self).__init__(
option_minimal_versions=option_minimal_versions,
option_minimal_versions_ignore_params=self.__NON_CONTAINER_PROPERTY_OPTIONS,
**kwargs
)
self.image_inspect_source = 'Config'
if self.docker_api_version < LooseVersion('1.21'):
self.image_inspect_source = 'ContainerConfig'
self._get_additional_minimal_versions()
self._parse_comparisons()
if self.module.params['container_default_behavior'] is None:
self.module.params['container_default_behavior'] = 'compatibility'
self.module.deprecate(
'The container_default_behavior option will change its default value from "compatibility" to '
'"no_defaults" in Ansible 2.14. To remove this warning, please specify an explicit value for it now',
version='2.14'
)
if self.module.params['container_default_behavior'] == 'compatibility':
old_default_values = dict(
auto_remove=False,
detach=True,
init=False,
interactive=False,
memory="0",
paused=False,
privileged=False,
read_only=False,
tty=False,
)
for param, value in old_default_values.items():
if self.module.params[param] is None:
self.module.params[param] = value
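# Example: with container_default_behavior=compatibility and no explicit
# 'detach' given, detach falls back to True here, preserving the behavior
# from before this option existed.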
def main():
argument_spec = dict(
auto_remove=dict(type='bool'),
blkio_weight=dict(type='int'),
capabilities=dict(type='list', elements='str'),
cap_drop=dict(type='list', elements='str'),
cleanup=dict(type='bool', default=False),
command=dict(type='raw'),
comparisons=dict(type='dict'),
container_default_behavior=dict(type='str', choices=['compatibility', 'no_defaults']),
cpu_period=dict(type='int'),
cpu_quota=dict(type='int'),
cpus=dict(type='float'),
cpuset_cpus=dict(type='str'),
cpuset_mems=dict(type='str'),
cpu_shares=dict(type='int'),
detach=dict(type='bool'),
devices=dict(type='list', elements='str'),
device_read_bps=dict(type='list', elements='dict', options=dict(
path=dict(required=True, type='str'),
rate=dict(required=True, type='str'),
)),
device_write_bps=dict(type='list', elements='dict', options=dict(
path=dict(required=True, type='str'),
rate=dict(required=True, type='str'),
)),
device_read_iops=dict(type='list', elements='dict', options=dict(
path=dict(required=True, type='str'),
rate=dict(required=True, type='int'),
)),
device_write_iops=dict(type='list', elements='dict', options=dict(
path=dict(required=True, type='str'),
rate=dict(required=True, type='int'),
)),
dns_servers=dict(type='list', elements='str'),
dns_opts=dict(type='list', elements='str'),
dns_search_domains=dict(type='list', elements='str'),
domainname=dict(type='str'),
entrypoint=dict(type='list', elements='str'),
env=dict(type='dict'),
env_file=dict(type='path'),
etc_hosts=dict(type='dict'),
exposed_ports=dict(type='list', elements='str', aliases=['exposed', 'expose']),
force_kill=dict(type='bool', default=False, aliases=['forcekill']),
groups=dict(type='list', elements='str'),
healthcheck=dict(type='dict', options=dict(
test=dict(type='raw'),
interval=dict(type='str'),
timeout=dict(type='str'),
start_period=dict(type='str'),
retries=dict(type='int'),
)),
hostname=dict(type='str'),
ignore_image=dict(type='bool', default=False),
image=dict(type='str'),
init=dict(type='bool'),
interactive=dict(type='bool'),
ipc_mode=dict(type='str'),
keep_volumes=dict(type='bool', default=True),
kernel_memory=dict(type='str'),
kill_signal=dict(type='str'),
labels=dict(type='dict'),
links=dict(type='list', elements='str'),
log_driver=dict(type='str'),
log_options=dict(type='dict', aliases=['log_opt']),
mac_address=dict(type='str'),
memory=dict(type='str'),
memory_reservation=dict(type='str'),
memory_swap=dict(type='str'),
memory_swappiness=dict(type='int'),
mounts=dict(type='list', elements='dict', options=dict(
target=dict(type='str', required=True),
source=dict(type='str'),
type=dict(type='str', choices=['bind', 'volume', 'tmpfs', 'npipe'], default='volume'),
read_only=dict(type='bool'),
consistency=dict(type='str', choices=['default', 'consistent', 'cached', 'delegated']),
propagation=dict(type='str', choices=['private', 'rprivate', 'shared', 'rshared', 'slave', 'rslave']),
no_copy=dict(type='bool'),
labels=dict(type='dict'),
volume_driver=dict(type='str'),
volume_options=dict(type='dict'),
tmpfs_size=dict(type='str'),
tmpfs_mode=dict(type='str'),
)),
name=dict(type='str', required=True),
network_mode=dict(type='str'),
networks=dict(type='list', elements='dict', options=dict(
name=dict(type='str', required=True),
ipv4_address=dict(type='str'),
ipv6_address=dict(type='str'),
aliases=dict(type='list', elements='str'),
links=dict(type='list', elements='str'),
)),
networks_cli_compatible=dict(type='bool'),
oom_killer=dict(type='bool'),
oom_score_adj=dict(type='int'),
output_logs=dict(type='bool', default=False),
paused=dict(type='bool'),
pid_mode=dict(type='str'),
pids_limit=dict(type='int'),
privileged=dict(type='bool'),
published_ports=dict(type='list', elements='str', aliases=['ports']),
pull=dict(type='bool', default=False),
purge_networks=dict(type='bool', default=False),
read_only=dict(type='bool'),
recreate=dict(type='bool', default=False),
removal_wait_timeout=dict(type='float'),
restart=dict(type='bool', default=False),
restart_policy=dict(type='str', choices=['no', 'on-failure', 'always', 'unless-stopped']),
restart_retries=dict(type='int'),
runtime=dict(type='str'),
security_opts=dict(type='list', elements='str'),
shm_size=dict(type='str'),
state=dict(type='str', default='started', choices=['absent', 'present', 'started', 'stopped']),
stop_signal=dict(type='str'),
stop_timeout=dict(type='int'),
sysctls=dict(type='dict'),
tmpfs=dict(type='list', elements='str'),
trust_image_content=dict(type='bool', default=False, removed_in_version='2.14'),
tty=dict(type='bool'),
ulimits=dict(type='list', elements='str'),
user=dict(type='str'),
userns_mode=dict(type='str'),
uts=dict(type='str'),
volume_driver=dict(type='str'),
volumes=dict(type='list', elements='str'),
volumes_from=dict(type='list', elements='str'),
working_dir=dict(type='str'),
)
required_if = [
('state', 'present', ['image'])
]
client = AnsibleDockerClientContainer(
argument_spec=argument_spec,
required_if=required_if,
supports_check_mode=True,
min_docker_api_version='1.20',
)
if client.module.params['networks_cli_compatible'] is None and client.module.params['networks']:
client.module.deprecate(
'Please note that docker_container handles networks slightly different than docker CLI. '
'If you specify networks, the default network will still be attached as the first network. '
'(You can specify purge_networks to remove all networks not explicitly listed.) '
'This behavior will change in Ansible 2.12. You can change the behavior now by setting '
'the new `networks_cli_compatible` option to `yes`, and remove this warning by setting '
'it to `no`',
version='2.12'
)
if client.module.params['networks_cli_compatible'] is True and client.module.params['networks'] and client.module.params['network_mode'] is None:
client.module.deprecate(
'Please note that the default value for `network_mode` will change from not specified '
'(which is equal to `default`) to the name of the first network in `networks` if '
'`networks` has at least one entry and `networks_cli_compatible` is `true`. You can '
'change the behavior now by explicitly setting `network_mode` to the name of the first '
'network in `networks`, and remove this warning by setting `network_mode` to `default`. '
'Please make sure that the value you set to `network_mode` equals the inspection result '
'for existing containers, otherwise the module will recreate them. You can find out the '
'correct value by running "docker inspect --format \'{{.HostConfig.NetworkMode}}\' <container_name>"',
version='2.14'
)
try:
cm = ContainerManager(client)
client.module.exit_json(**sanitize_result(cm.results))
except DockerException as e:
client.fail('An unexpected docker error occurred: {0}'.format(e), exception=traceback.format_exc())
except RequestException as e:
client.fail('An unexpected requests error occurred when docker-py tried to talk to the docker daemon: {0}'.format(e), exception=traceback.format_exc())
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,674 |
yum integration test failing on Fedora 31
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
`yum` tests on Fedora 31 are failing
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
`test/integration/targets/yum/`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
devel
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
N/A
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Fedora 31
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
I am unable to reproduce this outside of Shippable. The test command is `ansible-test integration --docker fedora31 yum -vvv`
<!--- HINT: You can paste gist.github.com links for larger files -->
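For reference, a minimal task sequence (reconstructed from the failing log below; the package glob is the one the test uses) that should trigger the failure:
```yaml
- yum:
    name: 'lohit-*-fonts'
    state: present

- yum:
    name: 'lohit-*-fonts'
    state: absent    # 1st removal succeeds

- yum:
    name: 'lohit-*-fonts'
    state: absent    # 2nd removal fails on Fedora 31: "No match for argument"
```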
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Tests pass
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Tests fail when trying to remove packages that have already been removed, with the error `"lohit-*-fonts - No match for argument: lohit-*-fonts"`.
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [yum : Install lohit-*-fonts] *********************************************
task path: /root/ansible/test/results/.tmp/integration/yum-gmgubc6r-ÅÑŚÌβŁÈ/test/integration/targets/yum/tasks/yum.yml:697
Running dnf as the backend for the yum action plugin
<testhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<testhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<testhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1575866355.2913747-83952290883417 `" && echo ansible-tmp-1575866355.2913747-83952290883417="` echo /root/.ansible/tmp/ansible-tmp-1575866355.2913747-83952290883417 `" ) && sleep 0'
Using module file /root/ansible/lib/ansible/modules/packaging/os/dnf.py
<testhost> PUT /root/.ansible/tmp/ansible-local-21431wvciwh04/tmpcdjoe_t2 TO /root/.ansible/tmp/ansible-tmp-1575866355.2913747-83952290883417/AnsiballZ_dnf.py
<testhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1575866355.2913747-83952290883417/ /root/.ansible/tmp/ansible-tmp-1575866355.2913747-83952290883417/AnsiballZ_dnf.py && sleep 0'
<testhost> EXEC /bin/sh -c '/tmp/python-ecn_n5k0-ansible/python /root/.ansible/tmp/ansible-tmp-1575866355.2913747-83952290883417/AnsiballZ_dnf.py && sleep 0'
<testhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1575866355.2913747-83952290883417/ > /dev/null 2>&1 && sleep 0'
changed: [testhost] => {
"changed": true,
"invocation": {
"module_args": {
"allow_downgrade": false,
"autoremove": false,
"bugfix": false,
"conf_file": null,
"disable_excludes": null,
"disable_gpg_check": false,
"disable_plugin": [],
"disablerepo": [],
"download_dir": null,
"download_only": false,
"enable_plugin": [],
"enablerepo": [],
"exclude": [],
"install_repoquery": true,
"install_weak_deps": true,
"installroot": "/",
"list": null,
"lock_timeout": 30,
"name": [
"lohit-*-fonts"
],
"releasever": null,
"security": false,
"skip_broken": false,
"state": "present",
"update_cache": false,
"update_only": false,
"validate_certs": true
}
},
"msg": "",
"rc": 0,
"results": [
"Installed: lohit-*-fonts",
"Installed: lohit-assamese-fonts-2.91.5-8.fc31.noarch",
"Installed: lohit-bengali-fonts-2.91.5-8.fc31.noarch",
"Installed: lohit-devanagari-fonts-2.95.4-9.fc31.noarch",
"Installed: lohit-gujarati-fonts-2.92.4-8.fc31.noarch",
"Installed: lohit-gurmukhi-fonts-2.91.2-9.fc31.noarch",
"Installed: lohit-kannada-fonts-2.5.4-7.fc31.noarch",
"Installed: lohit-malayalam-fonts-2.92.2-8.fc31.noarch",
"Installed: lohit-marathi-fonts-2.94.2-9.fc31.noarch",
"Installed: lohit-nepali-fonts-2.94.2-8.fc31.noarch",
"Installed: lohit-odia-fonts-2.91.2-8.fc31.noarch",
"Installed: lohit-tamil-classical-fonts-2.5.3-12.fc31.noarch",
"Installed: lohit-tamil-fonts-2.91.3-8.fc31.noarch",
"Installed: lohit-telugu-fonts-2.5.5-7.fc31.noarch"
]
}
TASK [yum : Remove lohit-*-fonts (1st time)] ***********************************
task path: /root/ansible/test/results/.tmp/integration/yum-gmgubc6r-ÅÑŚÌβŁÈ/test/integration/targets/yum/tasks/yum.yml:702
Running dnf as the backend for the yum action plugin
<testhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<testhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<testhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1575866359.546942-200187246790201 `" && echo ansible-tmp-1575866359.546942-200187246790201="` echo /root/.ansible/tmp/ansible-tmp-1575866359.546942-200187246790201 `" ) && sleep 0'
Using module file /root/ansible/lib/ansible/modules/packaging/os/dnf.py
<testhost> PUT /root/.ansible/tmp/ansible-local-21431wvciwh04/tmp8bjabsgt TO /root/.ansible/tmp/ansible-tmp-1575866359.546942-200187246790201/AnsiballZ_dnf.py
<testhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1575866359.546942-200187246790201/ /root/.ansible/tmp/ansible-tmp-1575866359.546942-200187246790201/AnsiballZ_dnf.py && sleep 0'
<testhost> EXEC /bin/sh -c '/tmp/python-ecn_n5k0-ansible/python /root/.ansible/tmp/ansible-tmp-1575866359.546942-200187246790201/AnsiballZ_dnf.py && sleep 0'
<testhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1575866359.546942-200187246790201/ > /dev/null 2>&1 && sleep 0'
changed: [testhost] => {
"changed": true,
"invocation": {
"module_args": {
"allow_downgrade": false,
"autoremove": false,
"bugfix": false,
"conf_file": null,
"disable_excludes": null,
"disable_gpg_check": false,
"disable_plugin": [],
"disablerepo": [],
"download_dir": null,
"download_only": false,
"enable_plugin": [],
"enablerepo": [],
"exclude": [],
"install_repoquery": true,
"install_weak_deps": true,
"installroot": "/",
"list": null,
"lock_timeout": 30,
"name": [
"lohit-*-fonts"
],
"releasever": null,
"security": false,
"skip_broken": false,
"state": "absent",
"update_cache": false,
"update_only": false,
"validate_certs": true
}
},
"msg": "",
"rc": 0,
"results": [
"Removed: lohit-gurmukhi-fonts-2.91.2-9.fc31.noarch",
"Removed: lohit-kannada-fonts-2.5.4-7.fc31.noarch",
"Removed: lohit-malayalam-fonts-2.92.2-8.fc31.noarch",
"Removed: lohit-marathi-fonts-2.94.2-9.fc31.noarch",
"Removed: lohit-nepali-fonts-2.94.2-8.fc31.noarch",
"Removed: lohit-odia-fonts-2.91.2-8.fc31.noarch",
"Removed: lohit-tamil-classical-fonts-2.5.3-12.fc31.noarch",
"Removed: lohit-tamil-fonts-2.91.3-8.fc31.noarch",
"Removed: lohit-telugu-fonts-2.5.5-7.fc31.noarch",
"Removed: lohit-assamese-fonts-2.91.5-8.fc31.noarch",
"Removed: lohit-bengali-fonts-2.91.5-8.fc31.noarch",
"Removed: lohit-devanagari-fonts-2.95.4-9.fc31.noarch",
"Removed: lohit-gujarati-fonts-2.92.4-8.fc31.noarch"
]
}
TASK [yum : Verify lohit-*-fonts (1st time)] ***********************************
task path: /root/ansible/test/results/.tmp/integration/yum-gmgubc6r-ÅÑŚÌβŁÈ/test/integration/targets/yum/tasks/yum.yml:708
ok: [testhost] => {
"changed": false,
"msg": "All assertions passed"
}
TASK [yum : Remove lohit-*-fonts (2nd time)] ***********************************
task path: /root/ansible/test/results/.tmp/integration/yum-gmgubc6r-ÅÑŚÌβŁÈ/test/integration/targets/yum/tasks/yum.yml:715
Running dnf as the backend for the yum action plugin
<testhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<testhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<testhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1575866363.8594182-120915247525225 `" && echo ansible-tmp-1575866363.8594182-120915247525225="` echo /root/.ansible/tmp/ansible-tmp-1575866363.8594182-120915247525225 `" ) && sleep 0'
Using module file /root/ansible/lib/ansible/modules/packaging/os/dnf.py
<testhost> PUT /root/.ansible/tmp/ansible-local-21431wvciwh04/tmppe580h5r TO /root/.ansible/tmp/ansible-tmp-1575866363.8594182-120915247525225/AnsiballZ_dnf.py
<testhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1575866363.8594182-120915247525225/ /root/.ansible/tmp/ansible-tmp-1575866363.8594182-120915247525225/AnsiballZ_dnf.py && sleep 0'
<testhost> EXEC /bin/sh -c '/tmp/python-ecn_n5k0-ansible/python /root/.ansible/tmp/ansible-tmp-1575866363.8594182-120915247525225/AnsiballZ_dnf.py && sleep 0'
<testhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1575866363.8594182-120915247525225/ > /dev/null 2>&1 && sleep 0'
fatal: [testhost]: FAILED! => {
"changed": false,
"failures": [
"lohit-*-fonts - No match for argument: lohit-*-fonts"
],
"invocation": {
"module_args": {
"allow_downgrade": false,
"autoremove": false,
"bugfix": false,
"conf_file": null,
"disable_excludes": null,
"disable_gpg_check": false,
"disable_plugin": [],
"disablerepo": [],
"download_dir": null,
"download_only": false,
"enable_plugin": [],
"enablerepo": [],
"exclude": [],
"install_repoquery": true,
"install_weak_deps": true,
"installroot": "/",
"list": null,
"lock_timeout": 30,
"name": [
"lohit-*-fonts"
],
"releasever": null,
"security": false,
"skip_broken": false,
"state": "absent",
"update_cache": false,
"update_only": false,
"validate_certs": true
}
},
"msg": "Failed to install some of the specified packages",
"rc": 1,
"results": []
}
```
|
https://github.com/ansible/ansible/issues/65674
|
https://github.com/ansible/ansible/pull/66218
|
02c126f5ee451a7dc1b442d15a883bc3f86505d1
|
22855f73fb5da10c9403cb5af4212a423973657f
| 2019-12-09T19:53:07Z |
python
| 2020-01-06T21:35:09Z |
test/integration/targets/yum/aliases
|
destructive
shippable/posix/group4
skip/freebsd
skip/osx
unstable
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,500 |
Exception when hcloud_server_info has image=null
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
I'm trying to use the `hcloud_server_info` module and I see the exception below.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
module `hcloud_server_info`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.9.0
config file = /home/andor/work/Enkora/ansible/ansible.cfg
configured module search path = ['/home/andor/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.3 (default, Apr 3 2019, 05:39:12) [GCC 8.3.0]
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- hosts: localhost
tasks:
- name: collect hosts from hcloud api
hcloud_server_info:
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```
The full traceback is:
Traceback (most recent call last):
File "/home/andor/.ansible/tmp/ansible-tmp-1573035298.4213414-249660851701701/AnsiballZ_hcloud_server_info.py", line 102,
in <module>
_ansiballz_main()
File "/home/andor/.ansible/tmp/ansible-tmp-1573035298.4213414-249660851701701/AnsiballZ_hcloud_server_info.py", line 94,
in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/andor/.ansible/tmp/ansible-tmp-1573035298.4213414-249660851701701/AnsiballZ_hcloud_server_info.py", line 40, in invoke_module
runpy.run_module(mod_name='ansible.modules.cloud.hcloud.hcloud_server_info', init_globals=None, run_name='__main__', alter_sys=False)
File "/usr/lib/python3.7/runpy.py", line 208, in run_module
return _run_code(code, {}, init_globals, run_name, mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmp/ansible_hcloud_server_info_payload_x1rpfqer/ansible_hcloud_server_info_payload.zip/ansible/modules/cloud/hcloud/hcloud_server_info.py", line 214, in <module>
File "/tmp/ansible_hcloud_server_info_payload_x1rpfqer/ansible_hcloud_server_info_payload.zip/ansible/modules/cloud/hcloud/hcloud_server_info.py", line 199, in main
File "/tmp/ansible_hcloud_server_info_payload_x1rpfqer/ansible_hcloud_server_info_payload.zip/ansible/module_utils/hcloud.py", line 62, in get_result
File "/tmp/ansible_hcloud_server_info_payload_x1rpfqer/ansible_hcloud_server_info_payload.zip/ansible/modules/cloud/hcloud/hcloud_server_info.py", line 146, in _prepare_result
AttributeError: 'NoneType' object has no attribute 'name'
```
I guess that's because the field `image` is `null` for my server:
```
andor@titude:~$ curl -s -H "Authorization: Bearer $HCLOUD_TOKEN" 'https://api.hetzner.cloud/v1/servers?name=test-hostname' | jq '.servers[0] | .server_type.name, .image, .datacenter.location.city'
"cx20-ceph"
null
"Nuremberg"
```
And the Ansible module `hcloud_server_info` [assumes](https://github.com/ansible/ansible/blob/86e4dcbaedba396691c731b89865b7d850e24dc6/lib/ansible/modules/cloud/hcloud/hcloud_server_info.py#L158) that this field is never null:
```
"image": to_native(server.image.name),
```
And at the same time in the Hetzner API Documentation:

|
https://github.com/ansible/ansible/issues/64500
|
https://github.com/ansible/ansible/pull/66234
|
cb46e5f06b1c0ffcd16a9a4013c451011f16421d
|
8743ff02c04c4bfc2ad56bbe679912cb2e6a6dbd
| 2019-11-06T12:02:54Z |
python
| 2020-01-07T10:20:40Z |
lib/ansible/modules/cloud/hcloud/hcloud_server.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Hetzner Cloud GmbH <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
"metadata_version": "1.1",
"status": ["preview"],
"supported_by": "community",
}
DOCUMENTATION = """
---
module: hcloud_server
short_description: Create and manage cloud servers on the Hetzner Cloud.
version_added: "2.8"
description:
- Create, update and manage cloud servers on the Hetzner Cloud.
author:
- Lukas Kaemmerling (@LKaemmerling)
options:
id:
description:
- The ID of the Hetzner Cloud server to manage.
- Only required if no server I(name) is given
type: int
name:
description:
- The Name of the Hetzner Cloud server to manage.
- Only required if no server I(id) is given or the server does not exist.
type: str
server_type:
description:
- The Server Type of the Hetzner Cloud server to manage.
- Required if the server does not exist.
type: str
ssh_keys:
description:
- List of SSH key names
- The key names correspond to the SSH keys configured for your
Hetzner Cloud account access.
type: list
volumes:
description:
- List of Volumes IDs that should be attached to the server on server creation.
type: list
image:
description:
- Image the server should be created from.
- Required if the server does not exist.
type: str
location:
description:
- Location of Server.
- Required if no I(datacenter) is given and the server does not exist.
type: str
datacenter:
description:
- Datacenter of Server.
- Required if no I(location) is given and the server does not exist.
type: str
backups:
description:
- Enable or disable Backups for the given Server.
type: bool
default: no
upgrade_disk:
description:
- Resize the disk when resizing the server.
- If you want to downgrade the server later, this value should be False.
type: bool
default: no
force_upgrade:
description:
- Force the upgrade of the server.
- Power off the server during the upgrade if it is running.
type: bool
default: no
user_data:
description:
- User Data to be passed to the server on creation.
- Only used if the server does not exist.
type: str
rescue_mode:
description:
- Add the Hetzner rescue system type you want the server to be booted into.
type: str
version_added: 2.9
labels:
description:
- User-defined labels (key-value pairs).
type: dict
delete_protection:
description:
- Protect the Server for deletion.
- Needs to be the same as I(rebuild_protection).
type: bool
version_added: "2.10"
rebuild_protection:
description:
- Protect the Server for rebuild.
- Needs to be the same as I(delete_protection).
type: bool
version_added: "2.10"
state:
description:
- State of the server.
default: present
choices: [ absent, present, restarted, started, stopped, rebuild ]
type: str
extends_documentation_fragment: hcloud
"""
EXAMPLES = """
- name: Create a basic server
hcloud_server:
name: my-server
server_type: cx11
image: ubuntu-18.04
state: present
- name: Create a basic server with ssh key
hcloud_server:
name: my-server
server_type: cx11
image: ubuntu-18.04
location: fsn1
ssh_keys:
- me@myorganisation
state: present
- name: Resize an existing server
hcloud_server:
name: my-server
server_type: cx21
upgrade_disk: yes
state: present
- name: Ensure the server is absent (remove if needed)
hcloud_server:
name: my-server
state: absent
- name: Ensure the server is started
hcloud_server:
name: my-server
state: started
- name: Ensure the server is stopped
hcloud_server:
name: my-server
state: stopped
- name: Ensure the server is restarted
hcloud_server:
name: my-server
state: restarted
- name: Ensure the server will be booted into rescue mode and therefore restarted
hcloud_server:
name: my-server
rescue_mode: linux64
state: restarted
- name: Ensure the server is rebuilt
hcloud_server:
name: my-server
image: ubuntu-18.04
state: rebuild
"""
RETURN = """
hcloud_server:
description: The server instance
returned: Always
type: complex
contains:
id:
description: Numeric identifier of the server
returned: always
type: int
sample: 1937415
name:
description: Name of the server
returned: always
type: str
sample: my-server
status:
description: Status of the server
returned: always
type: str
sample: running
server_type:
description: Name of the server type of the server
returned: always
type: str
sample: cx11
ipv4_address:
description: Public IPv4 address of the server
returned: always
type: str
sample: 116.203.104.109
ipv6:
description: IPv6 network of the server
returned: always
type: str
sample: 2a01:4f8:1c1c:c140::/64
location:
description: Name of the location of the server
returned: always
type: str
sample: fsn1
datacenter:
description: Name of the datacenter of the server
returned: always
type: str
sample: fsn1-dc14
rescue_enabled:
description: True if rescue mode is enabled; the server will then boot into the rescue system on next reboot
returned: always
type: bool
sample: false
backup_window:
description: Time window (UTC) in which the backup will run, or null if the backups are not enabled
returned: always
type: str
sample: 22-02
labels:
description: User-defined labels (key-value pairs)
returned: always
type: dict
delete_protection:
description: True if the server is protected against deletion
type: bool
returned: always
sample: false
version_added: "2.10"
rebuild_protection:
description: True if the server is protected against rebuild
type: bool
returned: always
sample: false
version_added: "2.10"
"""
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_native
from ansible.module_utils.hcloud import Hcloud
try:
from hcloud.volumes.domain import Volume
from hcloud.ssh_keys.domain import SSHKey
from hcloud.servers.domain import Server
from hcloud import APIException
except ImportError:
pass
class AnsibleHcloudServer(Hcloud):
def __init__(self, module):
Hcloud.__init__(self, module, "hcloud_server")
self.hcloud_server = None
def _prepare_result(self):
return {
"id": to_native(self.hcloud_server.id),
"name": to_native(self.hcloud_server.name),
"ipv4_address": to_native(self.hcloud_server.public_net.ipv4.ip),
"ipv6": to_native(self.hcloud_server.public_net.ipv6.ip),
"image": to_native(self.hcloud_server.image.name),
"server_type": to_native(self.hcloud_server.server_type.name),
"datacenter": to_native(self.hcloud_server.datacenter.name),
"location": to_native(self.hcloud_server.datacenter.location.name),
"rescue_enabled": self.hcloud_server.rescue_enabled,
"backup_window": to_native(self.hcloud_server.backup_window),
"labels": self.hcloud_server.labels,
"delete_protection": self.hcloud_server.protection["delete"],
"rebuild_protection": self.hcloud_server.protection["rebuild"],
"status": to_native(self.hcloud_server.status),
}
def _get_server(self):
try:
if self.module.params.get("id") is not None:
self.hcloud_server = self.client.servers.get_by_id(
self.module.params.get("id")
)
else:
self.hcloud_server = self.client.servers.get_by_name(
self.module.params.get("name")
)
except APIException as e:
self.module.fail_json(msg=e.message)
def _create_server(self):
self.module.fail_on_missing_params(
required_params=["name", "server_type", "image"]
)
params = {
"name": self.module.params.get("name"),
"server_type": self.client.server_types.get_by_name(
self.module.params.get("server_type")
),
"user_data": self.module.params.get("user_data"),
"labels": self.module.params.get("labels"),
}
if self.client.images.get_by_name(self.module.params.get("image")) is not None:
# When the image name is not available, look it up by ID instead
params["image"] = self.client.images.get_by_name(self.module.params.get("image"))
else:
params["image"] = self.client.images.get_by_id(self.module.params.get("image"))
if self.module.params.get("ssh_keys") is not None:
params["ssh_keys"] = [
SSHKey(name=ssh_key_name)
for ssh_key_name in self.module.params.get("ssh_keys")
]
if self.module.params.get("volumes") is not None:
params["volumes"] = [
Volume(id=volume_id) for volume_id in self.module.params.get("volumes")
]
if self.module.params.get("location") is None and self.module.params.get("datacenter") is None:
# When not given, the API will choose the location.
params["location"] = None
params["datacenter"] = None
elif self.module.params.get("location") is not None and self.module.params.get("datacenter") is None:
params["location"] = self.client.locations.get_by_name(
self.module.params.get("location")
)
elif self.module.params.get("location") is None and self.module.params.get("datacenter") is not None:
params["datacenter"] = self.client.datacenters.get_by_name(
self.module.params.get("datacenter")
)
if not self.module.check_mode:
resp = self.client.servers.create(**params)
self.result["root_password"] = resp.root_password
resp.action.wait_until_finished(max_retries=1000)
[action.wait_until_finished() for action in resp.next_actions]
rescue_mode = self.module.params.get("rescue_mode")
if rescue_mode:
self._get_server()
self._set_rescue_mode(rescue_mode)
self._mark_as_changed()
self._get_server()
def _update_server(self):
try:
rescue_mode = self.module.params.get("rescue_mode")
if rescue_mode and self.hcloud_server.rescue_enabled is False:
if not self.module.check_mode:
self._set_rescue_mode(rescue_mode)
self._mark_as_changed()
elif not rescue_mode and self.hcloud_server.rescue_enabled is True:
if not self.module.check_mode:
self.hcloud_server.disable_rescue().wait_until_finished()
self._mark_as_changed()
if self.module.params.get("backups") and self.hcloud_server.backup_window is None:
if not self.module.check_mode:
self.hcloud_server.enable_backup().wait_until_finished()
self._mark_as_changed()
elif not self.module.params.get("backups") and self.hcloud_server.backup_window is not None:
if not self.module.check_mode:
self.hcloud_server.disable_backup().wait_until_finished()
self._mark_as_changed()
labels = self.module.params.get("labels")
if labels is not None and labels != self.hcloud_server.labels:
if not self.module.check_mode:
self.hcloud_server.update(labels=labels)
self._mark_as_changed()
server_type = self.module.params.get("server_type")
if server_type is not None and self.hcloud_server.server_type.name != server_type:
previous_server_status = self.hcloud_server.status
state = self.module.params.get("state")
if previous_server_status == Server.STATUS_RUNNING:
if not self.module.check_mode:
if self.module.params.get("force_upgrade") or state == "stopped":
self.stop_server() # Only stopped server can be upgraded
else:
self.module.warn(
"You can not upgrade a running instance %s. You need to stop the instance or use force_upgrade=yes."
% self.hcloud_server.name
)
timeout = 100
if self.module.params.get("upgrade_disk"):
timeout = 1000  # When we also upgrade the disk, the resize process takes more time.
if not self.module.check_mode:
self.hcloud_server.change_type(
server_type=self.client.server_types.get_by_name(server_type),
upgrade_disk=self.module.params.get("upgrade_disk"),
).wait_until_finished(timeout)
if state == "present" and previous_server_status == Server.STATUS_RUNNING or state == "started":
self.start_server()
self._mark_as_changed()
delete_protection = self.module.params.get("delete_protection")
rebuild_protection = self.module.params.get("rebuild_protection")
if (delete_protection is not None and rebuild_protection is not None) and (
delete_protection != self.hcloud_server.protection["delete"] or rebuild_protection !=
self.hcloud_server.protection["rebuild"]):
if not self.module.check_mode:
self.hcloud_server.change_protection(delete=delete_protection,
rebuild=rebuild_protection).wait_until_finished()
self._mark_as_changed()
self._get_server()
except APIException as e:
self.module.fail_json(msg=e.message)
def _set_rescue_mode(self, rescue_mode):
if self.module.params.get("ssh_keys"):
resp = self.hcloud_server.enable_rescue(type=rescue_mode,
ssh_keys=[self.client.ssh_keys.get_by_name(ssh_key_name).id
for
ssh_key_name in
self.module.params.get("ssh_keys")])
else:
resp = self.hcloud_server.enable_rescue(type=rescue_mode)
resp.action.wait_until_finished()
self.result["root_password"] = resp.root_password
def start_server(self):
try:
if self.hcloud_server.status != Server.STATUS_RUNNING:
if not self.module.check_mode:
self.client.servers.power_on(self.hcloud_server).wait_until_finished()
self._mark_as_changed()
self._get_server()
except APIException as e:
self.module.fail_json(msg=e.message)
def stop_server(self):
try:
if self.hcloud_server.status != Server.STATUS_OFF:
if not self.module.check_mode:
self.client.servers.power_off(self.hcloud_server).wait_until_finished()
self._mark_as_changed()
self._get_server()
except APIException as e:
self.module.fail_json(msg=e.message)
def rebuild_server(self):
self.module.fail_on_missing_params(
required_params=["image"]
)
try:
if not self.module.check_mode:
self.client.servers.rebuild(self.hcloud_server, self.client.images.get_by_name(
self.module.params.get("image"))).wait_until_finished()
self._mark_as_changed()
self._get_server()
except APIException as e:
self.module.fail_json(msg=e.message)
def present_server(self):
self._get_server()
if self.hcloud_server is None:
self._create_server()
else:
self._update_server()
def delete_server(self):
try:
self._get_server()
if self.hcloud_server is not None:
if not self.module.check_mode:
self.client.servers.delete(self.hcloud_server).wait_until_finished()
self._mark_as_changed()
self.hcloud_server = None
except APIException as e:
self.module.fail_json(msg=e.message)
@staticmethod
def define_module():
return AnsibleModule(
argument_spec=dict(
id={"type": "int"},
name={"type": "str"},
image={"type": "str"},
server_type={"type": "str"},
location={"type": "str"},
datacenter={"type": "str"},
user_data={"type": "str"},
ssh_keys={"type": "list"},
volumes={"type": "list"},
labels={"type": "dict"},
backups={"type": "bool", "default": False},
upgrade_disk={"type": "bool", "default": False},
force_upgrade={"type": "bool", "default": False},
rescue_mode={"type": "str"},
delete_protection={"type": "bool"},
rebuild_protection={"type": "bool"},
state={
"choices": ["absent", "present", "restarted", "started", "stopped", "rebuild"],
"default": "present",
},
**Hcloud.base_module_arguments()
),
required_one_of=[['id', 'name']],
mutually_exclusive=[["location", "datacenter"]],
required_together=[["delete_protection", "rebuild_protection"]],
supports_check_mode=True,
)
def main():
module = AnsibleHcloudServer.define_module()
hcloud = AnsibleHcloudServer(module)
state = module.params.get("state")
if state == "absent":
hcloud.delete_server()
elif state == "present":
hcloud.present_server()
elif state == "started":
hcloud.present_server()
hcloud.start_server()
elif state == "stopped":
hcloud.present_server()
hcloud.stop_server()
elif state == "restarted":
hcloud.present_server()
hcloud.stop_server()
hcloud.start_server()
elif state == "rebuild":
hcloud.present_server()
hcloud.rebuild_server()
module.exit_json(**hcloud.get_result())
if __name__ == "__main__":
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,500 |
Exception when hcloud_server_info has image=null
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
I'm trying to use the `hcloud_server_info` module and I see the exception below.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
module `hcloud_server_info`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.9.0
config file = /home/andor/work/Enkora/ansible/ansible.cfg
configured module search path = ['/home/andor/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.3 (default, Apr 3 2019, 05:39:12) [GCC 8.3.0]
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- hosts: localhost
tasks:
- name: collect hosts from hcloud api
hcloud_server_info:
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```
The full traceback is:
Traceback (most recent call last):
File "/home/andor/.ansible/tmp/ansible-tmp-1573035298.4213414-249660851701701/AnsiballZ_hcloud_server_info.py", line 102,
in <module>
_ansiballz_main()
File "/home/andor/.ansible/tmp/ansible-tmp-1573035298.4213414-249660851701701/AnsiballZ_hcloud_server_info.py", line 94,
in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/andor/.ansible/tmp/ansible-tmp-1573035298.4213414-249660851701701/AnsiballZ_hcloud_server_info.py", line 40, in invoke_module
runpy.run_module(mod_name='ansible.modules.cloud.hcloud.hcloud_server_info', init_globals=None, run_name='__main__', alter_sys=False)
File "/usr/lib/python3.7/runpy.py", line 208, in run_module
return _run_code(code, {}, init_globals, run_name, mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmp/ansible_hcloud_server_info_payload_x1rpfqer/ansible_hcloud_server_info_payload.zip/ansible/modules/cloud/hcloud/hcloud_server_info.py", line 214, in <module>
File "/tmp/ansible_hcloud_server_info_payload_x1rpfqer/ansible_hcloud_server_info_payload.zip/ansible/modules/cloud/hcloud/hcloud_server_info.py", line 199, in main
File "/tmp/ansible_hcloud_server_info_payload_x1rpfqer/ansible_hcloud_server_info_payload.zip/ansible/module_utils/hcloud.py", line 62, in get_result
File "/tmp/ansible_hcloud_server_info_payload_x1rpfqer/ansible_hcloud_server_info_payload.zip/ansible/modules/cloud/hcloud/hcloud_server_info.py", line 146, in _prepare_result
AttributeError: 'NoneType' object has no attribute 'name'
```
I guess that's because the field `image` is `null` for my server:
```
andor@titude:~$ curl -s -H "Authorization: Bearer $HCLOUD_TOKEN" 'https://api.hetzner.cloud/v1/servers?name=test-hostname' | jq '.servers[0] | .server_type.name, .image, .datacenter.location.city'
"cx20-ceph"
null
"Nuremberg"
```
And the Ansible module `hcloud_server_info` [assumes](https://github.com/ansible/ansible/blob/86e4dcbaedba396691c731b89865b7d850e24dc6/lib/ansible/modules/cloud/hcloud/hcloud_server_info.py#L158) that this field is never null:
```
"image": to_native(server.image.name),
```
And at the same time in the Hetzner API Documentation:

|
https://github.com/ansible/ansible/issues/64500
|
https://github.com/ansible/ansible/pull/66234
|
cb46e5f06b1c0ffcd16a9a4013c451011f16421d
|
8743ff02c04c4bfc2ad56bbe679912cb2e6a6dbd
| 2019-11-06T12:02:54Z |
python
| 2020-01-07T10:20:40Z |
lib/ansible/modules/cloud/hcloud/hcloud_server_info.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Hetzner Cloud GmbH <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
"metadata_version": "1.1",
"status": ["preview"],
"supported_by": "community",
}
DOCUMENTATION = """
---
module: hcloud_server_info
short_description: Gather infos about your Hetzner Cloud servers.
version_added: "2.8"
description:
- Gather infos about your Hetzner Cloud servers.
- This module was called C(hcloud_server_facts) before Ansible 2.9, returning C(ansible_facts) and C(hcloud_server_facts).
Note that the M(hcloud_server_info) module no longer returns C(ansible_facts) and the value was renamed to C(hcloud_server_info)!
author:
- Lukas Kaemmerling (@LKaemmerling)
options:
id:
description:
- The ID of the server you want to get.
type: int
name:
description:
- The name of the server you want to get.
type: str
label_selector:
description:
- The label selector for the server you want to get.
type: str
extends_documentation_fragment: hcloud
"""
EXAMPLES = """
- name: Gather hcloud server infos
hcloud_server_info:
register: output
- name: Print the gathered infos
debug:
var: output.hcloud_server_info
"""
RETURN = """
hcloud_server_info:
description: The server infos as list
returned: always
type: complex
contains:
id:
description: Numeric identifier of the server
returned: always
type: int
sample: 1937415
name:
description: Name of the server
returned: always
type: str
sample: my-server
status:
description: Status of the server
returned: always
type: str
sample: running
server_type:
description: Name of the server type of the server
returned: always
type: str
sample: cx11
ipv4_address:
description: Public IPv4 address of the server
returned: always
type: str
sample: 116.203.104.109
ipv6:
description: IPv6 network of the server
returned: always
type: str
sample: 2a01:4f8:1c1c:c140::/64
location:
description: Name of the location of the server
returned: always
type: str
sample: fsn1
datacenter:
description: Name of the datacenter of the server
returned: always
type: str
sample: fsn1-dc14
rescue_enabled:
description: True if rescue mode is enabled; the server will then boot into the rescue system on next reboot
returned: always
type: bool
sample: false
backup_window:
description: Time window (UTC) in which the backup will run, or null if the backups are not enabled
returned: always
type: str
sample: 22-02
labels:
description: User-defined labels (key-value pairs)
returned: always
type: dict
delete_protection:
description: True if the server is protected against deletion
type: bool
returned: always
sample: false
version_added: "2.10"
rebuild_protection:
description: True if the server is protected against rebuild
type: bool
returned: always
sample: false
version_added: "2.10"
"""
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_native
from ansible.module_utils.hcloud import Hcloud
try:
from hcloud import APIException
except ImportError:
pass
class AnsibleHcloudServerInfo(Hcloud):
def __init__(self, module):
Hcloud.__init__(self, module, "hcloud_server_info")
self.hcloud_server_info = None
def _prepare_result(self):
tmp = []
for server in self.hcloud_server_info:
if server is not None:
tmp.append({
"id": to_native(server.id),
"name": to_native(server.name),
"ipv4_address": to_native(server.public_net.ipv4.ip),
"ipv6": to_native(server.public_net.ipv6.ip),
"image": to_native(server.image.name),
"server_type": to_native(server.server_type.name),
"datacenter": to_native(server.datacenter.name),
"location": to_native(server.datacenter.location.name),
"rescue_enabled": server.rescue_enabled,
"backup_window": to_native(server.backup_window),
"labels": server.labels,
"status": to_native(server.status),
"delete_protection": server.protection["delete"],
"rebuild_protection": server.protection["rebuild"],
})
return tmp
def get_servers(self):
try:
if self.module.params.get("id") is not None:
self.hcloud_server_info = [self.client.servers.get_by_id(
self.module.params.get("id")
)]
elif self.module.params.get("name") is not None:
self.hcloud_server_info = [self.client.servers.get_by_name(
self.module.params.get("name")
)]
elif self.module.params.get("label_selector") is not None:
self.hcloud_server_info = self.client.servers.get_all(
label_selector=self.module.params.get("label_selector"))
else:
self.hcloud_server_info = self.client.servers.get_all()
except APIException as e:
self.module.fail_json(msg=e.message)
@staticmethod
def define_module():
return AnsibleModule(
argument_spec=dict(
id={"type": "int"},
name={"type": "str"},
label_selector={"type": "str"},
**Hcloud.base_module_arguments()
),
supports_check_mode=True,
)
def main():
module = AnsibleHcloudServerInfo.define_module()
is_old_facts = module._name == 'hcloud_server_facts'
if is_old_facts:
module.deprecate("The 'hcloud_server_facts' module has been renamed to 'hcloud_server_info', "
"and the renamed one no longer returns ansible_facts", version='2.13')
hcloud = AnsibleHcloudServerInfo(module)
hcloud.get_servers()
result = hcloud.get_result()
if is_old_facts:
ansible_info = {
'hcloud_server_facts': result['hcloud_server_info']
}
module.exit_json(ansible_facts=ansible_info)
else:
ansible_info = {
'hcloud_server_info': result['hcloud_server_info']
}
module.exit_json(**ansible_info)
if __name__ == "__main__":
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,722 |
Difference in how we handle templating vault variable keys in v2.8 and v2.9
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
I can see a difference in how templating is handled between 2.9 and 2.8. When we run the same playbook on 2.9.2, it does not show the value of "testvar" in the following result, whereas 2.8.4 does:
```
ok: [localhost] => {
"big_test_list1": [
{
"key": "test2_{{ testvar }}_password",
"value": "test"
}
]
}
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
**Ansible** : 2.8.x
**Ansible** : 2.9.x
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
RHEL 8
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
Playbook.yml on 2.9.2
- name: TEST
gather_facts: False
hosts: localhost
vars:
testvar: "vars"
# vault password is: test
testdict2:
"test2_{{ testvar }}_password": !vault |
$ANSIBLE_VAULT;1.2;AES256;test
38636163353333363936333133633334646465393136613037626530663664326665613835326466
6130633637383730613462353037613139323232373762630a616237356230333163666632303262
32313232616539333664633364636562626539393439333539316236623161386163386533613063
6466396661323462620a396565646637616264316338636333343738653831383137643463653830
6433
tasks:
- debug:
var: testdict2
- set_fact:
big_test_list1: "{{ big_test_list1 | default([]) + item }}"
loop:
- "{{ testdict2 | dict2items }}"
- debug:
var: big_test_list1
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Playbook should show results as follows
```
**On 2.8.4**
Vault password:
PLAY [TEST] *******************************************************************************************************************************************************************************************************
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"testdict2": {
"test2_{{ testvar }}_password": "test"
}
}
TASK [set_fact] ***************************************************************************************************************************************************************************************************
ok: [localhost] => (item=[{'key': 'test2_vars_password', 'value': 'test'}])
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"big_test_list1": [
{
"key": "test2_vars_password",
"value": "test"
}
]
}
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
**On 2.9.2**
Vault password:
PLAY [TEST] *******************************************************************************************************************************************************************************************************
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"testdict2": {
"test2_{{ testvar }}_password": "test"
}
}
TASK [set_fact] ***************************************************************************************************************************************************************************************************
ok: [localhost] => (item=[{'key': u'test2_{{ testvar }}_password', 'value': u'test'}])
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"big_test_list1": [
{
"key": "test2_{{ testvar }}_password",
"value": "test"
}
]
}
```
|
https://github.com/ansible/ansible/issues/65722
|
https://github.com/ansible/ansible/pull/65918
|
f9e315671a61d3fae93ef816c9d5793c9fc2267c
|
f8654de851a841d33d2d2e76942c68a758535e56
| 2019-12-11T10:36:53Z |
python
| 2020-01-07T14:41:37Z |
changelogs/fragments/65722-unsafe-tuples.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,722 |
Difference in how we handle templating vault variable keys in v2.8 and v2.9
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
I can see a difference in how templating is handled between 2.9 and 2.8. When we run the same playbook on 2.9.2, it does not show the value of "testvar" in the following result, whereas 2.8.4 does:
```
ok: [localhost] => {
"big_test_list1": [
{
"key": "test2_{{ testvar }}_password",
"value": "test"
}
]
}
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
**Ansible** : 2.8.x
**Ansible** : 2.9.x
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
RHEL 8
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
Playbook.yml on 2.9.2
- name: TEST
gather_facts: False
hosts: localhost
vars:
testvar: "vars"
# vault password is: test
testdict2:
"test2_{{ testvar }}_password": !vault |
$ANSIBLE_VAULT;1.2;AES256;test
38636163353333363936333133633334646465393136613037626530663664326665613835326466
6130633637383730613462353037613139323232373762630a616237356230333163666632303262
32313232616539333664633364636562626539393439333539316236623161386163386533613063
6466396661323462620a396565646637616264316338636333343738653831383137643463653830
6433
tasks:
- debug:
var: testdict2
- set_fact:
big_test_list1: "{{ big_test_list1 | default([]) + item }}"
loop:
- "{{ testdict2 | dict2items }}"
- debug:
var: big_test_list1
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Playbook should show results as follows
```
**On 2.8.4**
Vault password:
PLAY [TEST] *******************************************************************************************************************************************************************************************************
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"testdict2": {
"test2_{{ testvar }}_password": "test"
}
}
TASK [set_fact] ***************************************************************************************************************************************************************************************************
ok: [localhost] => (item=[{'key': 'test2_vars_password', 'value': 'test'}])
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"big_test_list1": [
{
"key": "test2_vars_password",
"value": "test"
}
]
}
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
**On 2.9.2**
Vault password:
PLAY [TEST] *******************************************************************************************************************************************************************************************************
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"testdict2": {
"test2_{{ testvar }}_password": "test"
}
}
TASK [set_fact] ***************************************************************************************************************************************************************************************************
ok: [localhost] => (item=[{'key': u'test2_{{ testvar }}_password', 'value': u'test'}])
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"big_test_list1": [
{
"key": "test2_{{ testvar }}_password",
"value": "test"
}
]
}
```
|
https://github.com/ansible/ansible/issues/65722
|
https://github.com/ansible/ansible/pull/65918
|
f9e315671a61d3fae93ef816c9d5793c9fc2267c
|
f8654de851a841d33d2d2e76942c68a758535e56
| 2019-12-11T10:36:53Z |
python
| 2020-01-07T14:41:37Z |
lib/ansible/executor/task_executor.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import re
import pty
import time
import json
import subprocess
import sys
import termios
import traceback
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleParserError, AnsibleUndefinedVariable, AnsibleConnectionFailure, AnsibleActionFail, AnsibleActionSkip
from ansible.executor.task_result import TaskResult
from ansible.executor.module_common import get_action_args_with_defaults
from ansible.module_utils.six import iteritems, string_types, binary_type
from ansible.module_utils.six.moves import xrange
from ansible.module_utils._text import to_text, to_native
from ansible.module_utils.connection import write_to_file_descriptor
from ansible.playbook.conditional import Conditional
from ansible.playbook.task import Task
from ansible.plugins.loader import become_loader, cliconf_loader, connection_loader, httpapi_loader, netconf_loader, terminal_loader
from ansible.template import Templar
from ansible.utils.collection_loader import AnsibleCollectionLoader
from ansible.utils.listify import listify_lookup_plugin_terms
from ansible.utils.unsafe_proxy import AnsibleUnsafe, to_unsafe_text, wrap_var
from ansible.vars.clean import namespace_facts, clean_facts
from ansible.utils.display import Display
from ansible.utils.vars import combine_vars, isidentifier
display = Display()
__all__ = ['TaskExecutor']
def remove_omit(task_args, omit_token):
'''
Remove args with a value equal to the ``omit_token`` recursively
to align with now having suboptions in the argument_spec
'''
if not isinstance(task_args, dict):
return task_args
new_args = {}
for i in iteritems(task_args):
if i[1] == omit_token:
continue
elif isinstance(i[1], dict):
new_args[i[0]] = remove_omit(i[1], omit_token)
elif isinstance(i[1], list):
new_args[i[0]] = [remove_omit(v, omit_token) for v in i[1]]
else:
new_args[i[0]] = i[1]
return new_args
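# Illustration (not part of the original source): with omit_token 'OMIT',
#   remove_omit({'a': 'OMIT', 'b': {'c': 'OMIT', 'd': 1}, 'e': ['OMIT', 2]}, 'OMIT')
# returns {'b': {'d': 1}, 'e': ['OMIT', 2]} -- dict values equal to the token are
# dropped recursively, while list elements are only recursed into, not dropped.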
class TaskExecutor:
'''
This is the main worker class for the executor pipeline, which
handles loading an action plugin to actually dispatch the task to
a given host. This class roughly corresponds to the old Runner()
class.
'''
# Modules that we optimize by squashing loop items into a single call to
# the module
SQUASH_ACTIONS = frozenset(C.DEFAULT_SQUASH_ACTIONS)
def __init__(self, host, task, job_vars, play_context, new_stdin, loader, shared_loader_obj, final_q):
self._host = host
self._task = task
self._job_vars = job_vars
self._play_context = play_context
self._new_stdin = new_stdin
self._loader = loader
self._shared_loader_obj = shared_loader_obj
self._connection = None
self._final_q = final_q
self._loop_eval_error = None
self._task.squash()
def run(self):
'''
The main executor entrypoint, where we determine if the specified
task requires looping and either runs the task with self._run_loop()
or self._execute(). After that, the returned results are parsed and
returned as a dict.
'''
display.debug("in run() - task %s" % self._task._uuid)
try:
try:
items = self._get_loop_items()
except AnsibleUndefinedVariable as e:
# save the error raised here for use later
items = None
self._loop_eval_error = e
if items is not None:
if len(items) > 0:
item_results = self._run_loop(items)
# create the overall result item
res = dict(results=item_results)
# loop through the item results, and set the global changed/failed result flags based on any item.
for item in item_results:
if 'changed' in item and item['changed'] and not res.get('changed'):
res['changed'] = True
if 'failed' in item and item['failed']:
item_ignore = item.pop('_ansible_ignore_errors')
if not res.get('failed'):
res['failed'] = True
res['msg'] = 'One or more items failed'
self._task.ignore_errors = item_ignore
elif self._task.ignore_errors and not item_ignore:
self._task.ignore_errors = item_ignore
# ensure to accumulate these
for array in ['warnings', 'deprecations']:
if array in item and item[array]:
if array not in res:
res[array] = []
if not isinstance(item[array], list):
item[array] = [item[array]]
res[array] = res[array] + item[array]
del item[array]
if not res.get('failed', False):
res['msg'] = 'All items completed'
else:
res = dict(changed=False, skipped=True, skipped_reason='No items in the list', results=[])
else:
display.debug("calling self._execute()")
res = self._execute()
display.debug("_execute() done")
# make sure changed is set in the result, if it's not present
if 'changed' not in res:
res['changed'] = False
def _clean_res(res, errors='surrogate_or_strict'):
if isinstance(res, binary_type):
return to_unsafe_text(res, errors=errors)
elif isinstance(res, dict):
for k in res:
try:
res[k] = _clean_res(res[k], errors=errors)
except UnicodeError:
if k == 'diff':
# If this is a diff, substitute a replacement character if the value
# is undecodable as utf8. (Fix #21804)
display.warning("We were unable to decode all characters in the module return data."
" Replaced some in an effort to return as much as possible")
res[k] = _clean_res(res[k], errors='surrogate_then_replace')
else:
raise
elif isinstance(res, list):
for idx, item in enumerate(res):
res[idx] = _clean_res(item, errors=errors)
return res
display.debug("dumping result to json")
res = _clean_res(res)
display.debug("done dumping result, returning")
return res
except AnsibleError as e:
return dict(failed=True, msg=wrap_var(to_text(e, nonstring='simplerepr')), _ansible_no_log=self._play_context.no_log)
except Exception as e:
return dict(failed=True, msg='Unexpected failure during module execution.', exception=to_text(traceback.format_exc()),
stdout='', _ansible_no_log=self._play_context.no_log)
finally:
try:
self._connection.close()
except AttributeError:
pass
except Exception as e:
display.debug(u"error closing connection: %s" % to_text(e))
def _get_loop_items(self):
'''
Loads a lookup plugin to handle the with_* portion of a task (if specified),
and returns the items result.
'''
# save the play context variables to a temporary dictionary,
# so that we can modify the job vars without doing a full copy
# and later restore them to avoid modifying things too early
play_context_vars = dict()
self._play_context.update_vars(play_context_vars)
old_vars = dict()
for k in play_context_vars:
if k in self._job_vars:
old_vars[k] = self._job_vars[k]
self._job_vars[k] = play_context_vars[k]
# get search path for this task to pass to lookup plugins
self._job_vars['ansible_search_path'] = self._task.get_search_path()
# ensure basedir is always in (dwim already searches here but we need to display it)
if self._loader.get_basedir() not in self._job_vars['ansible_search_path']:
self._job_vars['ansible_search_path'].append(self._loader.get_basedir())
templar = Templar(loader=self._loader, shared_loader_obj=self._shared_loader_obj, variables=self._job_vars)
items = None
loop_cache = self._job_vars.get('_ansible_loop_cache')
if loop_cache is not None:
# _ansible_loop_cache may be set in `get_vars` when calculating `delegate_to`
# to avoid reprocessing the loop
items = loop_cache
elif self._task.loop_with:
if self._task.loop_with in self._shared_loader_obj.lookup_loader:
fail = True
if self._task.loop_with == 'first_found':
# first_found loops are special. If the item is undefined then we want to fall through to the next value rather than failing.
fail = False
loop_terms = listify_lookup_plugin_terms(terms=self._task.loop, templar=templar, loader=self._loader, fail_on_undefined=fail,
convert_bare=False)
if not fail:
loop_terms = [t for t in loop_terms if not templar.is_template(t)]
# get lookup
mylookup = self._shared_loader_obj.lookup_loader.get(self._task.loop_with, loader=self._loader, templar=templar)
# give lookup task 'context' for subdir (mostly needed for first_found)
for subdir in ['template', 'var', 'file']: # TODO: move this to constants?
if subdir in self._task.action:
break
setattr(mylookup, '_subdir', subdir + 's')
# run lookup
items = wrap_var(mylookup.run(terms=loop_terms, variables=self._job_vars, wantlist=True))
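# Note (comment added for this writeup, not in the original source): wrapping
# the lookup results as unsafe means any '{{ ... }}' markers inside the items,
# including dict keys, are skipped by the templar later on -- the behaviour
# discussed in issue #65722 above.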
else:
raise AnsibleError("Unexpected failure in finding the lookup named '%s' in the available lookup plugins" % self._task.loop_with)
elif self._task.loop is not None:
items = templar.template(self._task.loop)
if not isinstance(items, list):
raise AnsibleError(
"Invalid data passed to 'loop', it requires a list, got this instead: %s."
" Hint: If you passed a list/dict of just one element,"
" try adding wantlist=True to your lookup invocation or use q/query instead of lookup." % items
)
# now we restore any old job variables that may have been modified,
# and delete them if they were in the play context vars but not in
# the old variables dictionary
for k in play_context_vars:
if k in old_vars:
self._job_vars[k] = old_vars[k]
else:
del self._job_vars[k]
return items
def _run_loop(self, items):
'''
Runs the task with the loop items specified and collates the result
into an array named 'results' which is inserted into the final result
along with the item for which the loop ran.
'''
results = []
# make copies of the job vars and task so we can add the item to
# the variables and re-validate the task with the item variable
# task_vars = self._job_vars.copy()
task_vars = self._job_vars
loop_var = 'item'
index_var = None
label = None
loop_pause = 0
extended = False
templar = Templar(loader=self._loader, shared_loader_obj=self._shared_loader_obj, variables=self._job_vars)
# FIXME: move this to the object itself to allow post_validate to take care of templating (loop_control.post_validate)
if self._task.loop_control:
loop_var = templar.template(self._task.loop_control.loop_var)
index_var = templar.template(self._task.loop_control.index_var)
loop_pause = templar.template(self._task.loop_control.pause)
extended = templar.template(self._task.loop_control.extended)
# This may be 'None', so it is templated below after we ensure a value and an item is assigned
label = self._task.loop_control.label
# ensure we always have a label
if label is None:
label = '{{' + loop_var + '}}'
if loop_var in task_vars:
display.warning(u"The loop variable '%s' is already in use. "
u"You should set the `loop_var` value in the `loop_control` option for the task"
u" to something else to avoid variable collisions and unexpected behavior." % loop_var)
ran_once = False
if self._task.loop_with:
# Only squash with 'with_:' not with the 'loop:', 'magic' squashing can be removed once with_ loops are deprecated
items = self._squash_items(items, loop_var, task_vars)
no_log = False
items_len = len(items)
for item_index, item in enumerate(items):
task_vars['ansible_loop_var'] = loop_var
task_vars[loop_var] = item
if index_var:
task_vars['ansible_index_var'] = index_var
task_vars[index_var] = item_index
if extended:
task_vars['ansible_loop'] = {
'allitems': items,
'index': item_index + 1,
'index0': item_index,
'first': item_index == 0,
'last': item_index + 1 == items_len,
'length': items_len,
'revindex': items_len - item_index,
'revindex0': items_len - item_index - 1,
}
try:
task_vars['ansible_loop']['nextitem'] = items[item_index + 1]
except IndexError:
pass
if item_index - 1 >= 0:
task_vars['ansible_loop']['previtem'] = items[item_index - 1]
# Update template vars to reflect current loop iteration
templar.available_variables = task_vars
# pause between loop iterations
if loop_pause and ran_once:
try:
time.sleep(float(loop_pause))
except ValueError as e:
raise AnsibleError('Invalid pause value: %s, produced error: %s' % (loop_pause, to_native(e)))
else:
ran_once = True
try:
tmp_task = self._task.copy(exclude_parent=True, exclude_tasks=True)
tmp_task._parent = self._task._parent
tmp_play_context = self._play_context.copy()
except AnsibleParserError as e:
results.append(dict(failed=True, msg=to_text(e)))
continue
# now we swap the internal task and play context with their copies,
# execute, and swap them back so we can do the next iteration cleanly
(self._task, tmp_task) = (tmp_task, self._task)
(self._play_context, tmp_play_context) = (tmp_play_context, self._play_context)
res = self._execute(variables=task_vars)
task_fields = self._task.dump_attrs()
(self._task, tmp_task) = (tmp_task, self._task)
(self._play_context, tmp_play_context) = (tmp_play_context, self._play_context)
# update 'general no_log' based on specific no_log
no_log = no_log or tmp_task.no_log
# now update the result with the item info, and append the result
# to the list of results
res[loop_var] = item
res['ansible_loop_var'] = loop_var
if index_var:
res[index_var] = item_index
res['ansible_index_var'] = index_var
if extended:
res['ansible_loop'] = task_vars['ansible_loop']
res['_ansible_item_result'] = True
res['_ansible_ignore_errors'] = task_fields.get('ignore_errors')
# gets templated here unlike rest of loop_control fields, depends on loop_var above
try:
res['_ansible_item_label'] = templar.template(label, cache=False)
except AnsibleUndefinedVariable as e:
res.update({
'failed': True,
'msg': 'Failed to template loop_control.label: %s' % to_text(e)
})
self._final_q.put(
TaskResult(
self._host.name,
self._task._uuid,
res,
task_fields=task_fields,
),
block=False,
)
results.append(res)
del task_vars[loop_var]
# clear 'connection related' plugin variables for next iteration
if self._connection:
clear_plugins = {
'connection': self._connection._load_name,
'shell': self._connection._shell._load_name
}
if self._connection.become:
clear_plugins['become'] = self._connection.become._load_name
for plugin_type, plugin_name in iteritems(clear_plugins):
for var in C.config.get_plugin_vars(plugin_type, plugin_name):
if var in task_vars and var not in self._job_vars:
del task_vars[var]
self._task.no_log = no_log
return results
def _squash_items(self, items, loop_var, variables):
'''
Squash items down to a comma-separated list for certain modules which support it
(typically package management modules).
'''
name = None
try:
# _task.action could contain templatable strings (via action: and
# local_action:) Template it before comparing. If we don't end up
# optimizing it here, the templatable string might use template vars
# that aren't available until later (it could even use vars from the
# with_items loop) so don't make the templated string permanent yet.
templar = Templar(loader=self._loader, shared_loader_obj=self._shared_loader_obj, variables=variables)
task_action = self._task.action
if templar.is_template(task_action):
task_action = templar.template(task_action, fail_on_undefined=False)
if len(items) > 0 and task_action in self.SQUASH_ACTIONS:
if all(isinstance(o, string_types) for o in items):
final_items = []
found = None
for allowed in ['name', 'pkg', 'package']:
name = self._task.args.pop(allowed, None)
if name is not None:
found = allowed
break
# This gets the information to check whether the name field
# contains a template that we can squash for
template_no_item = template_with_item = None
if name:
if templar.is_template(name):
variables[loop_var] = '\0$'
template_no_item = templar.template(name, variables, cache=False)
variables[loop_var] = '\0@'
template_with_item = templar.template(name, variables, cache=False)
del variables[loop_var]
# Check if the user is doing some operation that doesn't take
# name/pkg or the name/pkg field doesn't have any variables
# and thus the items can't be squashed
if template_no_item != template_with_item:
if self._task.loop_with and self._task.loop_with not in ('items', 'list'):
value_text = "\"{{ query('%s', %r) }}\"" % (self._task.loop_with, self._task.loop)
else:
value_text = '%r' % self._task.loop
# Without knowing the data structure well, it's easiest to strip python2 unicode
# literals after stringifying
value_text = re.sub(r"\bu'", "'", value_text)
display.deprecated(
'Invoking "%s" only once while using a loop via squash_actions is deprecated. '
'Instead of using a loop to supply multiple items and specifying `%s: "%s"`, '
'please use `%s: %s` and remove the loop' % (self._task.action, found, name, found, value_text),
version='2.11'
)
for item in items:
variables[loop_var] = item
if self._task.evaluate_conditional(templar, variables):
new_item = templar.template(name, cache=False)
final_items.append(new_item)
self._task.args['name'] = final_items
# Wrap this in a list so that the calling function loop
# executes exactly once
return [final_items]
else:
# Restore the name parameter
self._task.args['name'] = name
# elif:
# Right now we only optimize single entries. In the future we
# could optimize more types:
# * lists can be squashed together
# * dicts could squash entries that match in all cases except the
# name or pkg field.
except Exception:
# Squashing is an optimization. If it fails for any reason,
# simply use the unoptimized list of items.
# Restore the name parameter
if name is not None:
self._task.args['name'] = name
return items
def _execute(self, variables=None):
'''
The primary workhorse of the executor system, this runs the task
on the specified host (which may be the delegated_to host) and handles
the retry/until and block rescue/always execution
'''
if variables is None:
variables = self._job_vars
templar = Templar(loader=self._loader, shared_loader_obj=self._shared_loader_obj, variables=variables)
context_validation_error = None
try:
# apply the given task's information to the connection info,
# which may override some fields already set by the play or
# the options specified on the command line
self._play_context = self._play_context.set_task_and_variable_override(task=self._task, variables=variables, templar=templar)
# fields set from the play/task may be based on variables, so we have to
# do the same kind of post validation step on it here before we use it.
self._play_context.post_validate(templar=templar)
# now that the play context is finalized, if the remote_addr is not set
# default to using the host's address field as the remote address
if not self._play_context.remote_addr:
self._play_context.remote_addr = self._host.address
# We also add "magic" variables back into the variables dict to make sure
# a certain subset of variables exist.
self._play_context.update_vars(variables)
# FIXME: update connection/shell plugin options
except AnsibleError as e:
# save the error, which we'll raise later if we don't end up
# skipping this task during the conditional evaluation step
context_validation_error = e
# Evaluate the conditional (if any) for this task, which we do before running
# the final task post-validation. We do this before the post validation due to
# the fact that the conditional may specify that the task be skipped due to a
# variable not being present which would otherwise cause validation to fail
try:
if not self._task.evaluate_conditional(templar, variables):
display.debug("when evaluation is False, skipping this task")
return dict(changed=False, skipped=True, skip_reason='Conditional result was False', _ansible_no_log=self._play_context.no_log)
except AnsibleError:
# loop error takes precedence
if self._loop_eval_error is not None:
raise self._loop_eval_error # pylint: disable=raising-bad-type
raise
# Not skipping, if we had loop error raised earlier we need to raise it now to halt the execution of this task
if self._loop_eval_error is not None:
raise self._loop_eval_error # pylint: disable=raising-bad-type
# if we ran into an error while setting up the PlayContext, raise it now
if context_validation_error is not None:
raise context_validation_error # pylint: disable=raising-bad-type
# if this task is a TaskInclude, we just return now with a success code so the
# main thread can expand the task list for the given host
if self._task.action in ('include', 'include_tasks'):
include_args = self._task.args.copy()
include_file = include_args.pop('_raw_params', None)
if not include_file:
return dict(failed=True, msg="No include file was specified to the include")
include_file = templar.template(include_file)
return dict(include=include_file, include_args=include_args)
# if this task is a IncludeRole, we just return now with a success code so the main thread can expand the task list for the given host
elif self._task.action == 'include_role':
include_args = self._task.args.copy()
return dict(include_args=include_args)
# Now we do final validation on the task, which sets all fields to their final values.
self._task.post_validate(templar=templar)
if '_variable_params' in self._task.args:
variable_params = self._task.args.pop('_variable_params')
if isinstance(variable_params, dict):
if C.INJECT_FACTS_AS_VARS:
display.warning("Using a variable for a task's 'args' is unsafe in some situations "
"(see https://docs.ansible.com/ansible/devel/reference_appendices/faq.html#argsplat-unsafe)")
variable_params.update(self._task.args)
self._task.args = variable_params
# get the connection and the handler for this execution
if (not self._connection or
not getattr(self._connection, 'connected', False) or
self._play_context.remote_addr != self._connection._play_context.remote_addr):
self._connection = self._get_connection(variables=variables, templar=templar)
else:
# if connection is reused, its _play_context is no longer valid and needs
# to be replaced with the one templated above, in case other data changed
self._connection._play_context = self._play_context
self._set_connection_options(variables, templar)
# get handler
self._handler = self._get_action_handler(connection=self._connection, templar=templar)
# Apply default params for action/module, if present
self._task.args = get_action_args_with_defaults(self._task.action, self._task.args, self._task.module_defaults, templar)
# And filter out any fields which were set to default(omit), and got the omit token value
omit_token = variables.get('omit')
if omit_token is not None:
self._task.args = remove_omit(self._task.args, omit_token)
# Read some values from the task, so that we can modify them if need be
if self._task.until:
retries = self._task.retries
if retries is None:
retries = 3
elif retries <= 0:
retries = 1
else:
retries += 1
else:
retries = 1
delay = self._task.delay
if delay < 0:
delay = 1
# make a copy of the job vars here, in case we need to update them
# with the registered variable value later on when testing conditions
vars_copy = variables.copy()
display.debug("starting attempt loop")
result = None
for attempt in xrange(1, retries + 1):
display.debug("running the handler")
try:
result = self._handler.run(task_vars=variables)
except AnsibleActionSkip as e:
return dict(skipped=True, msg=to_text(e))
except AnsibleActionFail as e:
return dict(failed=True, msg=to_text(e))
except AnsibleConnectionFailure as e:
return dict(unreachable=True, msg=to_text(e))
finally:
self._handler.cleanup()
display.debug("handler run complete")
# preserve no log
result["_ansible_no_log"] = self._play_context.no_log
# update the local copy of vars with the registered value, if specified,
# or any facts which may have been generated by the module execution
if self._task.register:
if not isidentifier(self._task.register):
raise AnsibleError("Invalid variable name in 'register' specified: '%s'" % self._task.register)
vars_copy[self._task.register] = wrap_var(result)
if self._task.async_val > 0:
if self._task.poll > 0 and not result.get('skipped') and not result.get('failed'):
result = self._poll_async_result(result=result, templar=templar, task_vars=vars_copy)
# FIXME callback 'v2_runner_on_async_poll' here
# ensure no log is preserved
result["_ansible_no_log"] = self._play_context.no_log
# helper methods for use below in evaluating changed/failed_when
def _evaluate_changed_when_result(result):
if self._task.changed_when is not None and self._task.changed_when:
cond = Conditional(loader=self._loader)
cond.when = self._task.changed_when
result['changed'] = cond.evaluate_conditional(templar, vars_copy)
def _evaluate_failed_when_result(result):
if self._task.failed_when:
cond = Conditional(loader=self._loader)
cond.when = self._task.failed_when
failed_when_result = cond.evaluate_conditional(templar, vars_copy)
result['failed_when_result'] = result['failed'] = failed_when_result
else:
failed_when_result = False
return failed_when_result
if 'ansible_facts' in result:
if self._task.action in ('set_fact', 'include_vars'):
vars_copy.update(result['ansible_facts'])
else:
# TODO: cleaning of facts should eventually become part of taskresults instead of vars
af = wrap_var(result['ansible_facts'])
vars_copy.update(namespace_facts(af))
if C.INJECT_FACTS_AS_VARS:
vars_copy.update(clean_facts(af))
# set the failed property if it was missing.
if 'failed' not in result:
# rc is here for backwards compatibility and modules that use it instead of 'failed'
if 'rc' in result and result['rc'] not in [0, "0"]:
result['failed'] = True
else:
result['failed'] = False
# Make attempts and retries available early to allow their use in changed/failed_when
if self._task.until:
result['attempts'] = attempt
# set the changed property if it was missing.
if 'changed' not in result:
result['changed'] = False
# re-update the local copy of vars with the registered value, if specified,
# or any facts which may have been generated by the module execution
# This gives changed/failed_when access to additional recently modified
# attributes of result
if self._task.register:
vars_copy[self._task.register] = wrap_var(result)
# if we didn't skip this task, use the helpers to evaluate the changed/
# failed_when properties
if 'skipped' not in result:
_evaluate_changed_when_result(result)
_evaluate_failed_when_result(result)
if retries > 1:
cond = Conditional(loader=self._loader)
cond.when = self._task.until
if cond.evaluate_conditional(templar, vars_copy):
break
else:
# no conditional check, or it failed, so sleep for the specified time
if attempt < retries:
result['_ansible_retry'] = True
result['retries'] = retries
display.debug('Retrying task, attempt %d of %d' % (attempt, retries))
self._final_q.put(TaskResult(self._host.name, self._task._uuid, result, task_fields=self._task.dump_attrs()), block=False)
time.sleep(delay)
self._handler = self._get_action_handler(connection=self._connection, templar=templar)
else:
if retries > 1:
# we ran out of attempts, so mark the result as failed
result['attempts'] = retries - 1
result['failed'] = True
# do the final update of the local variables here, for both registered
# values and any facts which may have been created
if self._task.register:
variables[self._task.register] = wrap_var(result)
if 'ansible_facts' in result:
if self._task.action in ('set_fact', 'include_vars'):
variables.update(result['ansible_facts'])
else:
# TODO: cleaning of facts should eventually become part of taskresults instead of vars
af = wrap_var(result['ansible_facts'])
variables.update(namespace_facts(af))
if C.INJECT_FACTS_AS_VARS:
variables.update(clean_facts(af))
# save the notification target in the result, if it was specified, as
# this task may be running in a loop in which case the notification
# may be item-specific, ie. "notify: service {{item}}"
if self._task.notify is not None:
result['_ansible_notify'] = self._task.notify
# add the delegated vars to the result, so we can reference them
# on the results side without having to do any further templating
# FIXME: we only want a limited set of variables here, so this is currently
# hardcoded but should be possibly fixed if we want more or if
# there is another source of truth we can use
delegated_vars = variables.get('ansible_delegated_vars', dict()).get(self._task.delegate_to, dict()).copy()
if len(delegated_vars) > 0:
result["_ansible_delegated_vars"] = {'ansible_delegated_host': self._task.delegate_to}
for k in ('ansible_host', ):
result["_ansible_delegated_vars"][k] = delegated_vars.get(k)
# and return
display.debug("attempt loop complete, returning result")
return result
def _poll_async_result(self, result, templar, task_vars=None):
'''
Polls for the specified JID to be complete
'''
if task_vars is None:
task_vars = self._job_vars
async_jid = result.get('ansible_job_id')
if async_jid is None:
return dict(failed=True, msg="No job id was returned by the async task")
# Create a new pseudo-task to run the async_status module, and run
# that (with a sleep for "poll" seconds between each retry) until the
# async time limit is exceeded.
async_task = Task().load(dict(action='async_status jid=%s' % async_jid, environment=self._task.environment))
# FIXME: this is no longer the case, normal takes care of all, see if this can just be generalized
# Because this is an async task, the action handler is async. However,
# we need the 'normal' action handler for the status check, so get it
# now via the action_loader
async_handler = self._shared_loader_obj.action_loader.get(
'async_status',
task=async_task,
connection=self._connection,
play_context=self._play_context,
loader=self._loader,
templar=templar,
shared_loader_obj=self._shared_loader_obj,
)
time_left = self._task.async_val
while time_left > 0:
time.sleep(self._task.poll)
try:
async_result = async_handler.run(task_vars=task_vars)
# We do not bail out of the loop in cases where the failure
# is associated with a parsing error. The async_runner can
# have issues which result in a half-written/unparseable result
# file on disk, which manifests to the user as a timeout happening
# before it's time to timeout.
if (int(async_result.get('finished', 0)) == 1 or
('failed' in async_result and async_result.get('_ansible_parsed', False)) or
'skipped' in async_result):
break
except Exception as e:
# Connections can raise exceptions during polling (eg, network bounce, reboot); these should be non-fatal.
# On an exception, call the connection's reset method if it has one
# (eg, drop/recreate WinRM connection; some reused connections are in a broken state)
display.vvvv("Exception during async poll, retrying... (%s)" % to_text(e))
display.debug("Async poll exception was:\n%s" % to_text(traceback.format_exc()))
try:
async_handler._connection.reset()
except AttributeError:
pass
# Little hack to raise the exception if we've exhausted the timeout period
time_left -= self._task.poll
if time_left <= 0:
raise
else:
time_left -= self._task.poll
if int(async_result.get('finished', 0)) != 1:
if async_result.get('_ansible_parsed'):
return dict(failed=True, msg="async task did not complete within the requested time - %ss" % self._task.async_val)
else:
return dict(failed=True, msg="async task produced unparseable results", async_result=async_result)
else:
async_handler.cleanup(force=True)
return async_result
def _get_become(self, name):
become = become_loader.get(name)
if not become:
raise AnsibleError("Invalid become method specified, could not find matching plugin: '%s'. "
"Use `ansible-doc -t become -l` to list available plugins." % name)
return become
def _get_connection(self, variables, templar):
'''
Reads the connection property for the host, and returns the
correct connection object from the list of connection plugins
'''
if self._task.delegate_to is not None:
# since we're delegating, we don't want to use interpreter values
# which would have been set for the original target host
for i in list(variables.keys()):
if isinstance(i, string_types) and i.startswith('ansible_') and i.endswith('_interpreter'):
del variables[i]
# now replace the interpreter values with those that may have come
# from the delegated-to host
delegated_vars = variables.get('ansible_delegated_vars', dict()).get(self._task.delegate_to, dict())
if isinstance(delegated_vars, dict):
for i in delegated_vars:
if isinstance(i, string_types) and i.startswith("ansible_") and i.endswith("_interpreter"):
variables[i] = delegated_vars[i]
# load connection
conn_type = self._play_context.connection
connection = self._shared_loader_obj.connection_loader.get(
conn_type,
self._play_context,
self._new_stdin,
task_uuid=self._task._uuid,
ansible_playbook_pid=to_text(os.getppid())
)
if not connection:
raise AnsibleError("the connection plugin '%s' was not found" % conn_type)
# load become plugin if needed
become_plugin = None
if self._play_context.become:
become_plugin = self._get_become(self._play_context.become_method)
if getattr(become_plugin, 'require_tty', False) and not getattr(connection, 'has_tty', False):
raise AnsibleError(
"The '%s' connection does not provide a tty which is required for the selected "
"become plugin: %s." % (conn_type, become_plugin.name)
)
try:
connection.set_become_plugin(become_plugin)
except AttributeError:
# Connection plugin does not support set_become_plugin
pass
# Backwards compat for connection plugins that don't support become plugins
# Just do this unconditionally for now, we could move it inside of the
# AttributeError above later
self._play_context.set_become_plugin(become_plugin)
# FIXME: remove once all plugins pull all data from self._options
self._play_context.set_attributes_from_plugin(connection)
if any(((connection.supports_persistence and C.USE_PERSISTENT_CONNECTIONS), connection.force_persistence)):
self._play_context.timeout = connection.get_option('persistent_command_timeout')
display.vvvv('attempting to start connection', host=self._play_context.remote_addr)
display.vvvv('using connection plugin %s' % connection.transport, host=self._play_context.remote_addr)
options = self._get_persistent_connection_options(connection, variables, templar)
socket_path = start_connection(self._play_context, options, self._task._uuid)
display.vvvv('local domain socket path is %s' % socket_path, host=self._play_context.remote_addr)
setattr(connection, '_socket_path', socket_path)
return connection
def _get_persistent_connection_options(self, connection, variables, templar):
final_vars = combine_vars(variables, variables.get('ansible_delegated_vars', dict()).get(self._task.delegate_to, dict()))
option_vars = C.config.get_plugin_vars('connection', connection._load_name)
plugin = connection._sub_plugin
if plugin.get('type'):
option_vars.extend(C.config.get_plugin_vars(plugin['type'], plugin['name']))
options = {}
for k in option_vars:
if k in final_vars:
options[k] = templar.template(final_vars[k])
return options
def _set_plugin_options(self, plugin_type, variables, templar, task_keys):
try:
plugin = getattr(self._connection, '_%s' % plugin_type)
except AttributeError:
# Some plugins are assigned to private attrs, ``become`` is not
plugin = getattr(self._connection, plugin_type)
option_vars = C.config.get_plugin_vars(plugin_type, plugin._load_name)
options = {}
for k in option_vars:
if k in variables:
options[k] = templar.template(variables[k])
# TODO move to task method?
plugin.set_options(task_keys=task_keys, var_options=options)
def _set_connection_options(self, variables, templar):
# Keep the pre-delegate values for these keys
PRESERVE_ORIG = ('inventory_hostname',)
# create copy with delegation built in
final_vars = combine_vars(
variables,
variables.get('ansible_delegated_vars', {}).get(self._task.delegate_to, {})
)
# grab list of usable vars for this plugin
option_vars = C.config.get_plugin_vars('connection', self._connection._load_name)
# create dict of 'templated vars'
options = {'_extras': {}}
for k in option_vars:
if k in PRESERVE_ORIG:
options[k] = templar.template(variables[k])
elif k in final_vars:
options[k] = templar.template(final_vars[k])
# add extras if plugin supports them
if getattr(self._connection, 'allow_extras', False):
for k in final_vars:
if k.startswith('ansible_%s_' % self._connection._load_name) and k not in options:
options['_extras'][k] = templar.template(final_vars[k])
task_keys = self._task.dump_attrs()
# set options with 'templated vars' specific to this plugin and dependent ones
self._connection.set_options(task_keys=task_keys, var_options=options)
self._set_plugin_options('shell', final_vars, templar, task_keys)
if self._connection.become is not None:
# FIXME: find alternate route to provide passwords,
# keep out of play objects to avoid accidental disclosure
task_keys['become_pass'] = self._play_context.become_pass
self._set_plugin_options('become', final_vars, templar, task_keys)
# FOR BACKWARDS COMPAT:
for option in ('become_user', 'become_flags', 'become_exe'):
try:
setattr(self._play_context, option, self._connection.become.get_option(option))
except KeyError:
pass # some plugins don't support all base flags
self._play_context.prompt = self._connection.become.prompt
def _get_action_handler(self, connection, templar):
'''
Returns the correct action plugin to handle the requested task action
'''
module_prefix = self._task.action.split('.')[-1].split('_')[0]
collections = self._task.collections
# let action plugin override module, fallback to 'normal' action plugin otherwise
if self._shared_loader_obj.action_loader.has_plugin(self._task.action, collection_list=collections):
handler_name = self._task.action
# FIXME: is this code path even live anymore? check w/ networking folks; it trips sometimes when it shouldn't
elif all((module_prefix in C.NETWORK_GROUP_MODULES, self._shared_loader_obj.action_loader.has_plugin(module_prefix, collection_list=collections))):
handler_name = module_prefix
else:
# FUTURE: once we're comfortable with collections impl, preface this action with ansible.builtin so it can't be hijacked
handler_name = 'normal'
collections = None # until then, we don't want the task's collection list to be consulted; use the builtin
handler = self._shared_loader_obj.action_loader.get(
handler_name,
task=self._task,
connection=connection,
play_context=self._play_context,
loader=self._loader,
templar=templar,
shared_loader_obj=self._shared_loader_obj,
collection_list=collections
)
if not handler:
raise AnsibleError("the handler '%s' was not found" % handler_name)
return handler
def start_connection(play_context, variables, task_uuid):
'''
Starts the persistent connection
'''
candidate_paths = [C.ANSIBLE_CONNECTION_PATH or os.path.dirname(sys.argv[0])]
candidate_paths.extend(os.environ['PATH'].split(os.pathsep))
for dirname in candidate_paths:
ansible_connection = os.path.join(dirname, 'ansible-connection')
if os.path.isfile(ansible_connection):
display.vvvv("Found ansible-connection at path {0}".format(ansible_connection))
break
else:
raise AnsibleError("Unable to find location of 'ansible-connection'. "
"Please set or check the value of ANSIBLE_CONNECTION_PATH")
env = os.environ.copy()
env.update({
# HACK; most of these paths may change during the controller's lifetime
# (eg, due to late dynamic role includes, multi-playbook execution), without a way
# to invalidate/update, ansible-connection won't always see the same plugins the controller
# can.
'ANSIBLE_BECOME_PLUGINS': become_loader.print_paths(),
'ANSIBLE_CLICONF_PLUGINS': cliconf_loader.print_paths(),
'ANSIBLE_COLLECTIONS_PATHS': os.pathsep.join(AnsibleCollectionLoader().n_collection_paths),
'ANSIBLE_CONNECTION_PLUGINS': connection_loader.print_paths(),
'ANSIBLE_HTTPAPI_PLUGINS': httpapi_loader.print_paths(),
'ANSIBLE_NETCONF_PLUGINS': netconf_loader.print_paths(),
'ANSIBLE_TERMINAL_PLUGINS': terminal_loader.print_paths(),
})
python = sys.executable
master, slave = pty.openpty()
p = subprocess.Popen(
[python, ansible_connection, to_text(os.getppid()), to_text(task_uuid)],
stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env
)
os.close(slave)
# We need to set the pty into noncanonical mode. This ensures that we
# can receive lines longer than 4095 characters (plus newline) without
# truncating.
old = termios.tcgetattr(master)
new = termios.tcgetattr(master)
new[3] = new[3] & ~termios.ICANON
try:
termios.tcsetattr(master, termios.TCSANOW, new)
write_to_file_descriptor(master, variables)
write_to_file_descriptor(master, play_context.serialize())
(stdout, stderr) = p.communicate()
finally:
termios.tcsetattr(master, termios.TCSANOW, old)
os.close(master)
if p.returncode == 0:
result = json.loads(to_text(stdout, errors='surrogate_then_replace'))
else:
try:
result = json.loads(to_text(stderr, errors='surrogate_then_replace'))
except getattr(json.decoder, 'JSONDecodeError', ValueError):
# JSONDecodeError only available on Python 3.5+
result = {'error': to_text(stderr, errors='surrogate_then_replace')}
if 'messages' in result:
for level, message in result['messages']:
if level == 'log':
display.display(message, log_only=True)
elif level in ('debug', 'v', 'vv', 'vvv', 'vvvv', 'vvvvv', 'vvvvvv'):
getattr(display, level)(message, host=play_context.remote_addr)
else:
if hasattr(display, level):
getattr(display, level)(message)
else:
display.vvvv(message, host=play_context.remote_addr)
if 'error' in result:
if play_context.verbosity > 2:
if result.get('exception'):
msg = "The full traceback is:\n" + result['exception']
display.display(msg, color=C.COLOR_ERROR)
raise AnsibleError(result['error'])
return result['socket_path']
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,722 |
Difference in how we handle templating vault variable keys in v2.8 and v2.9
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
I can see a difference in how templating is handled between 2.9 and 2.8. When the same playbook is run on 2.9.2, the value of "testvar" is not rendered in the following result, whereas on 2.8.4 it is:
```
ok: [localhost] => {
"big_test_list1": [
{
"key": "test2_{{ testvar }}_password",
"value": "test"
}
]
}
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
**Ansible** : 2.8.x
**Ansible** : 2.9.x
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
RHEL 8
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
Playbook.yml on 2.9.2
- name: TEST
gather_facts: False
hosts: localhost
vars:
testvar: "vars"
# vault password is: test
testdict2:
"test2_{{ testvar }}_password": !vault |
$ANSIBLE_VAULT;1.2;AES256;test
38636163353333363936333133633334646465393136613037626530663664326665613835326466
6130633637383730613462353037613139323232373762630a616237356230333163666632303262
32313232616539333664633364636562626539393439333539316236623161386163386533613063
6466396661323462620a396565646637616264316338636333343738653831383137643463653830
6433
tasks:
- debug:
var: testdict2
- set_fact:
big_test_list1: "{{ big_test_list1 | default([]) + item }}"
loop:
- "{{ testdict2 | dict2items }}"
- debug:
var: big_test_list1
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Playbook should show results as follows
```
**On 2.8.4**
Vault password:
PLAY [TEST] *******************************************************************************************************************************************************************************************************
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"testdict2": {
"test2_{{ testvar }}_password": "test"
}
}
TASK [set_fact] ***************************************************************************************************************************************************************************************************
ok: [localhost] => (item=[{'key': 'test2_vars_password', 'value': 'test'}])
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"big_test_list1": [
{
"key": "test2_vars_password",
"value": "test"
}
]
}
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
**On 2.9.2**
Vault password:
PLAY [TEST] *******************************************************************************************************************************************************************************************************
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"testdict2": {
"test2_{{ testvar }}_password": "test"
}
}
TASK [set_fact] ***************************************************************************************************************************************************************************************************
ok: [localhost] => (item=[{'key': u'test2_{{ testvar }}_password', 'value': u'test'}])
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"big_test_list1": [
{
"key": "test2_{{ testvar }}_password",
"value": "test"
}
]
}
```
|
https://github.com/ansible/ansible/issues/65722
|
https://github.com/ansible/ansible/pull/65918
|
f9e315671a61d3fae93ef816c9d5793c9fc2267c
|
f8654de851a841d33d2d2e76942c68a758535e56
| 2019-12-11T10:36:53Z |
python
| 2020-01-07T14:41:37Z |
lib/ansible/template/__init__.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import ast
import datetime
import os
import pkgutil
import pwd
import re
import time
from contextlib import contextmanager
from numbers import Number
try:
from hashlib import sha1
except ImportError:
from sha import sha as sha1
from jinja2.exceptions import TemplateSyntaxError, UndefinedError
from jinja2.loaders import FileSystemLoader
from jinja2.runtime import Context, StrictUndefined
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleFilterError, AnsibleUndefinedVariable, AnsibleAssertionError
from ansible.module_utils.six import iteritems, string_types, text_type
from ansible.module_utils._text import to_native, to_text, to_bytes
from ansible.module_utils.common._collections_compat import Sequence, Mapping, MutableMapping
from ansible.plugins.loader import filter_loader, lookup_loader, test_loader
from ansible.template.safe_eval import safe_eval
from ansible.template.template import AnsibleJ2Template
from ansible.template.vars import AnsibleJ2Vars
from ansible.utils.collection_loader import AnsibleCollectionRef
from ansible.utils.display import Display
from ansible.utils.unsafe_proxy import wrap_var
# HACK: keep Python 2.6 controller tests happy in CI until they're properly split
try:
from importlib import import_module
except ImportError:
import_module = __import__
display = Display()
__all__ = ['Templar', 'generate_ansible_template_vars']
# A regex for checking to see if a variable we're trying to
# expand is just a single variable name.
# Primitive Types which we don't want Jinja to convert to strings.
NON_TEMPLATED_TYPES = (bool, Number)
JINJA2_OVERRIDE = '#jinja2:'
USE_JINJA2_NATIVE = False
if C.DEFAULT_JINJA2_NATIVE:
try:
from jinja2.nativetypes import NativeEnvironment as Environment
from ansible.template.native_helpers import ansible_native_concat as j2_concat
USE_JINJA2_NATIVE = True
except ImportError:
from jinja2 import Environment
from jinja2.utils import concat as j2_concat
from jinja2 import __version__ as j2_version
display.warning(
'jinja2_native requires Jinja 2.10 and above. '
'Version detected: %s. Falling back to default.' % j2_version
)
else:
from jinja2 import Environment
from jinja2.utils import concat as j2_concat
JINJA2_BEGIN_TOKENS = frozenset(('variable_begin', 'block_begin', 'comment_begin', 'raw_begin'))
JINJA2_END_TOKENS = frozenset(('variable_end', 'block_end', 'comment_end', 'raw_end'))
def generate_ansible_template_vars(path, dest_path=None):
b_path = to_bytes(path)
try:
template_uid = pwd.getpwuid(os.stat(b_path).st_uid).pw_name
except (KeyError, TypeError):
template_uid = os.stat(b_path).st_uid
temp_vars = {
'template_host': to_text(os.uname()[1]),
'template_path': path,
'template_mtime': datetime.datetime.fromtimestamp(os.path.getmtime(b_path)),
'template_uid': to_text(template_uid),
'template_fullpath': os.path.abspath(path),
'template_run_date': datetime.datetime.now(),
'template_destpath': to_native(dest_path) if dest_path else None,
}
managed_default = C.DEFAULT_MANAGED_STR
managed_str = managed_default.format(
host=temp_vars['template_host'],
uid=temp_vars['template_uid'],
file=temp_vars['template_path'],
)
temp_vars['ansible_managed'] = to_text(time.strftime(to_native(managed_str), time.localtime(os.path.getmtime(b_path))))
return temp_vars
def _escape_backslashes(data, jinja_env):
"""Double backslashes within jinja2 expressions
A user may enter something like this in a playbook::
debug:
msg: "Test Case 1\\3; {{ test1_name | regex_replace('^(.*)_name$', '\\1')}}"
The string inside of the {{ gets interpreted multiple times. First by yaml.
Then by python. And finally by jinja2 as part of its variable. Because
it is processed by both python and jinja2, the backslash escaped
characters get unescaped twice. This means that we'd normally have to use
four backslashes to escape that. This is painful for playbook authors as
they have to remember different rules for inside vs outside of a jinja2
expression (The backslashes outside of the "{{ }}" only get processed by
yaml and python. So they only need to be escaped once). The following
code fixes this by automatically performing the extra quoting of
backslashes inside of a jinja2 expression.
"""
if '\\' in data and '{{' in data:
new_data = []
d2 = jinja_env.preprocess(data)
in_var = False
for token in jinja_env.lex(d2):
if token[1] == 'variable_begin':
in_var = True
new_data.append(token[2])
elif token[1] == 'variable_end':
in_var = False
new_data.append(token[2])
elif in_var and token[1] == 'string':
# Double backslashes only if we're inside of a jinja2 variable
new_data.append(token[2].replace('\\', '\\\\'))
else:
new_data.append(token[2])
data = ''.join(new_data)
return data
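# Illustration (not part of the original source), shown as raw text:
#   input:  {{ 'a\b' }}    ->  output: {{ 'a\\b' }}
#   input:  a\b {{ x }}    ->  output: a\b {{ x }}   (the backslash outside the
#           jinja2 expression is left untouched, matching the docstring above)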
def is_template(data, jinja_env):
"""This function attempts to quickly detect whether a value is a jinja2
template. To do so, we look for the first 2 matching jinja2 tokens for
start and end delimiters.
"""
found = None
start = True
comment = False
d2 = jinja_env.preprocess(data)
# This wraps a lot of code, but this is due to lex returning a generator
# so we may get an exception at any part of the loop
try:
for token in jinja_env.lex(d2):
if token[1] in JINJA2_BEGIN_TOKENS:
if start and token[1] == 'comment_begin':
# Comments can wrap other token types
comment = True
start = False
# Example: variable_end -> variable
found = token[1].split('_')[0]
elif token[1] in JINJA2_END_TOKENS:
if token[1].split('_')[0] == found:
return True
elif comment:
continue
return False
except TemplateSyntaxError:
return False
return False
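# Illustration (not part of the original source):
#   is_template("{{ foo }}", env)     -> True
#   is_template("{# comment #}", env) -> True   (comments count as templates)
#   is_template("{{ unclosed", env)   -> False  (lexing raises TemplateSyntaxError)
#   is_template("plain text", env)    -> False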
def _count_newlines_from_end(in_str):
'''
Counts the number of newlines at the end of a string. This is used during
the jinja2 templating to ensure the count matches the input, since some newlines
may be thrown away during the templating.
'''
try:
i = len(in_str)
j = i - 1
while in_str[j] == '\n':
j -= 1
return i - 1 - j
except IndexError:
# Uncommon cases: zero length string and string containing only newlines
return i
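# Illustration (not part of the original source):
#   _count_newlines_from_end('abc\n\n') -> 2
#   _count_newlines_from_end('abc')     -> 0
#   _count_newlines_from_end('\n\n')    -> 2   (all-newlines hits the IndexError path)
#   _count_newlines_from_end('')        -> 0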
def recursive_check_defined(item):
from jinja2.runtime import Undefined
if isinstance(item, MutableMapping):
for key in item:
recursive_check_defined(item[key])
elif isinstance(item, list):
for i in item:
recursive_check_defined(i)
else:
if isinstance(item, Undefined):
raise AnsibleFilterError("{0} is undefined".format(item))
class AnsibleUndefined(StrictUndefined):
'''
A custom Undefined class, which returns further Undefined objects on access,
rather than throwing an exception.
'''
def __getattr__(self, name):
if name == '__UNSAFE__':
# AnsibleUndefined should never be assumed to be unsafe
# This prevents ``hasattr(val, '__UNSAFE__')`` from evaluating to ``True``
raise AttributeError(name)
# Return original Undefined object to preserve the first failure context
return self
def __getitem__(self, key):
# Return original Undefined object to preserve the first failure context
return self
def __repr__(self):
return 'AnsibleUndefined'
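# Illustration (not part of the original source): chained access keeps
# returning the same undefined object rather than raising immediately,
# so the first failure context is preserved until the value is used:
#   u = AnsibleUndefined(name='missing')
#   u.foo['bar'] is u   -> True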
class AnsibleContext(Context):
'''
A custom context, which intercepts resolve() calls and sets a flag
internally if any variable lookup returns an AnsibleUnsafe value. This
flag is checked post-templating, and (when set) will result in the
final templated result being wrapped in AnsibleUnsafe.
'''
def __init__(self, *args, **kwargs):
super(AnsibleContext, self).__init__(*args, **kwargs)
self.unsafe = False
def _is_unsafe(self, val):
'''
Our helper function, which will also recursively check dict and
list entries due to the fact that they may be repr'd and contain
a key or value which contains jinja2 syntax and would otherwise
lose the AnsibleUnsafe value.
'''
if isinstance(val, dict):
for key in val.keys():
if self._is_unsafe(val[key]):
return True
elif isinstance(val, list):
for item in val:
if self._is_unsafe(item):
return True
elif getattr(val, '__UNSAFE__', False) is True:
return True
return False
def _update_unsafe(self, val):
if val is not None and not self.unsafe and self._is_unsafe(val):
self.unsafe = True
def resolve(self, key):
'''
The intercepted resolve(), which uses the helper above to set the
internal flag whenever an unsafe variable value is returned.
'''
val = super(AnsibleContext, self).resolve(key)
self._update_unsafe(val)
return val
def resolve_or_missing(self, key):
val = super(AnsibleContext, self).resolve_or_missing(key)
self._update_unsafe(val)
return val
class JinjaPluginIntercept(MutableMapping):
def __init__(self, delegatee, pluginloader, *args, **kwargs):
super(JinjaPluginIntercept, self).__init__(*args, **kwargs)
self._delegatee = delegatee
self._pluginloader = pluginloader
if self._pluginloader.class_name == 'FilterModule':
self._method_map_name = 'filters'
self._dirname = 'filter'
elif self._pluginloader.class_name == 'TestModule':
self._method_map_name = 'tests'
self._dirname = 'test'
self._collection_jinja_func_cache = {}
# FUTURE: we can cache FQ filter/test calls for the entire duration of a run, since a given collection's impls
# aren't supposed to change during a run
def __getitem__(self, key):
if not isinstance(key, string_types):
raise ValueError('key must be a string')
key = to_native(key)
if '.' not in key: # might be a built-in value, delegate to base dict
return self._delegatee.__getitem__(key)
func = self._collection_jinja_func_cache.get(key)
if func:
return func
acr = AnsibleCollectionRef.try_parse_fqcr(key, self._dirname)
if not acr:
raise KeyError('invalid plugin name: {0}'.format(key))
# FIXME: error handling for bogus plugin name, bogus impl, bogus filter/test
pkg = import_module(acr.n_python_package_name)
parent_prefix = acr.collection
if acr.subdirs:
parent_prefix = '{0}.{1}'.format(parent_prefix, acr.subdirs)
for dummy, module_name, ispkg in pkgutil.iter_modules(pkg.__path__, prefix=parent_prefix + '.'):
if ispkg:
continue
plugin_impl = self._pluginloader.get(module_name)
method_map = getattr(plugin_impl, self._method_map_name)
for f in iteritems(method_map()):
fq_name = '.'.join((parent_prefix, f[0]))
# FIXME: detect/warn on intra-collection function name collisions
self._collection_jinja_func_cache[fq_name] = f[1]
function_impl = self._collection_jinja_func_cache[key]
return function_impl
def __setitem__(self, key, value):
return self._delegatee.__setitem__(key, value)
def __delitem__(self, key):
raise NotImplementedError()
def __iter__(self):
# not strictly accurate since we're not counting dynamically-loaded values
return iter(self._delegatee)
def __len__(self):
# not strictly accurate since we're not counting dynamically-loaded values
return len(self._delegatee)
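# Sketch (hypothetical collection names): once a filter plugin ships in a
# collection, a template can address it by its fully-qualified name and
# this intercept loads and caches it on first use:
#
#   {{ some_value | my_ns.my_coll.my_filter }}
#
# Keys without a '.' fall through to the wrapped built-in table instead.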
class AnsibleEnvironment(Environment):
'''
Our custom environment, which simply allows us to override the class-level
values for the Template and Context classes used by jinja2 internally.
'''
context_class = AnsibleContext
template_class = AnsibleJ2Template
def __init__(self, *args, **kwargs):
super(AnsibleEnvironment, self).__init__(*args, **kwargs)
self.filters = JinjaPluginIntercept(self.filters, filter_loader)
self.tests = JinjaPluginIntercept(self.tests, test_loader)
class Templar:
'''
The main class for templating, with the main entry-point of template().
'''
def __init__(self, loader, shared_loader_obj=None, variables=None):
variables = {} if variables is None else variables
self._loader = loader
self._filters = None
self._tests = None
self._available_variables = variables
self._cached_result = {}
if loader:
self._basedir = loader.get_basedir()
else:
self._basedir = './'
if shared_loader_obj:
self._filter_loader = getattr(shared_loader_obj, 'filter_loader')
self._test_loader = getattr(shared_loader_obj, 'test_loader')
self._lookup_loader = getattr(shared_loader_obj, 'lookup_loader')
else:
self._filter_loader = filter_loader
self._test_loader = test_loader
self._lookup_loader = lookup_loader
# flags to determine whether certain failures during templating
# should result in fatal errors being raised
self._fail_on_lookup_errors = True
self._fail_on_filter_errors = True
self._fail_on_undefined_errors = C.DEFAULT_UNDEFINED_VAR_BEHAVIOR
self.environment = AnsibleEnvironment(
trim_blocks=True,
undefined=AnsibleUndefined,
extensions=self._get_extensions(),
finalize=self._finalize,
loader=FileSystemLoader(self._basedir),
)
# the current rendering context under which the templar class is working
self.cur_context = None
self.SINGLE_VAR = re.compile(r"^%s\s*(\w*)\s*%s$" % (self.environment.variable_start_string, self.environment.variable_end_string))
self._clean_regex = re.compile(r'(?:%s|%s|%s|%s)' % (
self.environment.variable_start_string,
self.environment.block_start_string,
self.environment.block_end_string,
self.environment.variable_end_string
))
self._no_type_regex = re.compile(r'.*?\|\s*(?:%s)(?:\([^\|]*\))?\s*\)?\s*(?:%s)' %
('|'.join(C.STRING_TYPE_FILTERS), self.environment.variable_end_string))
def _get_filters(self):
'''
Returns filter plugins, after loading and caching them if need be
'''
if self._filters is not None:
return self._filters.copy()
self._filters = dict()
for fp in self._filter_loader.all():
self._filters.update(fp.filters())
return self._filters.copy()
def _get_tests(self):
'''
Returns tests plugins, after loading and caching them if need be
'''
if self._tests is not None:
return self._tests.copy()
self._tests = dict()
for fp in self._test_loader.all():
self._tests.update(fp.tests())
return self._tests.copy()
def _get_extensions(self):
'''
Return jinja2 extensions to load.
If some extensions are set via jinja_extensions in ansible.cfg, we try
to load them with the jinja environment.
'''
jinja_exts = []
if C.DEFAULT_JINJA2_EXTENSIONS:
# make sure the configuration directive doesn't contain spaces
# and split extensions in an array
jinja_exts = C.DEFAULT_JINJA2_EXTENSIONS.replace(" ", "").split(',')
return jinja_exts
@property
def available_variables(self):
return self._available_variables
@available_variables.setter
def available_variables(self, variables):
'''
Sets the list of template variables this Templar instance will use
to template things, so we don't have to pass them around between
internal methods. We also clear the template cache here, as the variables
are being changed.
'''
if not isinstance(variables, dict):
raise AnsibleAssertionError("the type of 'variables' should be a dict but was a %s" % (type(variables)))
self._available_variables = variables
self._cached_result = {}
def set_available_variables(self, variables):
display.deprecated(
'set_available_variables is being deprecated. Use "@available_variables.setter" instead.',
version='2.13'
)
self.available_variables = variables
@contextmanager
def set_temporary_context(self, **kwargs):
"""Context manager used to set temporary templating context, without having to worry about resetting
original values afterward
Use a keyword that maps to the attr you are setting. Applies to ``self.environment`` by default, to
set context on another object, it must be in ``mapping``.
"""
mapping = {
'available_variables': self,
'searchpath': self.environment.loader,
}
original = {}
for key, value in kwargs.items():
obj = mapping.get(key, self.environment)
try:
original[key] = getattr(obj, key)
if value is not None:
setattr(obj, key, value)
except AttributeError:
# Ignore invalid attrs, lstrip_blocks was added in jinja2==2.7
pass
yield
for key in original:
obj = mapping.get(key, self.environment)
setattr(obj, key, original[key])
def template(self, variable, convert_bare=False, preserve_trailing_newlines=True, escape_backslashes=True, fail_on_undefined=None, overrides=None,
convert_data=True, static_vars=None, cache=True, disable_lookups=False):
'''
Templates (possibly recursively) any given data as input. If convert_bare is
set to True, the given data will be wrapped as a jinja2 variable ('{{foo}}')
before being sent through the template engine.
'''
static_vars = [''] if static_vars is None else static_vars
# Don't template unsafe variables, just return them.
if hasattr(variable, '__UNSAFE__'):
return variable
if fail_on_undefined is None:
fail_on_undefined = self._fail_on_undefined_errors
try:
if convert_bare:
variable = self._convert_bare_variable(variable)
if isinstance(variable, string_types):
result = variable
if self.is_possibly_template(variable):
# Check to see if the string we are trying to render is just referencing a single
# var. In this case we don't want to accidentally change the type of the variable
# to a string by using the jinja template renderer. We just want to pass it through.
only_one = self.SINGLE_VAR.match(variable)
if only_one:
var_name = only_one.group(1)
if var_name in self._available_variables:
resolved_val = self._available_variables[var_name]
if isinstance(resolved_val, NON_TEMPLATED_TYPES):
return resolved_val
elif resolved_val is None:
return C.DEFAULT_NULL_REPRESENTATION
# Using a cache in order to prevent template calls with already templated variables
sha1_hash = None
if cache:
variable_hash = sha1(text_type(variable).encode('utf-8'))
options_hash = sha1(
(
text_type(preserve_trailing_newlines) +
text_type(escape_backslashes) +
text_type(fail_on_undefined) +
text_type(overrides)
).encode('utf-8')
)
sha1_hash = variable_hash.hexdigest() + options_hash.hexdigest()
if cache and sha1_hash in self._cached_result:
result = self._cached_result[sha1_hash]
else:
result = self.do_template(
variable,
preserve_trailing_newlines=preserve_trailing_newlines,
escape_backslashes=escape_backslashes,
fail_on_undefined=fail_on_undefined,
overrides=overrides,
disable_lookups=disable_lookups,
)
if not USE_JINJA2_NATIVE:
unsafe = hasattr(result, '__UNSAFE__')
if convert_data and not self._no_type_regex.match(variable):
# if this looks like a dictionary or list, convert it to such using the safe_eval method
if (result.startswith("{") and not result.startswith(self.environment.variable_start_string)) or \
result.startswith("[") or result in ("True", "False"):
eval_results = safe_eval(result, include_exceptions=True)
if eval_results[1] is None:
result = eval_results[0]
if unsafe:
result = wrap_var(result)
else:
# FIXME: if the safe_eval raised an error, should we do something with it?
pass
# we only cache in the case where we have a single variable
# name, to make sure we're not putting things which may otherwise
# be dynamic in the cache (filters, lookups, etc.)
if cache:
self._cached_result[sha1_hash] = result
return result
elif isinstance(variable, (list, tuple)):
return [self.template(
v,
preserve_trailing_newlines=preserve_trailing_newlines,
fail_on_undefined=fail_on_undefined,
overrides=overrides,
disable_lookups=disable_lookups,
) for v in variable]
elif isinstance(variable, (dict, Mapping)):
d = {}
# we don't use iteritems() here to avoid problems if the underlying dict
# changes sizes due to the templating, which can happen with hostvars
for k in variable.keys():
if k not in static_vars:
d[k] = self.template(
variable[k],
preserve_trailing_newlines=preserve_trailing_newlines,
fail_on_undefined=fail_on_undefined,
overrides=overrides,
disable_lookups=disable_lookups,
)
else:
d[k] = variable[k]
return d
else:
return variable
except AnsibleFilterError:
if self._fail_on_filter_errors:
raise
else:
return variable
def is_template(self, data):
'''lets us know if data has a template'''
if isinstance(data, string_types):
return is_template(data, self.environment)
elif isinstance(data, (list, tuple)):
for v in data:
if self.is_template(v):
return True
elif isinstance(data, dict):
for k in data:
if self.is_template(k) or self.is_template(data[k]):
return True
return False
templatable = is_template
def is_possibly_template(self, data):
'''Determines if a string looks like a template, by seeing if it
contains a jinja2 start delimiter. Does not guarantee that the string
is actually a template.
This is different than ``is_template`` which is more strict.
This method may return ``True`` on a string that is not templatable.
Useful when guarding passing a string for templating, but when
you want to allow the templating engine to make the final
assessment which may result in ``TemplateSyntaxError``.
'''
env = self.environment
if isinstance(data, string_types):
for marker in (env.block_start_string, env.variable_start_string, env.comment_start_string):
if marker in data:
return True
return False
def _convert_bare_variable(self, variable):
'''
Wraps a bare string, which may have an attribute portion (ie. foo.bar)
in jinja2 variable braces so that it is evaluated properly.
'''
if isinstance(variable, string_types):
contains_filters = "|" in variable
first_part = variable.split("|")[0].split(".")[0].split("[")[0]
if (contains_filters or first_part in self._available_variables) and self.environment.variable_start_string not in variable:
return "%s%s%s" % (self.environment.variable_start_string, variable, self.environment.variable_end_string)
# the variable didn't meet the conditions to be converted,
# so just return it as-is
return variable
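# Sketch (assumed examples, with 'foo' present in the available variables
# and default delimiters in use):
#   templar._convert_bare_variable('foo.bar')  -> '{{ foo.bar }}'
#   templar._convert_bare_variable('unknown')  -> 'unknown'  (returned as-is)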
def _finalize(self, thing):
'''
A custom finalize method for jinja2, which prevents None from being returned. This
avoids a string of ``"None"`` as ``None`` has no importance in YAML.
If using ANSIBLE_JINJA2_NATIVE we bypass this and return the actual value always
'''
if USE_JINJA2_NATIVE:
return thing
return thing if thing is not None else ''
def _fail_lookup(self, name, *args, **kwargs):
raise AnsibleError("The lookup `%s` was found, however lookups were disabled from templating" % name)
def _now_datetime(self, utc=False, fmt=None):
'''jinja2 global function to return current datetime, potentially formatted via strftime'''
if utc:
now = datetime.datetime.utcnow()
else:
now = datetime.datetime.now()
if fmt:
return now.strftime(fmt)
return now
def _query_lookup(self, name, *args, **kwargs):
''' wrapper for lookup, force wantlist true'''
kwargs['wantlist'] = True
return self._lookup(name, *args, **kwargs)
def _lookup(self, name, *args, **kwargs):
instance = self._lookup_loader.get(name.lower(), loader=self._loader, templar=self)
if instance is not None:
wantlist = kwargs.pop('wantlist', False)
allow_unsafe = kwargs.pop('allow_unsafe', C.DEFAULT_ALLOW_UNSAFE_LOOKUPS)
errors = kwargs.pop('errors', 'strict')
from ansible.utils.listify import listify_lookup_plugin_terms
loop_terms = listify_lookup_plugin_terms(terms=args, templar=self, loader=self._loader, fail_on_undefined=True, convert_bare=False)
# safely catch run failures per #5059
try:
ran = instance.run(loop_terms, variables=self._available_variables, **kwargs)
except (AnsibleUndefinedVariable, UndefinedError) as e:
raise AnsibleUndefinedVariable(e)
except Exception as e:
if self._fail_on_lookup_errors:
msg = u"An unhandled exception occurred while running the lookup plugin '%s'. Error was a %s, original message: %s" % \
(name, type(e), to_text(e))
if errors == 'warn':
display.warning(msg)
elif errors == 'ignore':
display.display(msg, log_only=True)
else:
raise AnsibleError(to_native(msg))
ran = [] if wantlist else None
if ran and not allow_unsafe:
if wantlist:
ran = wrap_var(ran)
else:
try:
ran = wrap_var(",".join(ran))
except TypeError:
# Lookup Plugins should always return lists. Throw an error if that's not
# the case:
if not isinstance(ran, Sequence):
raise AnsibleError("The lookup plugin '%s' did not return a list."
% name)
# The TypeError we can recover from is when the value *inside* of the list
# is not a string
if len(ran) == 1:
ran = wrap_var(ran[0])
else:
ran = wrap_var(ran)
if self.cur_context:
self.cur_context.unsafe = True
return ran
else:
raise AnsibleError("lookup plugin (%s) not found" % name)
def do_template(self, data, preserve_trailing_newlines=True, escape_backslashes=True, fail_on_undefined=None, overrides=None, disable_lookups=False):
if USE_JINJA2_NATIVE and not isinstance(data, string_types):
return data
# For preserving the number of input newlines in the output (used
# later in this method)
data_newlines = _count_newlines_from_end(data)
if fail_on_undefined is None:
fail_on_undefined = self._fail_on_undefined_errors
try:
# allows template header overrides to change jinja2 options.
if overrides is None:
myenv = self.environment.overlay()
else:
myenv = self.environment.overlay(overrides)
# Get jinja env overrides from template
if hasattr(data, 'startswith') and data.startswith(JINJA2_OVERRIDE):
eol = data.find('\n')
line = data[len(JINJA2_OVERRIDE):eol]
data = data[eol + 1:]
for pair in line.split(','):
(key, val) = pair.split(':')
key = key.strip()
setattr(myenv, key, ast.literal_eval(val.strip()))
# Adds Ansible custom filters and tests
myenv.filters.update(self._get_filters())
myenv.tests.update(self._get_tests())
if escape_backslashes:
# Allow users to specify backslashes in playbooks as "\\" instead of as "\\\\".
data = _escape_backslashes(data, myenv)
try:
t = myenv.from_string(data)
except TemplateSyntaxError as e:
raise AnsibleError("template error while templating string: %s. String: %s" % (to_native(e), to_native(data)))
except Exception as e:
if 'recursion' in to_native(e):
raise AnsibleError("recursive loop detected in template string: %s" % to_native(data))
else:
return data
# jinja2 globals are inconsistent across versions, this normalizes them
t.globals['dict'] = dict
if disable_lookups:
t.globals['query'] = t.globals['q'] = t.globals['lookup'] = self._fail_lookup
else:
t.globals['lookup'] = self._lookup
t.globals['query'] = t.globals['q'] = self._query_lookup
t.globals['now'] = self._now_datetime
t.globals['finalize'] = self._finalize
jvars = AnsibleJ2Vars(self, t.globals)
self.cur_context = new_context = t.new_context(jvars, shared=True)
rf = t.root_render_func(new_context)
try:
res = j2_concat(rf)
if getattr(new_context, 'unsafe', False):
res = wrap_var(res)
except TypeError as te:
if 'AnsibleUndefined' in to_native(te):
errmsg = "Unable to look up a name or access an attribute in template string (%s).\n" % to_native(data)
errmsg += "Make sure your variable name does not contain invalid characters like '-': %s" % to_native(te)
raise AnsibleUndefinedVariable(errmsg)
else:
display.debug("failing because of a type error, template data is: %s" % to_text(data))
raise AnsibleError("Unexpected templating type error occurred on (%s): %s" % (to_native(data), to_native(te)))
if USE_JINJA2_NATIVE and not isinstance(res, string_types):
return res
if preserve_trailing_newlines:
# The low level calls above do not preserve the newline
# characters at the end of the input data, so we
# calculate the difference in newlines and append them
# to the resulting output for parity
#
# jinja2 added a keep_trailing_newline option in 2.7 when
# creating an Environment. That would let us make this code
# better (remove a single newline if
# preserve_trailing_newlines is False). Once we can depend on
# that version being present, modify our code to set that when
# initializing self.environment and remove a single trailing
# newline here if preserve_trailing_newlines is False.
res_newlines = _count_newlines_from_end(res)
if data_newlines > res_newlines:
res += self.environment.newline_sequence * (data_newlines - res_newlines)
return res
except (UndefinedError, AnsibleUndefinedVariable) as e:
if fail_on_undefined:
raise AnsibleUndefinedVariable(e)
else:
display.debug("Ignoring undefined failure: %s" % to_text(e))
return data
# for backwards compatibility in case anyone is using old private method directly
_do_template = do_template
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,722 |
Difference in how we handle templating vault variable keys in v2.8 and v2.9
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
I can see a difference in how templating is handled between 2.9 and 2.8. When the same playbook is run on 2.9.2, it does not show the value of "testvar" in the following result, which we do see on 2.8.4:
```
ok: [localhost] => {
"big_test_list1": [
{
"key": "test2_{{ testvar }}_password",
"value": "test"
}
]
}
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
**Ansible** : 2.8.x
**Ansible** : 2.9.x
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
RHEL 8
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
Playbook.yml on 2.9.2
- name: TEST
gather_facts: False
hosts: localhost
vars:
testvar: "vars"
# vault password is: test
testdict2:
"test2_{{ testvar }}_password": !vault |
$ANSIBLE_VAULT;1.2;AES256;test
38636163353333363936333133633334646465393136613037626530663664326665613835326466
6130633637383730613462353037613139323232373762630a616237356230333163666632303262
32313232616539333664633364636562626539393439333539316236623161386163386533613063
6466396661323462620a396565646637616264316338636333343738653831383137643463653830
6433
tasks:
- debug:
var: testdict2
- set_fact:
big_test_list1: "{{ big_test_list1 | default([]) + item }}"
loop:
- "{{ testdict2 | dict2items }}"
- debug:
var: big_test_list1
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Playbook should show results as follows
```
**On 2.8.4**
Vault password:
PLAY [TEST] *******************************************************************************************************************************************************************************************************
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"testdict2": {
"test2_{{ testvar }}_password": "test"
}
}
TASK [set_fact] ***************************************************************************************************************************************************************************************************
ok: [localhost] => (item=[{'key': 'test2_vars_password', 'value': 'test'}])
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"big_test_list1": [
{
"key": "test2_vars_password",
"value": "test"
}
]
}
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
**On 2.9.2**
Vault password:
PLAY [TEST] *******************************************************************************************************************************************************************************************************
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"testdict2": {
"test2_{{ testvar }}_password": "test"
}
}
TASK [set_fact] ***************************************************************************************************************************************************************************************************
ok: [localhost] => (item=[{'key': u'test2_{{ testvar }}_password', 'value': u'test'}])
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"big_test_list1": [
{
"key": "test2_{{ testvar }}_password",
"value": "test"
}
]
}
```
|
https://github.com/ansible/ansible/issues/65722
|
https://github.com/ansible/ansible/pull/65918
|
f9e315671a61d3fae93ef816c9d5793c9fc2267c
|
f8654de851a841d33d2d2e76942c68a758535e56
| 2019-12-11T10:36:53Z |
python
| 2020-01-07T14:41:37Z |
lib/ansible/utils/unsafe_proxy.py
|
# PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2
# --------------------------------------------
#
# 1. This LICENSE AGREEMENT is between the Python Software Foundation
# ("PSF"), and the Individual or Organization ("Licensee") accessing and
# otherwise using this software ("Python") in source or binary form and
# its associated documentation.
#
# 2. Subject to the terms and conditions of this License Agreement, PSF hereby
# grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce,
# analyze, test, perform and/or display publicly, prepare derivative works,
# distribute, and otherwise use Python alone or in any derivative version,
# provided, however, that PSF's License Agreement and PSF's notice of copyright,
# i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010,
# 2011, 2012, 2013, 2014 Python Software Foundation; All Rights Reserved" are
# retained in Python alone or in any derivative version prepared by Licensee.
#
# 3. In the event Licensee prepares a derivative work that is based on
# or incorporates Python or any part thereof, and wants to make
# the derivative work available to others as provided herein, then
# Licensee hereby agrees to include in any such work a brief summary of
# the changes made to Python.
#
# 4. PSF is making Python available to Licensee on an "AS IS"
# basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
# IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND
# DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
# FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT
# INFRINGE ANY THIRD PARTY RIGHTS.
#
# 5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON
# FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS
# A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON,
# OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
#
# 6. This License Agreement will automatically terminate upon a material
# breach of its terms and conditions.
#
# 7. Nothing in this License Agreement shall be deemed to create any
# relationship of agency, partnership, or joint venture between PSF and
# Licensee. This License Agreement does not grant permission to use PSF
# trademarks or trade name in a trademark sense to endorse or promote
# products or services of Licensee, or any third party.
#
# 8. By copying, installing or otherwise using Python, Licensee
# agrees to be bound by the terms and conditions of this License
# Agreement.
#
# Original Python Recipe for Proxy:
# http://code.activestate.com/recipes/496741-object-proxying/
# Author: Tomer Filiba
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from ansible.module_utils._text import to_bytes, to_text
from ansible.module_utils.common._collections_compat import Mapping, MutableSequence, Set
from ansible.module_utils.six import string_types, binary_type, text_type
__all__ = ['AnsibleUnsafe', 'wrap_var']
class AnsibleUnsafe(object):
__UNSAFE__ = True
class AnsibleUnsafeBytes(binary_type, AnsibleUnsafe):
def decode(self, *args, **kwargs):
"""Wrapper method to ensure type conversions maintain unsafe context"""
return AnsibleUnsafeText(super(AnsibleUnsafeBytes, self).decode(*args, **kwargs))
class AnsibleUnsafeText(text_type, AnsibleUnsafe):
def encode(self, *args, **kwargs):
"""Wrapper method to ensure type conversions maintain unsafe context"""
return AnsibleUnsafeBytes(super(AnsibleUnsafeText, self).encode(*args, **kwargs))
class UnsafeProxy(object):
def __new__(cls, obj, *args, **kwargs):
from ansible.utils.display import Display
Display().deprecated(
'UnsafeProxy is being deprecated. Use wrap_var or AnsibleUnsafeBytes/AnsibleUnsafeText directly instead',
version='2.13'
)
# In our usage we should only receive unicode strings.
# This conditional and conversion exist to sanity check the values
# we're given, but we may want to take it out for testing and sanitize
# our input instead.
if isinstance(obj, AnsibleUnsafe):
return obj
if isinstance(obj, string_types):
obj = AnsibleUnsafeText(to_text(obj, errors='surrogate_or_strict'))
return obj
def _wrap_dict(v):
for k in v.keys():
if v[k] is not None:
v[wrap_var(k)] = wrap_var(v[k])
return v
def _wrap_list(v):
for idx, item in enumerate(v):
if item is not None:
v[idx] = wrap_var(item)
return v
def _wrap_set(v):
return set(item if item is None else wrap_var(item) for item in v)
def wrap_var(v):
if v is None or isinstance(v, AnsibleUnsafe):
return v
if isinstance(v, Mapping):
v = _wrap_dict(v)
elif isinstance(v, MutableSequence):
v = _wrap_list(v)
elif isinstance(v, Set):
v = _wrap_set(v)
elif isinstance(v, binary_type):
v = AnsibleUnsafeBytes(v)
elif isinstance(v, text_type):
v = AnsibleUnsafeText(v)
return v
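# Sketch (assumed examples): wrapping recurses into containers and tags
# every string so the templar will not render it again:
#
#   wrap_var({'k': 'v'})['k'].__UNSAFE__  -> True
#   wrap_var(None) is None                -> True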
def to_unsafe_bytes(*args, **kwargs):
return wrap_var(to_bytes(*args, **kwargs))
def to_unsafe_text(*args, **kwargs):
return wrap_var(to_text(*args, **kwargs))
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,722 |
Difference in how we handle templating vault variable keys in v2.8 and v2.9
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
I can see a difference in how templating is handled between 2.9 and 2.8. When the same playbook is run on 2.9.2, it does not show the value of "testvar" in the following result, which we do see on 2.8.4:
```
ok: [localhost] => {
"big_test_list1": [
{
"key": "test2_{{ testvar }}_password",
"value": "test"
}
]
}
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
**Ansible** : 2.8.x
**Ansible** : 2.9.x
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
RHEL 8
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
Playbook.yml on 2.9.2
- name: TEST
gather_facts: False
hosts: localhost
vars:
testvar: "vars"
# vault password is: test
testdict2:
"test2_{{ testvar }}_password": !vault |
$ANSIBLE_VAULT;1.2;AES256;test
38636163353333363936333133633334646465393136613037626530663664326665613835326466
6130633637383730613462353037613139323232373762630a616237356230333163666632303262
32313232616539333664633364636562626539393439333539316236623161386163386533613063
6466396661323462620a396565646637616264316338636333343738653831383137643463653830
6433
tasks:
- debug:
var: testdict2
- set_fact:
big_test_list1: "{{ big_test_list1 | default([]) + item }}"
loop:
- "{{ testdict2 | dict2items }}"
- debug:
var: big_test_list1
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Playbook should show results as follows
```
**On 2.8.4**
Vault password:
PLAY [TEST] *******************************************************************************************************************************************************************************************************
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"testdict2": {
"test2_{{ testvar }}_password": "test"
}
}
TASK [set_fact] ***************************************************************************************************************************************************************************************************
ok: [localhost] => (item=[{'key': 'test2_vars_password', 'value': 'test'}])
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"big_test_list1": [
{
"key": "test2_vars_password",
"value": "test"
}
]
}
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
**On 2.9.2**
Vault password:
PLAY [TEST] *******************************************************************************************************************************************************************************************************
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"testdict2": {
"test2_{{ testvar }}_password": "test"
}
}
TASK [set_fact] ***************************************************************************************************************************************************************************************************
ok: [localhost] => (item=[{'key': u'test2_{{ testvar }}_password', 'value': u'test'}])
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"big_test_list1": [
{
"key": "test2_{{ testvar }}_password",
"value": "test"
}
]
}
```
|
https://github.com/ansible/ansible/issues/65722
|
https://github.com/ansible/ansible/pull/65918
|
f9e315671a61d3fae93ef816c9d5793c9fc2267c
|
f8654de851a841d33d2d2e76942c68a758535e56
| 2019-12-11T10:36:53Z |
python
| 2020-01-07T14:41:37Z |
test/units/utils/test_unsafe_proxy.py
|
# -*- coding: utf-8 -*-
# (c) 2018 Matt Martz <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from ansible.module_utils.six import PY3
from ansible.utils.unsafe_proxy import AnsibleUnsafe, AnsibleUnsafeBytes, AnsibleUnsafeText, wrap_var
def test_wrap_var_text():
assert isinstance(wrap_var(u'foo'), AnsibleUnsafeText)
def test_wrap_var_bytes():
assert isinstance(wrap_var(b'foo'), AnsibleUnsafeBytes)
def test_wrap_var_string():
if PY3:
assert isinstance(wrap_var('foo'), AnsibleUnsafeText)
else:
assert isinstance(wrap_var('foo'), AnsibleUnsafeBytes)
def test_wrap_var_dict():
assert isinstance(wrap_var(dict(foo='bar')), dict)
assert not isinstance(wrap_var(dict(foo='bar')), AnsibleUnsafe)
assert isinstance(wrap_var(dict(foo=u'bar'))['foo'], AnsibleUnsafeText)
def test_wrap_var_dict_None():
assert wrap_var(dict(foo=None))['foo'] is None
assert not isinstance(wrap_var(dict(foo=None))['foo'], AnsibleUnsafe)
def test_wrap_var_list():
assert isinstance(wrap_var(['foo']), list)
assert not isinstance(wrap_var(['foo']), AnsibleUnsafe)
assert isinstance(wrap_var([u'foo'])[0], AnsibleUnsafeText)
def test_wrap_var_list_None():
assert wrap_var([None])[0] is None
assert not isinstance(wrap_var([None])[0], AnsibleUnsafe)
def test_wrap_var_set():
assert isinstance(wrap_var(set(['foo'])), set)
assert not isinstance(wrap_var(set(['foo'])), AnsibleUnsafe)
for item in wrap_var(set([u'foo'])):
assert isinstance(item, AnsibleUnsafeText)
def test_wrap_var_set_None():
for item in wrap_var(set([None])):
assert item is None
assert not isinstance(item, AnsibleUnsafe)
def test_wrap_var_tuple():
assert isinstance(wrap_var(('foo',)), tuple)
assert not isinstance(wrap_var(('foo',)), AnsibleUnsafe)
assert isinstance(wrap_var(('foo',))[0], type(''))
assert not isinstance(wrap_var(('foo',))[0], AnsibleUnsafe)
def test_wrap_var_None():
assert wrap_var(None) is None
assert not isinstance(wrap_var(None), AnsibleUnsafe)
def test_wrap_var_unsafe_text():
assert isinstance(wrap_var(AnsibleUnsafeText(u'foo')), AnsibleUnsafeText)
def test_wrap_var_unsafe_bytes():
assert isinstance(wrap_var(AnsibleUnsafeBytes(b'foo')), AnsibleUnsafeBytes)
def test_AnsibleUnsafeText():
assert isinstance(AnsibleUnsafeText(u'foo'), AnsibleUnsafe)
def test_AnsibleUnsafeBytes():
assert isinstance(AnsibleUnsafeBytes(b'foo'), AnsibleUnsafe)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,505 |
Wrong example in loop over dict
|
##### SUMMARY
It seems that the example given for Iterating over a dictionary in https://docs.ansible.com/ansible/latest/user_guide/playbooks_loops.html#iterating-over-a-dictionary is wrong. "tags" cannot be used as a variable name.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
dict2items filter?
##### ANSIBLE VERSION
```
ansible 2.9.1
config file = None
configured module search path = ['/Users/lochou/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.5 (default, Nov 1 2019, 02:16:23) [Clang 11.0.0 (clang-1100.0.33.8)]
```
##### CONFIGURATION
```
[empty]
```
##### OS / ENVIRONMENT
macOS 10.15.1
##### ADDITIONAL INFORMATION
With the example that is given:
```
- hosts: localhost
connection: local
tasks:
- name: create a tag dictionary of non-empty tags
set_fact:
tags_dict: "{{ (tags_dict|default({}))|combine({item.key: item.value}) }}"
loop: "{{ tags |dict2items }}"
vars:
tags:
Environment: dev
Application: payment
Another: "{{ doesnotexist|default() }}"
when: item.value != ""
- debug:
var: tags_dict
```
I just get the following error:
```
TASK [create a tag dictionary of non-empty tags] ***************************************************************************************************************
fatal: [localhost]: FAILED! => {"msg": "dict2items requires a dictionary, got <class 'ansible.template.AnsibleUndefined'> instead."}
```
But everything runs fine when replacing "tags" with any other variable name, such as foo. (Tested also with Ansible 2.8.7.)
|
https://github.com/ansible/ansible/issues/65505
|
https://github.com/ansible/ansible/pull/66235
|
f8654de851a841d33d2d2e76942c68a758535e56
|
469f104ec27e99ea26d42685ba027fe03571fac9
| 2019-12-04T12:17:58Z |
python
| 2020-01-07T15:32:46Z |
changelogs/fragments/dict2items.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,505 |
Wrong example in loop over dict
|
##### SUMMARY
It seems that the example given for Iterating over a dictionary in https://docs.ansible.com/ansible/latest/user_guide/playbooks_loops.html#iterating-over-a-dictionary is wrong. "tags" cannot be used as a variable name.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
dict2items filter?
##### ANSIBLE VERSION
```
ansible 2.9.1
config file = None
configured module search path = ['/Users/lochou/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.5 (default, Nov 1 2019, 02:16:23) [Clang 11.0.0 (clang-1100.0.33.8)]
```
##### CONFIGURATION
```
[empty]
```
##### OS / ENVIRONMENT
macOS 10.15.1
##### ADDITIONAL INFORMATION
With the example that is given:
```
- hosts: localhost
connection: local
tasks:
- name: create a tag dictionary of non-empty tags
set_fact:
tags_dict: "{{ (tags_dict|default({}))|combine({item.key: item.value}) }}"
loop: "{{ tags |dict2items }}"
vars:
tags:
Environment: dev
Application: payment
Another: "{{ doesnotexist|default() }}"
when: item.value != ""
- debug:
var: tags_dict
```
I just get the following error:
```
TASK [create a tag dictionary of non-empty tags] ***************************************************************************************************************
fatal: [localhost]: FAILED! => {"msg": "dict2items requires a dictionary, got <class 'ansible.template.AnsibleUndefined'> instead."}
```
But everything runs fine when replacing "tags" with any other variable name, such as foo. (Tested also with Ansible 2.8.7.)
|
https://github.com/ansible/ansible/issues/65505
|
https://github.com/ansible/ansible/pull/66235
|
f8654de851a841d33d2d2e76942c68a758535e56
|
469f104ec27e99ea26d42685ba027fe03571fac9
| 2019-12-04T12:17:58Z |
python
| 2020-01-07T15:32:46Z |
docs/docsite/rst/user_guide/playbooks_loops.rst
|
.. _playbooks_loops:
*****
Loops
*****
Sometimes you want to repeat a task multiple times. In computer programming, this is called a loop. Common Ansible loops include changing ownership on several files and/or directories with the :ref:`file module <file_module>`, creating multiple users with the :ref:`user module <user_module>`, and
repeating a polling step until a certain result is reached. Ansible offers two keywords for creating loops: ``loop`` and ``with_<lookup>``.
.. note::
* We added ``loop`` in Ansible 2.5. It is not yet a full replacement for ``with_<lookup>``, but we recommend it for most use cases.
* We have not deprecated the use of ``with_<lookup>`` - that syntax will still be valid for the foreseeable future.
* We are looking to improve ``loop`` syntax - watch this page and the `changelog <https://github.com/ansible/ansible/tree/devel/changelogs>`_ for updates.
.. contents::
:local:
Comparing ``loop`` and ``with_*``
=================================
* The ``with_<lookup>`` keywords rely on :ref:`lookup_plugins` - even ``items`` is a lookup.
* The ``loop`` keyword is equivalent to ``with_list``, and is the best choice for simple loops.
* The ``loop`` keyword will not accept a string as input, see :ref:`query_vs_lookup`.
* Generally speaking, any use of ``with_*`` covered in :ref:`migrating_to_loop` can be updated to use ``loop``.
* Be careful when changing ``with_items`` to ``loop``, as ``with_items`` performed implicit single-level flattening. You may need to use ``flatten(1)`` with ``loop`` to match the exact outcome. For example, to get the same output as:
.. code-block:: yaml
with_items:
- 1
- [2,3]
- 4
you would need::
loop: "{{ [1, [2,3] ,4] | flatten(1) }}"
* Any ``with_*`` statement that requires using ``lookup`` within a loop should not be converted to use the ``loop`` keyword. For example, instead of doing:
.. code-block:: yaml
loop: "{{ lookup('fileglob', '*.txt', wantlist=True) }}"
it's cleaner to keep::
with_fileglob: '*.txt'
.. _standard_loops:
Standard loops
==============
Iterating over a simple list
----------------------------
Repeated tasks can be written as standard loops over a simple list of strings. You can define the list directly in the task::
- name: add several users
user:
name: "{{ item }}"
state: present
groups: "wheel"
loop:
- testuser1
- testuser2
You can define the list in a variables file, or in the 'vars' section of your play, then refer to the name of the list in the task::
loop: "{{ somelist }}"
Either of these examples would be the equivalent of::
- name: add user testuser1
user:
name: "testuser1"
state: present
groups: "wheel"
- name: add user testuser2
user:
name: "testuser2"
state: present
groups: "wheel"
You can pass a list directly to a parameter for some plugins. Most of the packaging modules, like :ref:`yum <yum_module>` and :ref:`apt <apt_module>`, have this capability. When available, passing the list to a parameter is better than looping over the task. For example::
- name: optimal yum
yum:
name: "{{ list_of_packages }}"
state: present
- name: non-optimal yum, slower and may cause issues with interdependencies
yum:
name: "{{ item }}"
state: present
loop: "{{ list_of_packages }}"
Check the :ref:`module documentation <modules_by_category>` to see if you can pass a list to any particular module's parameter(s).
Iterating over a list of hashes
-------------------------------
If you have a list of hashes, you can reference subkeys in a loop. For example::
- name: add several users
user:
name: "{{ item.name }}"
state: present
groups: "{{ item.groups }}"
loop:
- { name: 'testuser1', groups: 'wheel' }
- { name: 'testuser2', groups: 'root' }
When combining :ref:`conditionals <playbooks_conditionals>` with a loop, the ``when:`` statement is processed separately for each item.
See :ref:`the_when_statement` for examples.
Iterating over a dictionary
---------------------------
To loop over a dict, use the :ref:`dict2items <dict_filter>` filter:
.. code-block:: yaml
- name: create a tag dictionary of non-empty tags
set_fact:
tags_dict: "{{ (tags_dict|default({}))|combine({item.key: item.value}) }}"
loop: "{{ tags|dict2items }}"
vars:
tags:
Environment: dev
Application: payment
Another: "{{ doesnotexist|default() }}"
when: item.value != ""
Here, we don't want to set empty tags, so we create a dictionary containing only non-empty tags.
Registering variables with a loop
=================================
You can register the output of a loop as a variable. For example::
- shell: "echo {{ item }}"
loop:
- "one"
- "two"
register: echo
When you use ``register`` with a loop, the data structure placed in the variable will contain a ``results`` attribute that is a list of all responses from the module. This differs from the data structure returned when using ``register`` without a loop::
{
"changed": true,
"msg": "All items completed",
"results": [
{
"changed": true,
"cmd": "echo \"one\" ",
"delta": "0:00:00.003110",
"end": "2013-12-19 12:00:05.187153",
"invocation": {
"module_args": "echo \"one\"",
"module_name": "shell"
},
"item": "one",
"rc": 0,
"start": "2013-12-19 12:00:05.184043",
"stderr": "",
"stdout": "one"
},
{
"changed": true,
"cmd": "echo \"two\" ",
"delta": "0:00:00.002920",
"end": "2013-12-19 12:00:05.245502",
"invocation": {
"module_args": "echo \"two\"",
"module_name": "shell"
},
"item": "two",
"rc": 0,
"start": "2013-12-19 12:00:05.242582",
"stderr": "",
"stdout": "two"
}
]
}
Subsequent loops over the registered variable to inspect the results may look like::
- name: Fail if return code is not 0
fail:
msg: "The command ({{ item.cmd }}) did not have a 0 return code"
when: item.rc != 0
loop: "{{ echo.results }}"
During iteration, the result of the current item will be placed in the variable::
- shell: echo "{{ item }}"
loop:
- one
- two
register: echo
changed_when: echo.stdout != "one"
.. _complex_loops:
Complex loops
=============
Iterating over nested lists
---------------------------
You can use Jinja2 expressions to iterate over complex lists. For example, a loop can combine nested lists::
- name: give users access to multiple databases
mysql_user:
name: "{{ item[0] }}"
priv: "{{ item[1] }}.*:ALL"
append_privs: yes
password: "foo"
loop: "{{ ['alice', 'bob'] |product(['clientdb', 'employeedb', 'providerdb'])|list }}"
.. _do_until_loops:
Retrying a task until a condition is met
----------------------------------------
.. versionadded:: 1.4
You can use the ``until`` keyword to retry a task until a certain condition is met. Here's an example::
- shell: /usr/bin/foo
register: result
until: result.stdout.find("all systems go") != -1
retries: 5
delay: 10
This task runs up to 5 times with a delay of 10 seconds between each attempt. If the result of any attempt has "all systems go" in its stdout, the task succeeds. The default value for "retries" is 3 and "delay" is 5.
To see the results of individual retries, run the play with ``-vv``.
When you run a task with ``until`` and register the result as a variable, the registered variable will include a key called "attempts", which records the number of the retries for the task.
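For example, a minimal sketch of inspecting that key (assuming the task above registered ``result``)::

    - debug:
        msg: "Succeeded after {{ result.attempts }} attempts"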
.. note:: You must set the ``until`` parameter if you want a task to retry. If ``until`` is not defined, the value for the ``retries`` parameter is forced to 1.
Looping over inventory
----------------------
To loop over your inventory, or just a subset of it, you can use a regular ``loop`` with the ``ansible_play_batch`` or ``groups`` variables::
# show all the hosts in the inventory
- debug:
msg: "{{ item }}"
loop: "{{ groups['all'] }}"
# show all the hosts in the current play
- debug:
msg: "{{ item }}"
loop: "{{ ansible_play_batch }}"
There is also a specific lookup plugin ``inventory_hostnames`` that can be used like this::
# show all the hosts in the inventory
- debug:
msg: "{{ item }}"
loop: "{{ query('inventory_hostnames', 'all') }}"
# show all the hosts matching the pattern, i.e. all but the group www
- debug:
msg: "{{ item }}"
loop: "{{ query('inventory_hostnames', 'all:!www') }}"
More information on the patterns can be found in :ref:`intro_patterns`.
.. _query_vs_lookup:
Ensuring list input for ``loop``: ``query`` vs. ``lookup``
==========================================================
The ``loop`` keyword requires a list as input, but the ``lookup`` keyword returns a string of comma-separated values by default. Ansible 2.5 introduced a new Jinja2 function named :ref:`query <query>` that always returns a list, offering a simpler interface and more predictable output from lookup plugins when using the ``loop`` keyword.
You can force ``lookup`` to return a list to ``loop`` by using ``wantlist=True``, or you can use ``query`` instead.
These examples do the same thing::
loop: "{{ query('inventory_hostnames', 'all') }}"
loop: "{{ lookup('inventory_hostnames', 'all', wantlist=True) }}"
.. _loop_control:
Adding controls to loops
========================
.. versionadded:: 2.1
The ``loop_control`` keyword lets you manage your loops in useful ways.
Limiting loop output with ``label``
-----------------------------------
.. versionadded:: 2.2
When looping over complex data structures, the console output of your task can be enormous. To limit the displayed output, use the ``label`` directive with ``loop_control``::
- name: create servers
digital_ocean:
name: "{{ item.name }}"
state: present
loop:
- name: server1
disks: 3gb
ram: 15Gb
network:
nic01: 100Gb
nic02: 10Gb
...
loop_control:
label: "{{ item.name }}"
The output of this task will display just the ``name`` field for each ``item`` instead of the entire contents of the multi-line ``{{ item }}`` variable.
.. note:: This is for making console output more readable, not protecting sensitive data. If there is sensitive data in ``loop``, set ``no_log: yes`` on the task to prevent disclosure.
Pausing within a loop
---------------------
.. versionadded:: 2.2
To control the time (in seconds) between the execution of each item in a task loop, use the ``pause`` directive with ``loop_control``::
# main.yml
- name: create servers, pause 3s before creating next
digital_ocean:
name: "{{ item }}"
state: present
loop:
- server1
- server2
loop_control:
pause: 3
Tracking progress through a loop with ``index_var``
---------------------------------------------------
.. versionadded:: 2.5
To keep track of where you are in a loop, use the ``index_var`` directive with ``loop_control``. This directive specifies a variable name to contain the current loop index::
- name: count our fruit
debug:
msg: "{{ item }} with index {{ my_idx }}"
loop:
- apple
- banana
- pear
loop_control:
index_var: my_idx
Defining inner and outer variable names with ``loop_var``
---------------------------------------------------------
.. versionadded:: 2.1
You can nest two looping tasks using ``include_tasks``. However, by default Ansible sets the loop variable ``item`` for each loop. This means the inner, nested loop will overwrite the value of ``item`` from the outer loop.
You can specify the name of the variable for each loop using ``loop_var`` with ``loop_control``::
# main.yml
- include_tasks: inner.yml
loop:
- 1
- 2
- 3
loop_control:
loop_var: outer_item
# inner.yml
- debug:
msg: "outer item={{ outer_item }} inner item={{ item }}"
loop:
- a
- b
- c
.. note:: If Ansible detects that the current loop is using a variable which has already been defined, it will raise an error to fail the task.
Extended loop variables
-----------------------
.. versionadded:: 2.8
As of Ansible 2.8 you can get extended loop information using the ``extended`` option to loop control. This option will expose the following information.
========================== ===========
Variable Description
-------------------------- -----------
``ansible_loop.allitems`` The list of all items in the loop
``ansible_loop.index`` The current iteration of the loop. (1 indexed)
``ansible_loop.index0`` The current iteration of the loop. (0 indexed)
``ansible_loop.revindex`` The number of iterations from the end of the loop (1 indexed)
``ansible_loop.revindex0`` The number of iterations from the end of the loop (0 indexed)
``ansible_loop.first`` ``True`` if first iteration
``ansible_loop.last`` ``True`` if last iteration
``ansible_loop.length`` The number of items in the loop
``ansible_loop.previtem`` The item from the previous iteration of the loop. Undefined during the first iteration.
``ansible_loop.nextitem`` The item from the following iteration of the loop. Undefined during the last iteration.
========================== ===========
::
loop_control:
extended: yes
Accessing the name of your loop_var
-----------------------------------
.. versionadded:: 2.8
As of Ansible 2.8 you can get the name of the value provided to ``loop_control.loop_var`` using the ``ansible_loop_var`` variable.
For role authors writing roles that allow loops: instead of dictating the required ``loop_var`` value, you can gather the value via::
"{{ lookup('vars', ansible_loop_var) }}"
.. _migrating_to_loop:
Migrating from with_X to loop
=============================
.. include:: shared_snippets/with2loop.txt
.. seealso::
:ref:`about_playbooks`
An introduction to playbooks
:ref:`playbooks_reuse_roles`
Playbook organization by roles
:ref:`playbooks_best_practices`
Best practices in playbooks
:ref:`playbooks_conditionals`
Conditional statements in playbooks
:ref:`playbooks_variables`
All about variables
`User Mailing List <https://groups.google.com/group/ansible-devel>`_
Have a question? Stop by the google group!
`irc.freenode.net <http://irc.freenode.net>`_
#ansible IRC chat channel
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,933 |
Unclear how to avoid entering --api-key on CLI when publishing Ansible Galaxy Collections
|
##### SUMMARY
As I begin publishing multiple collections, I'm finding it annoying to have to specify `--api-key` on the command line every time I run:
ansible-galaxy collection publish ./geerlingguy-collection-1.2.3.tar.gz --api-key=[key goes here]
This not only requires me to look up the key in my password manager or on Galaxy, it also means the key is entered in plain text in a command that is then stored in my `history` file, unless I wipe it out.
Therefore, I'd like to specify the token via environment variable, or some other file that can be included. Ideally an environment variable, so I don't have to set up a configuration file for Ansible Galaxy (which is already the default server).
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
galaxy
##### ANSIBLE VERSION
```paste below
ansible 2.9.1
config file = /etc/ansible/ansible.cfg
configured module search path = ['/Users/jgeerling/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.5 (default, Nov 3 2019, 23:06:46) [Clang 11.0.0 (clang-1100.0.33.12)]
```
##### CONFIGURATION
```paste below
ANSIBLE_NOCOWS(/etc/ansible/ansible.cfg) = True
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
ANSIBLE_SSH_CONTROL_PATH(/etc/ansible/ansible.cfg) = /tmp/ansible-ssh-%%h-%%p-%%r
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 20
DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = ['/etc/ansible/hosts']
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = ['/Users/jgeerling/Dropbox/VMs/roles']
DEFAULT_STDOUT_CALLBACK(/etc/ansible/ansible.cfg) = yaml
INTERPRETER_PYTHON(env: ANSIBLE_PYTHON_INTERPRETER) = /usr/local/bin/python3
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
N/A
##### ADDITIONAL INFORMATION
There are two relevant documentation pages:
- [Galaxy Developer Guide](https://docs.ansible.com/ansible/latest/galaxy/dev_guide.html)
- [Using Collections](https://docs.ansible.com/ansible/latest/user_guide/collections_using.html#configuring-the-ansible-galaxy-client)
It seems that both refer to a `token` with regard to Galaxy / Automation Hub, but only the collections guide mentions `--api-key`, and only relating to CLI usage. It seems that the `--api-key` is the same as the `token`, so the verbiage can be quite confusing (maybe the CLI needs to be updated to allow `--token` if they are indeed the same thing?).
It also looks like, according to the documentation, I might be able to set:
ANSIBLE_GALAXY_SERVER_RELEASE_GALAXY_TOKEN=[my token / api key]
But it also looks like that might be a magic variable that is only used if I add an entry for a galaxy server like `[galaxy_server.release_galaxy]` in my `ansible.cfg` file.
|
https://github.com/ansible/ansible/issues/65933
|
https://github.com/ansible/ansible/pull/65961
|
e19b94f43b62c011a07ba65db276ed54fb3f1082
|
0ca79a4234fa6cd21dc52a833d7978c0e32cf4ed
| 2019-12-17T23:53:44Z |
python
| 2020-01-07T19:57:47Z |
docs/docsite/rst/dev_guide/developing_collections.rst
|
.. _developing_collections:
**********************
Developing collections
**********************
Collections are a distribution format for Ansible content. You can use collections to package and distribute playbooks, roles, modules, and plugins.
You can publish and use collections through `Ansible Galaxy <https://galaxy.ansible.com>`_.
.. contents::
:local:
:depth: 2
.. _collection_structure:
Collection structure
====================
Collections follow a simple data structure. None of the directories are required unless you have specific content that belongs in one of them. A collection does require a ``galaxy.yml`` file at the root level of the collection. This file contains all of the metadata that Galaxy
and other tools need in order to package, build and publish the collection::
collection/
├── docs/
├── galaxy.yml
├── plugins/
│ ├── modules/
│ │ └── module1.py
│ ├── inventory/
│ └── .../
├── README.md
├── roles/
│ ├── role1/
│ ├── role2/
│ └── .../
├── playbooks/
│ ├── files/
│ ├── vars/
│ ├── templates/
│ └── tasks/
└── tests/
.. note::
* Ansible only accepts ``.yml`` extensions for :file:`galaxy.yml`, and ``.md`` for the :file:`README` file and any files in the :file:`/docs` folder.
* See the `draft collection <https://github.com/bcoca/collection>`_ for an example of a full collection structure.
* Not all directories are currently in use. Those are placeholders for future features.
.. _galaxy_yml:
galaxy.yml
----------
A collection must have a ``galaxy.yml`` file that contains the necessary information to build a collection artifact.
See :ref:`collections_galaxy_meta` for details.
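A minimal ``galaxy.yml`` looks something like the following (a sketch showing only the required fields; the values are placeholders):

.. code-block:: yaml

    namespace: my_namespace
    name: my_collection
    version: 1.0.0
    readme: README.md
    authors:
      - Your Name <[email protected]>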
.. _collections_doc_dir:
docs directory
---------------
Put general documentation for the collection here. Keep the specific documentation for plugins and modules embedded as Python docstrings. Use the ``docs`` folder to describe how to use the roles and plugins the collection provides, role requirements, and so on. Use markdown and do not add subfolders.
Use ``ansible-doc`` to view documentation for plugins inside a collection:
.. code-block:: bash
ansible-doc -t lookup my_namespace.my_collection.lookup1
The ``ansible-doc`` command requires the fully qualified collection name (FQCN) to display specific plugin documentation. In this example, ``my_namespace`` is the namespace and ``my_collection`` is the collection name within that namespace.
.. note:: The Ansible collection namespace is defined in the ``galaxy.yml`` file and is not equivalent to the GitHub repository name.
.. _collections_plugin_dir:
plugins directory
------------------
Add a 'per plugin type' specific subdirectory here, including ``module_utils`` which is usable not only by modules, but by most plugins by using their FQCN. This is a way to distribute modules, lookups, filters, and so on, without having to import a role in every play.
Vars plugins are unsupported in collections. Cache plugins may be used in collections for fact caching, but are not supported for inventory plugins.
module_utils
^^^^^^^^^^^^
When coding with ``module_utils`` in a collection, the Python ``import`` statement needs to take into account the FQCN along with the ``ansible_collections`` convention. The resulting Python import will look like ``from ansible_collections.{namespace}.{collection}.plugins.module_utils.{util} import {something}``
The following example snippets show a Python and PowerShell module using both default Ansible ``module_utils`` and
those provided by a collection. In this example the namespace is ``ansible_example``, the collection is ``community``.
In the Python example the ``module_util`` in question is called ``qradar`` such that the FQCN is
``ansible_example.community.plugins.module_utils.qradar``:
.. code-block:: python
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_text
from ansible.module_utils.six.moves.urllib.parse import urlencode, quote_plus
from ansible.module_utils.six.moves.urllib.error import HTTPError
from ansible_collections.ansible_example.community.plugins.module_utils.qradar import QRadarRequest
argspec = dict(
name=dict(required=True, type='str'),
state=dict(choices=['present', 'absent'], required=True),
)
module = AnsibleModule(
argument_spec=argspec,
supports_check_mode=True
)
qradar_request = QRadarRequest(
module,
headers={"Content-Type": "application/json"},
not_rest_data_keys=['state']
)
Note that importing something from an ``__init__.py`` file requires using the file name:
.. code-block:: python
from ansible_collections.namespace.collection_name.plugins.callback.__init__ import CustomBaseClass
In the PowerShell example the ``module_util`` in question is called ``hyperv`` such that the FQCN is
``ansible_example.community.plugins.module_utils.hyperv``:
.. code-block:: powershell
#!powershell
#AnsibleRequires -CSharpUtil Ansible.Basic
#AnsibleRequires -PowerShell ansible_collections.ansible_example.community.plugins.module_utils.hyperv
$spec = @{
name = @{ required = $true; type = "str" }
state = @{ required = $true; choices = @("present", "absent") }
}
$module = [Ansible.Basic.AnsibleModule]::Create($args, $spec)
Invoke-HyperVFunction -Name $module.Params.name
$module.ExitJson()
.. _collections_roles_dir:
roles directory
----------------
Collection roles are mostly the same as existing roles, but with a couple of limitations:
- Role names are now limited to contain only lowercase alphanumeric characters, plus ``_`` and start with an alpha character.
- Roles in a collection cannot contain plugins any more. Plugins must live in the collection ``plugins`` directory tree. Each plugin is accessible to all roles in the collection.
The directory name of the role is used as the role name. Therefore, the directory name must comply with the
above role name rules.
The collection import into Galaxy will fail if a role name does not comply with these rules.
You can migrate 'traditional roles' into a collection but they must follow the rules above. You may need to rename roles if they don't conform. You will have to move or link any role-based plugins to the collection specific directories.
.. note::
For roles imported into Galaxy directly from a GitHub repository, setting the ``role_name`` value in the role's
metadata overrides the role name used by Galaxy. For collections, that value is ignored. When importing a
collection, Galaxy uses the role directory as the name of the role and ignores the ``role_name`` metadata value.
playbooks directory
--------------------
TBD.
tests directory
----------------
TBD. Expect tests for the collection itself to reside here.
.. _creating_collections:
Creating collections
======================
To create a collection:
#. Initialize a collection with :ref:`ansible-galaxy collection init<creating_collections_skeleton>` to create the skeleton directory structure.
#. Add your content to the collection.
#. Build the collection into a collection artifact with :ref:`ansible-galaxy collection build<building_collections>`.
#. Publish the collection artifact to Galaxy with :ref:`ansible-galaxy collection publish<publishing_collections>`.
A user can then install your collection on their systems.
Currently the ``ansible-galaxy collection`` command implements the following subcommands:
* ``init``: Create a basic collection skeleton based on the default template included with Ansible or your own template.
* ``build``: Create a collection artifact that can be uploaded to Galaxy or your own repository.
* ``publish``: Publish a built collection artifact to Galaxy.
* ``install``: Install one or more collections.
To learn more about the ``ansible-galaxy`` cli tool, see the :ref:`ansible-galaxy` man page.
.. _creating_collections_skeleton:
Creating a collection skeleton
------------------------------
To start a new collection:
.. code-block:: bash
collection_dir#> ansible-galaxy collection init my_namespace.my_collection
Then you can populate the directories with the content you want inside the collection. See
https://github.com/bcoca/collection to get a better idea of what you can place inside a collection.
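With the default template, the resulting skeleton looks roughly like this (an approximation; the exact contents depend on the template used)::

    my_namespace/
    └── my_collection/
        ├── docs/
        ├── galaxy.yml
        ├── plugins/
        │   └── README.md
        ├── README.md
        └── roles/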
.. _building_collections:
Building collections
--------------------
To build a collection, run ``ansible-galaxy collection build`` from inside the root directory of the collection:
.. code-block:: bash
collection_dir#> ansible-galaxy collection build
This creates a tarball of the built collection in the current directory, which can be uploaded to Galaxy::
my_collection/
├── galaxy.yml
├── ...
├── my_namespace-my_collection-1.0.0.tar.gz
└── ...
.. note::
* Certain files and folders are excluded when building the collection artifact. See :ref:`ignoring_files_and_folders_collections` to exclude other files you would not wish to distribute.
* If you used the now-deprecated ``Mazer`` tool for any of your collections, delete any and all files it added to your :file:`releases/` directory before you build your collection with ``ansible-galaxy``.
* The current Galaxy maximum tarball size is 2 MB.
This tarball is mainly intended for uploading to Galaxy
as a distribution method, but you can use it directly to install the collection on target systems.
.. _ignoring_files_and_folders_collections:
Ignoring files and folders
^^^^^^^^^^^^^^^^^^^^^^^^^^
By default the build step will include all the files in the collection directory in the final build artifact except for the following:
* ``galaxy.yml``
* ``*.pyc``
* ``*.retry``
* ``tests/output``
* previously built artifacts in the root directory
* Various version control directories like ``.git/``
To exclude other files and folders when building the collection, you can set a list of file glob-like patterns in the
``build_ignore`` key in the collection's ``galaxy.yml`` file. These patterns use the following special characters for
wildcard matching:
* ``*``: Matches everything
* ``?``: Matches any single character
* ``[seq]``: Matches any character in seq
* ``[!seq]``: Matches any character not in seq
For example, if you wanted to exclude the :file:`sensitive` folder within the ``playbooks`` folder as well as any ``.tar.gz`` archives, you
can set the following in your ``galaxy.yml`` file:
.. code-block:: yaml
build_ignore:
- playbooks/sensitive
- '*.tar.gz'
.. note::
This feature is only supported when running ``ansible-galaxy collection build`` with Ansible 2.10 or newer.
.. _trying_collection_locally:
Trying collections locally
--------------------------
You can try your collection locally by installing it from the tarball. The following will enable an adjacent playbook to
access the collection:
.. code-block:: bash
ansible-galaxy collection install my_namespace-my_collection-1.0.0.tar.gz -p ./collections
You should use one of the values configured in :ref:`COLLECTIONS_PATHS` for your path. This is also where Ansible itself will
expect to find collections when attempting to use them. If you don't specify a path value, ``ansible-galaxy collection install``
installs the collection in the first path defined in :ref:`COLLECTIONS_PATHS`, which by default is ``~/.ansible/collections``.
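For example, to have Ansible search an adjacent ``./collections`` directory before your user-level collections, you could set something like the following in ``ansible.cfg`` (a sketch; adjust the paths to your layout):

.. code-block:: ini

    [defaults]
    collections_paths = ./collections:~/.ansible/collections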
Next, try using the local collection inside a playbook. For examples and more details see :ref:`Using collections <using_collections>`
.. _publishing_collections:
Publishing collections
----------------------
You can publish collections to Galaxy using the ``ansible-galaxy collection publish`` command or the Galaxy UI itself.
.. note:: Once you upload a version of a collection, you cannot delete or modify that version. Ensure that everything looks okay before you upload it.
.. _galaxy_get_token:
Getting your token or API key
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To upload your collection to Galaxy, you must first obtain an API token (``--api-key`` in the ``ansible-galaxy`` CLI command). The API token is a secret token used to protect your content.
To get your API token:
* For Galaxy, go to the `Galaxy profile preferences <https://galaxy.ansible.com/me/preferences>`_ page and click :guilabel:`API token`.
* For Automation Hub, go to https://cloud.redhat.com/ansible/automation-hub/token/ and click :guilabel:`Get API token` from the version dropdown.
.. _upload_collection_ansible_galaxy:
Upload using ansible-galaxy
^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. note::
By default, ``ansible-galaxy`` uses https://galaxy.ansible.com as the Galaxy server (as listed in the :file:`ansible.cfg` file under :ref:`galaxy_server`). If you are only publishing your collection to Ansible Galaxy, you do not need any further configuration. If you are using Red Hat Automation Hub or any other Galaxy server, see :ref:`Configuring the ansible-galaxy client <galaxy_server_config>`.
To upload the collection artifact with the ``ansible-galaxy`` command:
.. code-block:: bash
ansible-galaxy collection publish path/to/my_namespace-my_collection-1.0.0.tar.gz --api-key=SECRET
The above command triggers an import process, just as if you uploaded the collection through the Galaxy website.
The command waits until the import process completes before reporting the status back. If you wish to continue
without waiting for the import result, use the ``--no-wait`` argument and manually look at the import progress in your
`My Imports <https://galaxy.ansible.com/my-imports/>`_ page.
The API key is a secret token used by the Galaxy server to protect your content. See :ref:`galaxy_get_token` for details.
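If you publish frequently, you can avoid passing ``--api-key`` on every invocation by storing the token in the server configuration in ``ansible.cfg`` (a minimal sketch; see :ref:`Configuring the ansible-galaxy client <galaxy_server_config>` for the full option list):

.. code-block:: ini

    [galaxy]
    server_list = release_galaxy

    [galaxy_server.release_galaxy]
    url=https://galaxy.ansible.com/
    token=my_token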
.. _upload_collection_galaxy:
Upload a collection from the Galaxy website
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To upload your collection artifact directly on Galaxy:
#. Go to the `My Content <https://galaxy.ansible.com/my-content/namespaces>`_ page, and click the **Add Content** button on one of your namespaces.
#. From the **Add Content** dialogue, click **Upload New Collection**, and select the collection archive file from your local filesystem.
When uploading collections, it doesn't matter which namespace you select. The collection will be uploaded to the
namespace specified in the collection metadata in the ``galaxy.yml`` file. If you're not an owner of the
namespace, the upload request will fail.
Once Galaxy uploads and accepts a collection, you will be redirected to the **My Imports** page, which displays output from the
import process, including any errors or warnings about the metadata and content contained in the collection.
.. _collection_versions:
Collection versions
-------------------
Once you upload a version of a collection, you cannot delete or modify that version. Ensure that everything looks okay before
uploading. The only way to change a collection is to release a new version. The latest version of a collection (by highest version number)
will be the version displayed everywhere in Galaxy; however, users will still be able to download older versions.
Collection versions use `Semantic Versioning <https://semver.org/>`_ for version numbers. Please read the official documentation for details and examples. In summary:
* Increment major (for example: x in `x.y.z`) version number for an incompatible API change.
* Increment minor (for example: y in `x.y.z`) version number for new functionality in a backwards compatible manner.
* Increment patch (for example: z in `x.y.z`) version number for backwards compatible bug fixes.
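For example (illustrative): starting from version ``1.2.3``, an incompatible API change would be released as ``2.0.0``, new backwards compatible functionality as ``1.3.0``, and a backwards compatible bug fix as ``1.2.4``.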
.. _migrate_to_collection:
Migrating Ansible content to a collection
=========================================
You can experiment with migrating existing modules into a collection using the `content_collector tool <https://github.com/ansible/content_collector>`_. The ``content_collector`` is a playbook that helps you migrate content from an Ansible distribution into a collection.
.. warning::
This tool is in active development and is provided only for experimentation and feedback at this point.
See the `content_collector README <https://github.com/ansible/content_collector>`_ for full details and usage guidelines.
.. seealso::
:ref:`collections`
Learn how to install and use collections.
:ref:`collections_galaxy_meta`
Understand the collections metadata structure.
:ref:`developing_modules_general`
Learn about how to write Ansible modules
`Mailing List <https://groups.google.com/group/ansible-devel>`_
The development mailing list
`irc.freenode.net <http://irc.freenode.net>`_
#ansible IRC chat channel
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,933 |
Unclear how to avoid entering --api-key on CLI when publishing Ansible Galaxy Collections
|
##### SUMMARY
As I begin publishing multiple collections, I'm finding it annoying to have to specify `--api-key` on the command line every time I run:
ansible-galaxy collection publish ./geerlingguy-collection-1.2.3.tar.gz --api-key=[key goes here]
This not only requires me looking up the key in my password manager or on Galaxy, it also requires the key to be entered in plain text in a command that is then stored in my `history` file, unless I wipe it out.
Therefore, I'd like to specify the token via environment variable, or some other file that can be included. Ideally an environment variable, so I don't have to set up a configuration file for Ansible Galaxy (which is already the default server).
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
galaxy
##### ANSIBLE VERSION
```paste below
ansible 2.9.1
config file = /etc/ansible/ansible.cfg
configured module search path = ['/Users/jgeerling/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.5 (default, Nov 3 2019, 23:06:46) [Clang 11.0.0 (clang-1100.0.33.12)]
```
##### CONFIGURATION
```paste below
ANSIBLE_NOCOWS(/etc/ansible/ansible.cfg) = True
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
ANSIBLE_SSH_CONTROL_PATH(/etc/ansible/ansible.cfg) = /tmp/ansible-ssh-%%h-%%p-%%r
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 20
DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = ['/etc/ansible/hosts']
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = ['/Users/jgeerling/Dropbox/VMs/roles']
DEFAULT_STDOUT_CALLBACK(/etc/ansible/ansible.cfg) = yaml
INTERPRETER_PYTHON(env: ANSIBLE_PYTHON_INTERPRETER) = /usr/local/bin/python3
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
N/A
##### ADDITIONAL INFORMATION
There are two relevant documentation pages:
- [Galaxy Developer Guide](https://docs.ansible.com/ansible/latest/galaxy/dev_guide.html)
- [Using Collections](https://docs.ansible.com/ansible/latest/user_guide/collections_using.html#configuring-the-ansible-galaxy-client)
It seems that both refer to a `token` with regard to Galaxy / Automation Hub, but only the collections guide mentions `--api-key`, and only relating to CLI usage. It seems that the `--api-key` is the same as the `token`, so the verbiage can be quite confusing (maybe the CLI needs to be updated to allow `--token` if they are indeed the same thing?).
It also looks like, according to the documentation, I might be able to set:
ANSIBLE_GALAXY_SERVER_RELEASE_GALAXY_TOKEN=[my token / api key]
But it also looks like that might be a magic variable that is only used if I add an entry for a galaxy server like `[galaxy_server.release_galaxy]` in my `ansible.cfg` file.
|
https://github.com/ansible/ansible/issues/65933
|
https://github.com/ansible/ansible/pull/65961
|
e19b94f43b62c011a07ba65db276ed54fb3f1082
|
0ca79a4234fa6cd21dc52a833d7978c0e32cf4ed
| 2019-12-17T23:53:44Z |
python
| 2020-01-07T19:57:47Z |
docs/docsite/rst/shared_snippets/galaxy_server_list.txt
|
By default, ``ansible-galaxy`` uses https://galaxy.ansible.com as the Galaxy server (as listed in the :file:`ansible.cfg` file under :ref:`galaxy_server`).
You can configure this to use other servers (such as Red Hat Automation Hub or a custom Galaxy server) as follows:
* Set the server list in the :ref:`galaxy_server_list` configuration option in :ref:`ansible_configuration_settings_locations`.
* Use the ``--server`` command line argument to limit to an individual server.
To configure a Galaxy server list in ``ansible.cfg``:
#. Add the ``server_list`` option under the ``[galaxy]`` section to one or more server names.
#. Create a new section for each server name.
#. Set the ``url`` option for each server name.
.. note::
The ``url`` option for each server name must end with a forward slash ``/``.
For Automation Hub, you additionally need to:
#. Set the ``auth_url`` option for each server name.
#. Set the API token for each server name. Go to https://cloud.redhat.com/ansible/automation-hub/token/ and click :guilabel:`Get API token` from the version dropdown to copy your API token.
The following example shows how to configure multiple servers:
.. code-block:: ini
[galaxy]
server_list = automation_hub, my_org_hub, release_galaxy, test_galaxy
[galaxy_server.automation_hub]
url=https://cloud.redhat.com/api/automation-hub/
auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
token=my_ah_token
[galaxy_server.my_org_hub]
url=https://automation.my_org/
username=my_user
password=my_pass
[galaxy_server.release_galaxy]
url=https://galaxy.ansible.com/
token=my_token
[galaxy_server.test_galaxy]
url=https://galaxy-dev.ansible.com/
token=my_test_token
.. note::
You can use the ``--server`` command line argument to select an explicit Galaxy server in the ``server_list``;
the value of this argument should match the name of the server. To use a server not in the server list, set the value to the URL of that server (all servers in the server list will be ignored). The ``--api-key`` argument is not applied to any of the predefined servers; it is only applied
if no server list is defined or a URL was specified by ``--server``.
**Galaxy server list configuration options**
The :ref:`galaxy_server_list` option is a list of server identifiers in a prioritized order. When searching for a
collection, the install process will search in that order, e.g. ``automation_hub`` first, then ``my_org_hub``, ``release_galaxy``, and
finally ``test_galaxy`` until the collection is found. The actual Galaxy instance is then defined under the section
``[galaxy_server.{{ id }}]`` where ``{{ id }}`` is the server identifier defined in the list. This section can then
define the following keys:
* ``url``: The URL of the Galaxy instance to connect to. This is required.
* ``token``: An API token to use for authentication against the Galaxy instance. This is mutually exclusive with ``username``.
* ``username``: The username to use for basic authentication against the Galaxy instance. This is mutually exclusive with ``token``.
* ``password``: The password to use for basic authentication.
* ``auth_url``: The URL of a Keycloak server 'token_endpoint' if using SSO authentication (for example, Automation Hub). This is mutually exclusive with ``username`` and requires ``token``.
In addition to being defined in the ``ansible.cfg`` file, these server options can be defined as environment variables.
The environment variable takes the form ``ANSIBLE_GALAXY_SERVER_{{ id }}_{{ key }}``, where ``{{ id }}`` is the upper
case form of the server identifier and ``{{ key }}`` is the key to define. For example, you can define ``token`` for
``release_galaxy`` by setting ``ANSIBLE_GALAXY_SERVER_RELEASE_GALAXY_TOKEN=secret_token``.
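For example, to keep the token out of both the command line and ``ansible.cfg`` when publishing (a sketch; the token value is a placeholder and ``release_galaxy`` is assumed to be configured in your ``server_list``):

.. code-block:: bash

    export ANSIBLE_GALAXY_SERVER_RELEASE_GALAXY_TOKEN=secret_token
    ansible-galaxy collection publish path/to/my_namespace-my_collection-1.0.0.tar.gz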
For operations that use only one Galaxy server (for example, ``publish``, ``info``, or ``login``), the first entry in the
``server_list`` is used unless an explicit server was passed in as a command line argument.
.. note::
Once a collection is found, any of its requirements are only searched within the same Galaxy instance as the parent
collection. The install process will not search for a collection requirement in a different Galaxy instance.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,933 |
Unclear how to avoid entering --api-key on CLI when publishing Ansible Galaxy Collections
|
##### SUMMARY
As I begin publishing multiple collections, I'm finding it annoying to have to specify `--api-key` on the command line every time I run:
ansible-galaxy collection publish ./geerlingguy-collection-1.2.3.tar.gz --api-key=[key goes here]
This not only requires me looking up the key in my password manager or on Galaxy, it also requires the key to be entered in plain text in a command that is then stored in my `history` file, unless I wipe it out.
Therefore, I'd like to specify the token via environment variable, or some other file that can be included. Ideally an environment variable, so I don't have to set up a configuration file for Ansible Galaxy (which is already the default server).
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
galaxy
##### ANSIBLE VERSION
```paste below
ansible 2.9.1
config file = /etc/ansible/ansible.cfg
configured module search path = ['/Users/jgeerling/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.5 (default, Nov 3 2019, 23:06:46) [Clang 11.0.0 (clang-1100.0.33.12)]
```
##### CONFIGURATION
```paste below
ANSIBLE_NOCOWS(/etc/ansible/ansible.cfg) = True
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
ANSIBLE_SSH_CONTROL_PATH(/etc/ansible/ansible.cfg) = /tmp/ansible-ssh-%%h-%%p-%%r
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 20
DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = ['/etc/ansible/hosts']
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = ['/Users/jgeerling/Dropbox/VMs/roles']
DEFAULT_STDOUT_CALLBACK(/etc/ansible/ansible.cfg) = yaml
INTERPRETER_PYTHON(env: ANSIBLE_PYTHON_INTERPRETER) = /usr/local/bin/python3
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
N/A
##### ADDITIONAL INFORMATION
There are two relevant documentation pages:
- [Galaxy Developer Guide](https://docs.ansible.com/ansible/latest/galaxy/dev_guide.html)
- [Using Collections](https://docs.ansible.com/ansible/latest/user_guide/collections_using.html#configuring-the-ansible-galaxy-client)
It seems that both refer to a `token` with regard to Galaxy / Automation Hub, but only the collections guide mentions `--api-key`, and only relating to CLI usage. It seems that the `--api-key` is the same as the `token`, so the verbiage can be quite confusing (maybe the CLI needs to be updated to allow `--token` if they are indeed the same thing?).
It also looks like, according to the documentation, I might be able to set:
ANSIBLE_GALAXY_SERVER_RELEASE_GALAXY_TOKEN=[my token / api key]
But it also looks like that might be a magic variable that is only used if I add an entry for a galaxy server like `[galaxy_server.release_galaxy]` in my `ansible.cfg` file.
|
https://github.com/ansible/ansible/issues/65933
|
https://github.com/ansible/ansible/pull/65961
|
e19b94f43b62c011a07ba65db276ed54fb3f1082
|
0ca79a4234fa6cd21dc52a833d7978c0e32cf4ed
| 2019-12-17T23:53:44Z |
python
| 2020-01-07T19:57:47Z |
docs/docsite/rst/shared_snippets/installing_collections.txt
|
You can use the ``ansible-galaxy collection install`` command to install a collection on your system.
.. note::
By default, ``ansible-galaxy`` uses https://galaxy.ansible.com as the Galaxy server (as listed in the :file:`ansible.cfg` file under :ref:`galaxy_server`). You do not need any further configuration. See :ref:`Configuring the ansible-galaxy client <galaxy_server_config>` if you are using any other Galaxy server, such as Red Hat Automation Hub.
To install a collection hosted in Galaxy:
.. code-block:: bash
ansible-galaxy collection install my_namespace.my_collection
You can also directly use the tarball from your build:
.. code-block:: bash
ansible-galaxy collection install my_namespace-my_collection-1.0.0.tar.gz -p ./collections
.. note::
The install command automatically appends the path ``ansible_collections`` to the one specified with the ``-p`` option unless the
parent directory is already in a folder called ``ansible_collections``.
When using the ``-p`` option to specify the install path, use one of the values configured in :ref:`COLLECTIONS_PATHS`, as this is
where Ansible itself will expect to find collections. If you don't specify a path, ``ansible-galaxy collection install`` installs
the collection to the first path defined in :ref:`COLLECTIONS_PATHS`, which by default is ``~/.ansible/collections``
You can also keep a collection adjacent to the current playbook, under a ``collections/ansible_collections/`` directory structure.
.. code-block:: text
play.yml
├── collections/
│ └── ansible_collections/
│ └── my_namespace/
│ └── my_collection/<collection structure lives here>
See :ref:`collection_structure` for details on the collection directory structure.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,281 |
ansible-galaxy makes URL requests that always cause unnecessary redirects
|
##### SUMMARY
<!--- Explain the problem briefly below -->
For a collection install command like `ansible-galaxy collection install -s automation_hub_ci google.cloud`, ansible-galaxy ends up making requests to API endpoints that should end with a trailing slash but ansible-galaxy drops the trailing slash.
On the server side, galaxy issues a 302 redirect to the location with the URL with the trailing slash appended. And ansible-galaxy follows it correctly.
For example,
ansible-galaxy makes the request:
`https://ci.cloud.redhat.com/api/automation-hub/v3/collections/cloud/google/versions`
which returns a 302 redirect response pointing to:
`https://ci.cloud.redhat.com/api/automation-hub/v3/collections/cloud/google/versions/'
But the redirect is superfluous and slows things down a bit. Depending on conditions, each redirect may take between 100ms and 500ms.
But that's 100-500ms additional per collection being installed.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-galaxy
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.0.dev0
config file = /home/adrian/src/ansible/ansible.cfg
configured module search path = ['/home/adrian/ansible/my-modules']
ansible python module location = /home/adrian/src/ansible/lib/ansible
executable location = /home/adrian/src/ansible/bin/ansible
python version = 3.6.9 (default, Jul 3 2019, 17:57:57) [GCC 8.3.1 20190223 (Red Hat 8.3.1-2)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
ansible-ah-example.cfg:
```paste below
[defaults]
log_path = $HOME/.ansible/log/ansible.log
[galaxy]
server_list = automation_hub_ci, galaxy
[galaxy_server.automation_hub_ci]
url=https://ci.cloud.redhat.com/api/automation-hub/
# auth_url=https://sso.qa.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
# Fill in your auth info here
# username=
# password=
# token=
[galaxy_server.galaxy]
url=https://galaxy.ansible.com/api/
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
fedora 30-ish, x86_64
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
`ANSIBLE_CONFIG=ansible-ah-example.cfg ansible-galaxy collection install -s automation_hub_ci google.cloud`
You may need to use a web debugging tool like mitmproxy to see the additional
requests. AFAIK, that info isn't shown at any ansible log/verbose/debug level.
##### EXPECTED RESULTS
One request per galaxy resources needed
##### ACTUAL RESULTS
A request to the wrong url (no trailing slash), that is then followed to correct location
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/63281
|
https://github.com/ansible/ansible/pull/63294
|
66550cd3c27cd1bb72c46a573d055803e7d8f1f0
|
d1c39b90686d10d4c561ebf96ca72db2e81fe3ff
| 2019-10-09T14:47:14Z |
python
| 2020-01-08T20:25:40Z |
lib/ansible/galaxy/api.py
|
# (C) 2013, James Cammarata <[email protected]>
# Copyright: (c) 2019, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
import os
import tarfile
import uuid
import time
from ansible import context
from ansible.errors import AnsibleError
from ansible.galaxy.user_agent import user_agent
from ansible.module_utils.six import string_types
from ansible.module_utils.six.moves.urllib.error import HTTPError
from ansible.module_utils.six.moves.urllib.parse import quote as urlquote, urlencode, urlparse
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.urls import open_url
from ansible.utils.display import Display
from ansible.utils.hashing import secure_hash_s
try:
from urllib.parse import urlparse
except ImportError:
# Python 2
from urlparse import urlparse
display = Display()
def g_connect(versions):
"""
Wrapper to lazily initialize connection info to Galaxy and verify the API versions required are available on the
endpoint.
:param versions: A list of API versions that the function supports.
"""
def decorator(method):
def wrapped(self, *args, **kwargs):
if not self._available_api_versions:
display.vvvv("Initial connection to galaxy_server: %s" % self.api_server)
# Determine the type of Galaxy server we are talking to. First try it unauthenticated then with Bearer
# auth for Automation Hub.
n_url = self.api_server
error_context_msg = 'Error when finding available api versions from %s (%s)' % (self.name, n_url)
if self.api_server == 'https://galaxy.ansible.com' or self.api_server == 'https://galaxy.ansible.com/':
n_url = 'https://galaxy.ansible.com/api/'
try:
data = self._call_galaxy(n_url, method='GET', error_context_msg=error_context_msg)
except (AnsibleError, GalaxyError, ValueError, KeyError):
# Either the URL doesn't exist, or other error. Or the URL exists, but isn't a galaxy API
# root (not JSON, no 'available_versions') so try appending '/api/'
n_url = _urljoin(n_url, '/api/')
# let exceptions here bubble up
data = self._call_galaxy(n_url, method='GET', error_context_msg=error_context_msg)
if 'available_versions' not in data:
raise AnsibleError("Tried to find galaxy API root at %s but no 'available_versions' are available on %s"
% (n_url, self.api_server))
# Update api_server to point to the "real" API root, which in this case
# was the configured url + '/api/' appended.
self.api_server = n_url
# Default to only supporting v1, if only v1 is returned we also assume that v2 is available even though
# it isn't returned in the available_versions dict.
available_versions = data.get('available_versions', {u'v1': u'v1/'})
if list(available_versions.keys()) == [u'v1']:
available_versions[u'v2'] = u'v2/'
self._available_api_versions = available_versions
display.vvvv("Found API version '%s' with Galaxy server %s (%s)"
% (', '.join(available_versions.keys()), self.name, self.api_server))
# Verify that the API versions the function works with are available on the server specified.
available_versions = set(self._available_api_versions.keys())
common_versions = set(versions).intersection(available_versions)
if not common_versions:
raise AnsibleError("Galaxy action %s requires API versions '%s' but only '%s' are available on %s %s"
% (method.__name__, ", ".join(versions), ", ".join(available_versions),
self.name, self.api_server))
return method(self, *args, **kwargs)
return wrapped
return decorator
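# Join URL components with single slashes, stripping redundant leading/trailing
# slashes from each part. Empty components are dropped, so the result never ends
# with a trailing slash; callers append '/' explicitly where an endpoint requires it.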
def _urljoin(*args):
return '/'.join(to_native(a, errors='surrogate_or_strict').strip('/') for a in args + ('',) if a)
class GalaxyError(AnsibleError):
""" Error for bad Galaxy server responses. """
def __init__(self, http_error, message):
super(GalaxyError, self).__init__(message)
self.http_code = http_error.code
self.url = http_error.geturl()
try:
http_msg = to_text(http_error.read())
err_info = json.loads(http_msg)
except (AttributeError, ValueError):
err_info = {}
url_split = self.url.split('/')
if 'v2' in url_split:
galaxy_msg = err_info.get('message', http_error.reason)
code = err_info.get('code', 'Unknown')
full_error_msg = u"%s (HTTP Code: %d, Message: %s Code: %s)" % (message, self.http_code, galaxy_msg, code)
elif 'v3' in url_split:
errors = err_info.get('errors', [])
if not errors:
errors = [{}] # Defaults are set below, we just need to make sure 1 error is present.
message_lines = []
for error in errors:
error_msg = error.get('detail') or error.get('title') or http_error.reason
error_code = error.get('code') or 'Unknown'
message_line = u"(HTTP Code: %d, Message: %s Code: %s)" % (self.http_code, error_msg, error_code)
message_lines.append(message_line)
full_error_msg = "%s %s" % (message, ', '.join(message_lines))
else:
# v1 and unknown API endpoints
galaxy_msg = err_info.get('default', http_error.reason)
full_error_msg = u"%s (HTTP Code: %d, Message: %s)" % (message, self.http_code, galaxy_msg)
self.message = to_native(full_error_msg)
class CollectionVersionMetadata:
def __init__(self, namespace, name, version, download_url, artifact_sha256, dependencies):
"""
Contains common information about a collection on a Galaxy server to smooth through API differences for
Collection and define a standard meta info for a collection.
:param namespace: The namespace name.
:param name: The collection name.
:param version: The version that the metadata refers to.
:param download_url: The URL to download the collection.
:param artifact_sha256: The SHA256 of the collection artifact for later verification.
:param dependencies: A dict of dependencies of the collection.
"""
self.namespace = namespace
self.name = name
self.version = version
self.download_url = download_url
self.artifact_sha256 = artifact_sha256
self.dependencies = dependencies
class GalaxyAPI:
""" This class is meant to be used as a API client for an Ansible Galaxy server """
def __init__(self, galaxy, name, url, username=None, password=None, token=None):
self.galaxy = galaxy
self.name = name
self.username = username
self.password = password
self.token = token
self.api_server = url
self.validate_certs = not context.CLIARGS['ignore_certs']
self._available_api_versions = {}
display.debug('Validate TLS certificates for %s: %s' % (self.api_server, self.validate_certs))
@property
@g_connect(['v1', 'v2', 'v3'])
def available_api_versions(self):
# Calling g_connect will populate self._available_api_versions
return self._available_api_versions
def _call_galaxy(self, url, args=None, headers=None, method=None, auth_required=False, error_context_msg=None):
headers = headers or {}
self._add_auth_token(headers, url, required=auth_required)
try:
display.vvvv("Calling Galaxy at %s" % url)
resp = open_url(to_native(url), data=args, validate_certs=self.validate_certs, headers=headers,
method=method, timeout=20, http_agent=user_agent())
except HTTPError as e:
raise GalaxyError(e, error_context_msg)
except Exception as e:
raise AnsibleError("Unknown error when attempting to call Galaxy at '%s': %s" % (url, to_native(e)))
resp_data = to_text(resp.read(), errors='surrogate_or_strict')
try:
data = json.loads(resp_data)
except ValueError:
raise AnsibleError("Failed to parse Galaxy response from '%s' as JSON:\n%s"
% (resp.url, to_native(resp_data)))
return data
def _add_auth_token(self, headers, url, token_type=None, required=False):
# Don't add the auth token if one is already present
if 'Authorization' in headers:
return
if not self.token and required:
raise AnsibleError("No access token or username set. A token can be set with --api-key, with "
"'ansible-galaxy login', or set in ansible.cfg.")
if self.token:
headers.update(self.token.headers())
@g_connect(['v1'])
def authenticate(self, github_token):
"""
Retrieve an authentication token
"""
url = _urljoin(self.api_server, self.available_api_versions['v1'], "tokens") + '/'
args = urlencode({"github_token": github_token})
resp = open_url(url, data=args, validate_certs=self.validate_certs, method="POST", http_agent=user_agent())
data = json.loads(to_text(resp.read(), errors='surrogate_or_strict'))
return data
@g_connect(['v1'])
def create_import_task(self, github_user, github_repo, reference=None, role_name=None):
"""
Post an import request
"""
url = _urljoin(self.api_server, self.available_api_versions['v1'], "imports") + '/'
args = {
"github_user": github_user,
"github_repo": github_repo,
"github_reference": reference if reference else ""
}
if role_name:
args['alternate_role_name'] = role_name
elif github_repo.startswith('ansible-role'):
args['alternate_role_name'] = github_repo[len('ansible-role') + 1:]
data = self._call_galaxy(url, args=urlencode(args), method="POST")
if data.get('results', None):
return data['results']
return data
@g_connect(['v1'])
def get_import_task(self, task_id=None, github_user=None, github_repo=None):
"""
Check the status of an import task.
"""
url = _urljoin(self.api_server, self.available_api_versions['v1'], "imports")
if task_id is not None:
url = "%s?id=%d" % (url, task_id)
elif github_user is not None and github_repo is not None:
url = "%s?github_user=%s&github_repo=%s" % (url, github_user, github_repo)
else:
raise AnsibleError("Expected task_id or github_user and github_repo")
data = self._call_galaxy(url)
return data['results']
@g_connect(['v1'])
def lookup_role_by_name(self, role_name, notify=True):
"""
Find a role by name.
"""
role_name = to_text(urlquote(to_bytes(role_name)))
try:
parts = role_name.split(".")
user_name = ".".join(parts[0:-1])
role_name = parts[-1]
if notify:
display.display("- downloading role '%s', owned by %s" % (role_name, user_name))
except Exception:
raise AnsibleError("Invalid role name (%s). Specify role as format: username.rolename" % role_name)
url = _urljoin(self.api_server, self.available_api_versions['v1'], "roles",
"?owner__username=%s&name=%s" % (user_name, role_name))
data = self._call_galaxy(url)
if len(data["results"]) != 0:
return data["results"][0]
return None
@g_connect(['v1'])
def fetch_role_related(self, related, role_id):
"""
Fetch the list of related items for the given role.
The url comes from the 'related' field of the role.
"""
results = []
try:
url = _urljoin(self.api_server, self.available_api_versions['v1'], "roles", role_id, related,
"?page_size=50")
data = self._call_galaxy(url)
results = data['results']
done = (data.get('next_link', None) is None)
# https://github.com/ansible/ansible/issues/64355
# api_server contains part of the API path but next_link includes the /api part so strip it out.
url_info = urlparse(self.api_server)
base_url = "%s://%s/" % (url_info.scheme, url_info.netloc)
while not done:
url = _urljoin(base_url, data['next_link'])
data = self._call_galaxy(url)
results += data['results']
done = (data.get('next_link', None) is None)
except Exception as e:
display.warning("Unable to retrieve role (id=%s) data (%s), but this is not fatal so we continue: %s"
% (role_id, related, to_text(e)))
return results
@g_connect(['v1'])
def get_list(self, what):
"""
Fetch the list of items specified.
"""
try:
url = _urljoin(self.api_server, self.available_api_versions['v1'], what, "?page_size")
data = self._call_galaxy(url)
if "results" in data:
results = data['results']
else:
results = data
done = True
if "next" in data:
done = (data.get('next_link', None) is None)
while not done:
url = _urljoin(self.api_server, data['next_link'])
data = self._call_galaxy(url)
results += data['results']
done = (data.get('next_link', None) is None)
return results
except Exception as error:
raise AnsibleError("Failed to download the %s list: %s" % (what, to_native(error)))
@g_connect(['v1'])
def search_roles(self, search, **kwargs):
search_url = _urljoin(self.api_server, self.available_api_versions['v1'], "search", "roles", "?")
if search:
search_url += '&autocomplete=' + to_text(urlquote(to_bytes(search)))
tags = kwargs.get('tags', None)
platforms = kwargs.get('platforms', None)
page_size = kwargs.get('page_size', None)
author = kwargs.get('author', None)
if tags and isinstance(tags, string_types):
tags = tags.split(',')
search_url += '&tags_autocomplete=' + '+'.join(tags)
if platforms and isinstance(platforms, string_types):
platforms = platforms.split(',')
search_url += '&platforms_autocomplete=' + '+'.join(platforms)
if page_size:
search_url += '&page_size=%s' % page_size
if author:
search_url += '&username_autocomplete=%s' % author
data = self._call_galaxy(search_url)
return data
@g_connect(['v1'])
def add_secret(self, source, github_user, github_repo, secret):
url = _urljoin(self.api_server, self.available_api_versions['v1'], "notification_secrets") + '/'
args = urlencode({
"source": source,
"github_user": github_user,
"github_repo": github_repo,
"secret": secret
})
data = self._call_galaxy(url, args=args, method="POST")
return data
@g_connect(['v1'])
def list_secrets(self):
url = _urljoin(self.api_server, self.available_api_versions['v1'], "notification_secrets")
data = self._call_galaxy(url, auth_required=True)
return data
@g_connect(['v1'])
def remove_secret(self, secret_id):
url = _urljoin(self.api_server, self.available_api_versions['v1'], "notification_secrets", secret_id) + '/'
data = self._call_galaxy(url, auth_required=True, method='DELETE')
return data
@g_connect(['v1'])
def delete_role(self, github_user, github_repo):
url = _urljoin(self.api_server, self.available_api_versions['v1'], "removerole",
"?github_user=%s&github_repo=%s" % (github_user, github_repo))
data = self._call_galaxy(url, auth_required=True, method='DELETE')
return data
# Collection APIs #
@g_connect(['v2', 'v3'])
def publish_collection(self, collection_path):
"""
Publishes a collection to a Galaxy server and returns the import task URI.
:param collection_path: The path to the collection tarball to publish.
:return: The import task URI that contains the import results.
"""
display.display("Publishing collection artifact '%s' to %s %s" % (collection_path, self.name, self.api_server))
b_collection_path = to_bytes(collection_path, errors='surrogate_or_strict')
if not os.path.exists(b_collection_path):
raise AnsibleError("The collection path specified '%s' does not exist." % to_native(collection_path))
elif not tarfile.is_tarfile(b_collection_path):
raise AnsibleError("The collection path specified '%s' is not a tarball, use 'ansible-galaxy collection "
"build' to create a proper release artifact." % to_native(collection_path))
with open(b_collection_path, 'rb') as collection_tar:
data = collection_tar.read()
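# Build the multipart/form-data body by hand: a 'sha256' form field so the
# server can verify the upload, followed by the collection tarball as a file part.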
boundary = '--------------------------%s' % uuid.uuid4().hex
b_file_name = os.path.basename(b_collection_path)
part_boundary = b"--" + to_bytes(boundary, errors='surrogate_or_strict')
form = [
part_boundary,
b"Content-Disposition: form-data; name=\"sha256\"",
to_bytes(secure_hash_s(data), errors='surrogate_or_strict'),
part_boundary,
b"Content-Disposition: file; name=\"file\"; filename=\"%s\"" % b_file_name,
b"Content-Type: application/octet-stream",
b"",
data,
b"%s--" % part_boundary,
]
data = b"\r\n".join(form)
headers = {
'Content-type': 'multipart/form-data; boundary=%s' % boundary,
'Content-length': len(data),
}
if 'v3' in self.available_api_versions:
n_url = _urljoin(self.api_server, self.available_api_versions['v3'], 'artifacts', 'collections') + '/'
else:
n_url = _urljoin(self.api_server, self.available_api_versions['v2'], 'collections') + '/'
resp = self._call_galaxy(n_url, args=data, headers=headers, method='POST', auth_required=True,
error_context_msg='Error when publishing collection to %s (%s)'
% (self.name, self.api_server))
return resp['task']
@g_connect(['v2', 'v3'])
def wait_import_task(self, task_id, timeout=0):
"""
Waits until the import process on the Galaxy server has completed or the timeout is reached.
:param task_id: The id of the import task to wait for. This can be parsed out of the return
value for GalaxyAPI.publish_collection.
:param timeout: The timeout in seconds, 0 is no timeout.
"""
# TODO: actually verify that v3 returns the same structure as v2, right now this is just an assumption.
state = 'waiting'
data = None
# Construct the appropriate URL per version
if 'v3' in self.available_api_versions:
full_url = _urljoin(self.api_server, self.available_api_versions['v3'],
'imports/collections', task_id, '/')
else:
# TODO: Should we have a trailing slash here? I'm working with what the unittests ask
# for but a trailing slash may be more correct
full_url = _urljoin(self.api_server, self.available_api_versions['v2'],
'collection-imports', task_id)
display.display("Waiting until Galaxy import task %s has completed" % full_url)
start = time.time()
wait = 2
while timeout == 0 or (time.time() - start) < timeout:
data = self._call_galaxy(full_url, method='GET', auth_required=True,
error_context_msg='Error when getting import task results at %s' % full_url)
state = data.get('state', 'waiting')
if data.get('finished_at', None):
break
display.vvv('Galaxy import process has a status of %s, wait %d seconds before trying again'
% (state, wait))
time.sleep(wait)
# poor man's exponential backoff algo so we don't flood the Galaxy API, cap at 30 seconds.
wait = min(30, wait * 1.5)
if state == 'waiting':
raise AnsibleError("Timeout while waiting for the Galaxy import process to finish, check progress at '%s'"
% to_native(full_url))
for message in data.get('messages', []):
level = message['level']
if level == 'error':
display.error("Galaxy import error message: %s" % message['message'])
elif level == 'warning':
display.warning("Galaxy import warning message: %s" % message['message'])
else:
display.vvv("Galaxy import message: %s - %s" % (level, message['message']))
if state == 'failed':
code = to_native(data['error'].get('code', 'UNKNOWN'))
description = to_native(
data['error'].get('description', "Unknown error, see %s for more details" % full_url))
raise AnsibleError("Galaxy import process failed: %s (Code: %s)" % (description, code))
@g_connect(['v2', 'v3'])
def get_collection_version_metadata(self, namespace, name, version):
"""
Gets the collection information from the Galaxy server about a specific Collection version.
:param namespace: The collection namespace.
:param name: The collection name.
:param version: Optional version of the collection to get the information for.
:return: CollectionVersionMetadata about the collection at the version requested.
"""
api_path = self.available_api_versions.get('v3', self.available_api_versions.get('v2'))
url_paths = [self.api_server, api_path, 'collections', namespace, name, 'versions', version]
n_collection_url = _urljoin(*url_paths)
error_context_msg = 'Error when getting collection version metadata for %s.%s:%s from %s (%s)' \
% (namespace, name, version, self.name, self.api_server)
data = self._call_galaxy(n_collection_url, error_context_msg=error_context_msg)
return CollectionVersionMetadata(data['namespace']['name'], data['collection']['name'], data['version'],
data['download_url'], data['artifact']['sha256'],
data['metadata']['dependencies'])
@g_connect(['v2', 'v3'])
def get_collection_versions(self, namespace, name):
"""
Gets a list of available versions for a collection on a Galaxy server.
:param namespace: The collection namespace.
:param name: The collection name.
:return: A list of versions that are available.
"""
if 'v3' in self.available_api_versions:
api_path = self.available_api_versions['v3']
results_key = 'data'
pagination_path = ['links', 'next']
else:
api_path = self.available_api_versions['v2']
results_key = 'results'
pagination_path = ['next']
n_url = _urljoin(self.api_server, api_path, 'collections', namespace, name, 'versions')
error_context_msg = 'Error when getting available collection versions for %s.%s from %s (%s)' \
% (namespace, name, self.name, self.api_server)
data = self._call_galaxy(n_url, error_context_msg=error_context_msg)
versions = []
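# Walk the paginated results, following the v2 'next' link or the v3 'links.next'
# link until no further page is advertised.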
while True:
versions += [v['version'] for v in data[results_key]]
next_link = data
for path in pagination_path:
next_link = next_link.get(path, {})
if not next_link:
break
data = self._call_galaxy(to_native(next_link, errors='surrogate_or_strict'),
error_context_msg=error_context_msg)
return versions
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,281 |
ansible-galaxy makes URL requests that always cause unnecessary redirects
|
##### SUMMARY
<!--- Explain the problem briefly below -->
For a collection install command like `ansible-galaxy collection install -s automation_hub_ci google.cloud`, ansible-galaxy ends up making requests to API endpoints that should end with a trailing slash but ansible-galaxy drops the trailing slash.
On the server side, galaxy issues a 302 redirect to the location with the URL with the trailing slash appended. And ansible-galaxy follows it correctly.
For example,
ansible-galaxy makes the request:
`https://ci.cloud.redhat.com/api/automation-hub/v3/collections/cloud/google/versions`
which returns a 302 redirect response pointing to:
`https://ci.cloud.redhat.com/api/automation-hub/v3/collections/cloud/google/versions/'
But the redirect is superfluous and slows things down a bit. Depending on conditions, each redirect may take between 100ms and 500ms.
But that's 100-500ms additional per collection being installed.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-galaxy
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.0.dev0
config file = /home/adrian/src/ansible/ansible.cfg
configured module search path = ['/home/adrian/ansible/my-modules']
ansible python module location = /home/adrian/src/ansible/lib/ansible
executable location = /home/adrian/src/ansible/bin/ansible
python version = 3.6.9 (default, Jul 3 2019, 17:57:57) [GCC 8.3.1 20190223 (Red Hat 8.3.1-2)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
ansible-ah-example.cfg:
```paste below
[defaults]
log_path = $HOME/.ansible/log/ansible.log
[galaxy]
server_list = automation_hub_ci, galaxy
[galaxy_server.automation_hub_ci]
url=https://ci.cloud.redhat.com/api/automation-hub/
# auth_url=https://sso.qa.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
# Fill in your auth info here
# username=
# password=
# token=
[galaxy_server.galaxy]
url=https://galaxy.ansible.com/api/
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
fedora 30-ish, x86_64
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
`ANSIBLE_CONFIG=ansible-ah-example.cfg ansible-galaxy collection install -s automation_hub_ci google.cloud`
You may need to use a web debugging tool like mitmproxy to see the additional
requests. AFAIK, that info isn't shown at any ansible log/verbose/debug level.
##### EXPECTED RESULTS
One request per galaxy resource needed
##### ACTUAL RESULTS
A request to the wrong URL (no trailing slash), which is then redirected to the correct location
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/63281
|
https://github.com/ansible/ansible/pull/63294
|
66550cd3c27cd1bb72c46a573d055803e7d8f1f0
|
d1c39b90686d10d4c561ebf96ca72db2e81fe3ff
| 2019-10-09T14:47:14Z |
python
| 2020-01-08T20:25:40Z |
test/units/galaxy/test_api.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
import os
import re
import pytest
import tarfile
import tempfile
import time
from io import BytesIO, StringIO
from units.compat.mock import MagicMock
from ansible import context
from ansible.errors import AnsibleError
from ansible.galaxy import api as galaxy_api
from ansible.galaxy.api import CollectionVersionMetadata, GalaxyAPI, GalaxyError
from ansible.galaxy.token import BasicAuthToken, GalaxyToken, KeycloakToken
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.six.moves.urllib import error as urllib_error
from ansible.utils import context_objects as co
from ansible.utils.display import Display
@pytest.fixture(autouse='function')
def reset_cli_args():
co.GlobalCLIArgs._Singleton__instance = None
# Required to initialise the GalaxyAPI object
context.CLIARGS._store = {'ignore_certs': False}
yield
co.GlobalCLIArgs._Singleton__instance = None
@pytest.fixture()
def collection_artifact(tmp_path_factory):
''' Creates a collection artifact tarball that is ready to be published '''
output_dir = to_text(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Output'))
tar_path = os.path.join(output_dir, 'namespace-collection-v1.0.0.tar.gz')
with tarfile.open(tar_path, 'w:gz') as tfile:
b_io = BytesIO(b"\x00\x01\x02\x03")
tar_info = tarfile.TarInfo('test')
tar_info.size = 4
tar_info.mode = 0o0644
tfile.addfile(tarinfo=tar_info, fileobj=b_io)
yield tar_path
def get_test_galaxy_api(url, version, token_ins=None, token_value=None):
token_value = token_value or "my token"
token_ins = token_ins or GalaxyToken(token_value)
api = GalaxyAPI(None, "test", url)
    # Warning, this doesn't test g_connect() because _available_api_versions is set here. That means
# that urls for v2 servers have to append '/api/' themselves in the input data.
api._available_api_versions = {version: '%s' % version}
api.token = token_ins
return api
def test_api_no_auth():
api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/")
actual = {}
api._add_auth_token(actual, "")
assert actual == {}
def test_api_no_auth_but_required():
expected = "No access token or username set. A token can be set with --api-key, with 'ansible-galaxy login', " \
"or set in ansible.cfg."
with pytest.raises(AnsibleError, match=expected):
GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/")._add_auth_token({}, "", required=True)
def test_api_token_auth():
token = GalaxyToken(token=u"my_token")
api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=token)
actual = {}
api._add_auth_token(actual, "", required=True)
assert actual == {'Authorization': 'Token my_token'}
def test_api_token_auth_with_token_type(monkeypatch):
token = KeycloakToken(auth_url='https://api.test/')
mock_token_get = MagicMock()
mock_token_get.return_value = 'my_token'
monkeypatch.setattr(token, 'get', mock_token_get)
api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=token)
actual = {}
api._add_auth_token(actual, "", token_type="Bearer", required=True)
assert actual == {'Authorization': 'Bearer my_token'}
def test_api_token_auth_with_v3_url(monkeypatch):
token = KeycloakToken(auth_url='https://api.test/')
mock_token_get = MagicMock()
mock_token_get.return_value = 'my_token'
monkeypatch.setattr(token, 'get', mock_token_get)
api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=token)
actual = {}
api._add_auth_token(actual, "https://galaxy.ansible.com/api/v3/resource/name", required=True)
assert actual == {'Authorization': 'Bearer my_token'}
def test_api_token_auth_with_v2_url():
token = GalaxyToken(token=u"my_token")
api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=token)
actual = {}
    # Add 'v3' to a random part of the URL; the check should only match 'v2' as a full URI path segment.
api._add_auth_token(actual, "https://galaxy.ansible.com/api/v2/resourcev3/name", required=True)
assert actual == {'Authorization': 'Token my_token'}
def test_api_basic_auth_password():
token = BasicAuthToken(username=u"user", password=u"pass")
api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=token)
actual = {}
api._add_auth_token(actual, "", required=True)
assert actual == {'Authorization': 'Basic dXNlcjpwYXNz'}
def test_api_basic_auth_no_password():
token = BasicAuthToken(username=u"user")
api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=token)
actual = {}
api._add_auth_token(actual, "", required=True)
assert actual == {'Authorization': 'Basic dXNlcjo='}
def test_api_dont_override_auth_header():
api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/")
actual = {'Authorization': 'Custom token'}
api._add_auth_token(actual, "", required=True)
assert actual == {'Authorization': 'Custom token'}
def test_initialise_galaxy(monkeypatch):
mock_open = MagicMock()
mock_open.side_effect = [
StringIO(u'{"available_versions":{"v1":"v1/"}}'),
StringIO(u'{"token":"my token"}'),
]
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/")
actual = api.authenticate("github_token")
assert len(api.available_api_versions) == 2
assert api.available_api_versions['v1'] == u'v1/'
assert api.available_api_versions['v2'] == u'v2/'
assert actual == {u'token': u'my token'}
assert mock_open.call_count == 2
assert mock_open.mock_calls[0][1][0] == 'https://galaxy.ansible.com/api/'
assert 'ansible-galaxy' in mock_open.mock_calls[0][2]['http_agent']
assert mock_open.mock_calls[1][1][0] == 'https://galaxy.ansible.com/api/v1/tokens/'
assert 'ansible-galaxy' in mock_open.mock_calls[1][2]['http_agent']
assert mock_open.mock_calls[1][2]['data'] == 'github_token=github_token'
def test_initialise_galaxy_with_auth(monkeypatch):
mock_open = MagicMock()
mock_open.side_effect = [
StringIO(u'{"available_versions":{"v1":"v1/"}}'),
StringIO(u'{"token":"my token"}'),
]
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=GalaxyToken(token='my_token'))
actual = api.authenticate("github_token")
assert len(api.available_api_versions) == 2
assert api.available_api_versions['v1'] == u'v1/'
assert api.available_api_versions['v2'] == u'v2/'
assert actual == {u'token': u'my token'}
assert mock_open.call_count == 2
assert mock_open.mock_calls[0][1][0] == 'https://galaxy.ansible.com/api/'
assert 'ansible-galaxy' in mock_open.mock_calls[0][2]['http_agent']
assert mock_open.mock_calls[1][1][0] == 'https://galaxy.ansible.com/api/v1/tokens/'
assert 'ansible-galaxy' in mock_open.mock_calls[1][2]['http_agent']
assert mock_open.mock_calls[1][2]['data'] == 'github_token=github_token'
def test_initialise_automation_hub(monkeypatch):
mock_open = MagicMock()
mock_open.side_effect = [
StringIO(u'{"available_versions":{"v2": "v2/", "v3":"v3/"}}'),
]
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
token = KeycloakToken(auth_url='https://api.test/')
mock_token_get = MagicMock()
mock_token_get.return_value = 'my_token'
monkeypatch.setattr(token, 'get', mock_token_get)
api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=token)
assert len(api.available_api_versions) == 2
assert api.available_api_versions['v2'] == u'v2/'
assert api.available_api_versions['v3'] == u'v3/'
assert mock_open.mock_calls[0][1][0] == 'https://galaxy.ansible.com/api/'
assert 'ansible-galaxy' in mock_open.mock_calls[0][2]['http_agent']
assert mock_open.mock_calls[0][2]['headers'] == {'Authorization': 'Bearer my_token'}
def test_initialise_unknown(monkeypatch):
mock_open = MagicMock()
mock_open.side_effect = [
urllib_error.HTTPError('https://galaxy.ansible.com/api/', 500, 'msg', {}, StringIO(u'{"msg":"raw error"}')),
urllib_error.HTTPError('https://galaxy.ansible.com/api/api/', 500, 'msg', {}, StringIO(u'{"msg":"raw error"}')),
]
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=GalaxyToken(token='my_token'))
expected = "Error when finding available api versions from test (%s) (HTTP Code: 500, Message: msg)" \
% api.api_server
with pytest.raises(AnsibleError, match=re.escape(expected)):
api.authenticate("github_token")
def test_get_available_api_versions(monkeypatch):
mock_open = MagicMock()
mock_open.side_effect = [
StringIO(u'{"available_versions":{"v1":"v1/","v2":"v2/"}}'),
]
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/")
actual = api.available_api_versions
assert len(actual) == 2
assert actual['v1'] == u'v1/'
assert actual['v2'] == u'v2/'
assert mock_open.call_count == 1
assert mock_open.mock_calls[0][1][0] == 'https://galaxy.ansible.com/api/'
assert 'ansible-galaxy' in mock_open.mock_calls[0][2]['http_agent']
def test_publish_collection_missing_file():
fake_path = u'/fake/ÅÑŚÌβŁÈ/path'
expected = to_native("The collection path specified '%s' does not exist." % fake_path)
api = get_test_galaxy_api("https://galaxy.ansible.com/api/", "v2")
with pytest.raises(AnsibleError, match=expected):
api.publish_collection(fake_path)
def test_publish_collection_not_a_tarball():
expected = "The collection path specified '{0}' is not a tarball, use 'ansible-galaxy collection build' to " \
"create a proper release artifact."
api = get_test_galaxy_api("https://galaxy.ansible.com/api/", "v2")
with tempfile.NamedTemporaryFile(prefix=u'ÅÑŚÌβŁÈ') as temp_file:
temp_file.write(b"\x00")
temp_file.flush()
with pytest.raises(AnsibleError, match=expected.format(to_native(temp_file.name))):
api.publish_collection(temp_file.name)
def test_publish_collection_unsupported_version():
expected = "Galaxy action publish_collection requires API versions 'v2, v3' but only 'v1' are available on test " \
"https://galaxy.ansible.com/api/"
api = get_test_galaxy_api("https://galaxy.ansible.com/api/", "v1")
with pytest.raises(AnsibleError, match=expected):
api.publish_collection("path")
@pytest.mark.parametrize('api_version, collection_url', [
('v2', 'collections'),
('v3', 'artifacts/collections'),
])
def test_publish_collection(api_version, collection_url, collection_artifact, monkeypatch):
api = get_test_galaxy_api("https://galaxy.ansible.com/api/", api_version)
mock_call = MagicMock()
mock_call.return_value = {'task': 'http://task.url/'}
monkeypatch.setattr(api, '_call_galaxy', mock_call)
actual = api.publish_collection(collection_artifact)
assert actual == 'http://task.url/'
assert mock_call.call_count == 1
assert mock_call.mock_calls[0][1][0] == 'https://galaxy.ansible.com/api/%s/%s/' % (api_version, collection_url)
assert mock_call.mock_calls[0][2]['headers']['Content-length'] == len(mock_call.mock_calls[0][2]['args'])
assert mock_call.mock_calls[0][2]['headers']['Content-type'].startswith(
'multipart/form-data; boundary=--------------------------')
assert mock_call.mock_calls[0][2]['args'].startswith(b'--------------------------')
assert mock_call.mock_calls[0][2]['method'] == 'POST'
assert mock_call.mock_calls[0][2]['auth_required'] is True
@pytest.mark.parametrize('api_version, collection_url, response, expected', [
('v2', 'collections', {},
'Error when publishing collection to test (%s) (HTTP Code: 500, Message: msg Code: Unknown)'),
('v2', 'collections', {
'message': u'Galaxy error messäge',
'code': 'GWE002',
}, u'Error when publishing collection to test (%s) (HTTP Code: 500, Message: Galaxy error messäge Code: GWE002)'),
('v3', 'artifact/collections', {},
'Error when publishing collection to test (%s) (HTTP Code: 500, Message: msg Code: Unknown)'),
('v3', 'artifact/collections', {
'errors': [
{
'code': 'conflict.collection_exists',
'detail': 'Collection "mynamespace-mycollection-4.1.1" already exists.',
'title': 'Conflict.',
'status': '400',
},
{
'code': 'quantum_improbability',
'title': u'Rändom(?) quantum improbability.',
'source': {'parameter': 'the_arrow_of_time'},
'meta': {'remediation': 'Try again before'},
},
],
}, u'Error when publishing collection to test (%s) (HTTP Code: 500, Message: Collection '
u'"mynamespace-mycollection-4.1.1" already exists. Code: conflict.collection_exists), (HTTP Code: 500, '
u'Message: Rändom(?) quantum improbability. Code: quantum_improbability)')
])
def test_publish_failure(api_version, collection_url, response, expected, collection_artifact, monkeypatch):
api = get_test_galaxy_api('https://galaxy.server.com/api/', api_version)
expected_url = '%s/api/%s/%s' % (api.api_server, api_version, collection_url)
mock_open = MagicMock()
mock_open.side_effect = urllib_error.HTTPError(expected_url, 500, 'msg', {},
StringIO(to_text(json.dumps(response))))
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
with pytest.raises(GalaxyError, match=re.escape(to_native(expected % api.api_server))):
api.publish_collection(collection_artifact)
@pytest.mark.parametrize('server_url, api_version, token_type, token_ins, import_uri, full_import_uri', [
('https://galaxy.server.com/api', 'v2', 'Token', GalaxyToken('my token'),
'1234',
'https://galaxy.server.com/api/v2/collection-imports/1234'),
('https://galaxy.server.com/api/automation-hub/', 'v3', 'Bearer', KeycloakToken(auth_url='https://api.test/'),
'1234',
'https://galaxy.server.com/api/automation-hub/v3/imports/collections/1234/'),
])
def test_wait_import_task(server_url, api_version, token_type, token_ins, import_uri, full_import_uri, monkeypatch):
api = get_test_galaxy_api(server_url, api_version, token_ins=token_ins)
if token_ins:
mock_token_get = MagicMock()
mock_token_get.return_value = 'my token'
monkeypatch.setattr(token_ins, 'get', mock_token_get)
mock_open = MagicMock()
mock_open.return_value = StringIO(u'{"state":"success","finished_at":"time"}')
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
api.wait_import_task(import_uri)
assert mock_open.call_count == 1
assert mock_open.mock_calls[0][1][0] == full_import_uri
assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type
assert mock_display.call_count == 1
assert mock_display.mock_calls[0][1][0] == 'Waiting until Galaxy import task %s has completed' % full_import_uri
@pytest.mark.parametrize('server_url, api_version, token_type, token_ins, import_uri, full_import_uri', [
('https://galaxy.server.com/api/', 'v2', 'Token', GalaxyToken('my token'),
'1234',
'https://galaxy.server.com/api/v2/collection-imports/1234'),
('https://galaxy.server.com/api/automation-hub', 'v3', 'Bearer', KeycloakToken(auth_url='https://api.test/'),
'1234',
'https://galaxy.server.com/api/automation-hub/v3/imports/collections/1234/'),
])
def test_wait_import_task_multiple_requests(server_url, api_version, token_type, token_ins, import_uri, full_import_uri, monkeypatch):
api = get_test_galaxy_api(server_url, api_version, token_ins=token_ins)
if token_ins:
mock_token_get = MagicMock()
mock_token_get.return_value = 'my token'
monkeypatch.setattr(token_ins, 'get', mock_token_get)
mock_open = MagicMock()
mock_open.side_effect = [
StringIO(u'{"state":"test"}'),
StringIO(u'{"state":"success","finished_at":"time"}'),
]
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
mock_vvv = MagicMock()
monkeypatch.setattr(Display, 'vvv', mock_vvv)
monkeypatch.setattr(time, 'sleep', MagicMock())
api.wait_import_task(import_uri)
assert mock_open.call_count == 2
assert mock_open.mock_calls[0][1][0] == full_import_uri
assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type
assert mock_open.mock_calls[1][1][0] == full_import_uri
assert mock_open.mock_calls[1][2]['headers']['Authorization'] == '%s my token' % token_type
assert mock_display.call_count == 1
assert mock_display.mock_calls[0][1][0] == 'Waiting until Galaxy import task %s has completed' % full_import_uri
assert mock_vvv.call_count == 1
assert mock_vvv.mock_calls[0][1][0] == \
'Galaxy import process has a status of test, wait 2 seconds before trying again'
@pytest.mark.parametrize('server_url, api_version, token_type, token_ins, import_uri, full_import_uri', [
('https://galaxy.server.com/api/', 'v2', 'Token', GalaxyToken('my token'),
'1234',
'https://galaxy.server.com/api/v2/collection-imports/1234'),
('https://galaxy.server.com/api/automation-hub/', 'v3', 'Bearer', KeycloakToken(auth_url='https://api.test/'),
'1234',
'https://galaxy.server.com/api/automation-hub/v3/imports/collections/1234/'),
])
def test_wait_import_task_with_failure(server_url, api_version, token_type, token_ins, import_uri, full_import_uri, monkeypatch):
api = get_test_galaxy_api(server_url, api_version, token_ins=token_ins)
if token_ins:
mock_token_get = MagicMock()
mock_token_get.return_value = 'my token'
monkeypatch.setattr(token_ins, 'get', mock_token_get)
mock_open = MagicMock()
mock_open.side_effect = [
StringIO(to_text(json.dumps({
'finished_at': 'some_time',
'state': 'failed',
'error': {
'code': 'GW001',
'description': u'Becäuse I said so!',
},
'messages': [
{
'level': 'error',
'message': u'Somé error',
},
{
'level': 'warning',
'message': u'Some wärning',
},
{
'level': 'info',
'message': u'Somé info',
},
],
}))),
]
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
mock_vvv = MagicMock()
monkeypatch.setattr(Display, 'vvv', mock_vvv)
mock_warn = MagicMock()
monkeypatch.setattr(Display, 'warning', mock_warn)
mock_err = MagicMock()
monkeypatch.setattr(Display, 'error', mock_err)
expected = to_native(u'Galaxy import process failed: Becäuse I said so! (Code: GW001)')
with pytest.raises(AnsibleError, match=re.escape(expected)):
api.wait_import_task(import_uri)
assert mock_open.call_count == 1
assert mock_open.mock_calls[0][1][0] == full_import_uri
assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type
assert mock_display.call_count == 1
assert mock_display.mock_calls[0][1][0] == 'Waiting until Galaxy import task %s has completed' % full_import_uri
assert mock_vvv.call_count == 1
assert mock_vvv.mock_calls[0][1][0] == u'Galaxy import message: info - Somé info'
assert mock_warn.call_count == 1
assert mock_warn.mock_calls[0][1][0] == u'Galaxy import warning message: Some wärning'
assert mock_err.call_count == 1
assert mock_err.mock_calls[0][1][0] == u'Galaxy import error message: Somé error'
@pytest.mark.parametrize('server_url, api_version, token_type, token_ins, import_uri, full_import_uri', [
('https://galaxy.server.com/api/', 'v2', 'Token', GalaxyToken('my_token'),
'1234',
'https://galaxy.server.com/api/v2/collection-imports/1234'),
('https://galaxy.server.com/api/automation-hub/', 'v3', 'Bearer', KeycloakToken(auth_url='https://api.test/'),
'1234',
'https://galaxy.server.com/api/automation-hub/v3/imports/collections/1234/'),
])
def test_wait_import_task_with_failure_no_error(server_url, api_version, token_type, token_ins, import_uri, full_import_uri, monkeypatch):
api = get_test_galaxy_api(server_url, api_version, token_ins=token_ins)
if token_ins:
mock_token_get = MagicMock()
mock_token_get.return_value = 'my token'
monkeypatch.setattr(token_ins, 'get', mock_token_get)
mock_open = MagicMock()
mock_open.side_effect = [
StringIO(to_text(json.dumps({
'finished_at': 'some_time',
'state': 'failed',
'error': {},
'messages': [
{
'level': 'error',
'message': u'Somé error',
},
{
'level': 'warning',
'message': u'Some wärning',
},
{
'level': 'info',
'message': u'Somé info',
},
],
}))),
]
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
mock_vvv = MagicMock()
monkeypatch.setattr(Display, 'vvv', mock_vvv)
mock_warn = MagicMock()
monkeypatch.setattr(Display, 'warning', mock_warn)
mock_err = MagicMock()
monkeypatch.setattr(Display, 'error', mock_err)
expected = 'Galaxy import process failed: Unknown error, see %s for more details \\(Code: UNKNOWN\\)' % full_import_uri
with pytest.raises(AnsibleError, match=expected):
api.wait_import_task(import_uri)
assert mock_open.call_count == 1
assert mock_open.mock_calls[0][1][0] == full_import_uri
assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type
assert mock_display.call_count == 1
assert mock_display.mock_calls[0][1][0] == 'Waiting until Galaxy import task %s has completed' % full_import_uri
assert mock_vvv.call_count == 1
assert mock_vvv.mock_calls[0][1][0] == u'Galaxy import message: info - Somé info'
assert mock_warn.call_count == 1
assert mock_warn.mock_calls[0][1][0] == u'Galaxy import warning message: Some wärning'
assert mock_err.call_count == 1
assert mock_err.mock_calls[0][1][0] == u'Galaxy import error message: Somé error'
@pytest.mark.parametrize('server_url, api_version, token_type, token_ins, import_uri, full_import_uri', [
('https://galaxy.server.com/api', 'v2', 'Token', GalaxyToken('my token'),
'1234',
'https://galaxy.server.com/api/v2/collection-imports/1234'),
('https://galaxy.server.com/api/automation-hub', 'v3', 'Bearer', KeycloakToken(auth_url='https://api.test/'),
'1234',
'https://galaxy.server.com/api/automation-hub/v3/imports/collections/1234/'),
])
def test_wait_import_task_timeout(server_url, api_version, token_type, token_ins, import_uri, full_import_uri, monkeypatch):
api = get_test_galaxy_api(server_url, api_version, token_ins=token_ins)
if token_ins:
mock_token_get = MagicMock()
mock_token_get.return_value = 'my token'
monkeypatch.setattr(token_ins, 'get', mock_token_get)
def return_response(*args, **kwargs):
return StringIO(u'{"state":"waiting"}')
mock_open = MagicMock()
mock_open.side_effect = return_response
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
mock_vvv = MagicMock()
monkeypatch.setattr(Display, 'vvv', mock_vvv)
monkeypatch.setattr(time, 'sleep', MagicMock())
expected = "Timeout while waiting for the Galaxy import process to finish, check progress at '%s'" % full_import_uri
with pytest.raises(AnsibleError, match=expected):
api.wait_import_task(import_uri, 1)
assert mock_open.call_count > 1
assert mock_open.mock_calls[0][1][0] == full_import_uri
assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type
assert mock_open.mock_calls[1][1][0] == full_import_uri
assert mock_open.mock_calls[1][2]['headers']['Authorization'] == '%s my token' % token_type
assert mock_display.call_count == 1
assert mock_display.mock_calls[0][1][0] == 'Waiting until Galaxy import task %s has completed' % full_import_uri
# expected_wait_msg = 'Galaxy import process has a status of waiting, wait {0} seconds before trying again'
assert mock_vvv.call_count > 9 # 1st is opening Galaxy token file.
# FIXME:
# assert mock_vvv.mock_calls[1][1][0] == expected_wait_msg.format(2)
# assert mock_vvv.mock_calls[2][1][0] == expected_wait_msg.format(3)
# assert mock_vvv.mock_calls[3][1][0] == expected_wait_msg.format(4)
# assert mock_vvv.mock_calls[4][1][0] == expected_wait_msg.format(6)
# assert mock_vvv.mock_calls[5][1][0] == expected_wait_msg.format(10)
# assert mock_vvv.mock_calls[6][1][0] == expected_wait_msg.format(15)
# assert mock_vvv.mock_calls[7][1][0] == expected_wait_msg.format(22)
# assert mock_vvv.mock_calls[8][1][0] == expected_wait_msg.format(30)
@pytest.mark.parametrize('api_version, token_type, version, token_ins', [
('v2', None, 'v2.1.13', None),
('v3', 'Bearer', 'v1.0.0', KeycloakToken(auth_url='https://api.test/api/automation-hub/')),
])
def test_get_collection_version_metadata_no_version(api_version, token_type, version, token_ins, monkeypatch):
api = get_test_galaxy_api('https://galaxy.server.com/api/', api_version, token_ins=token_ins)
if token_ins:
mock_token_get = MagicMock()
mock_token_get.return_value = 'my token'
monkeypatch.setattr(token_ins, 'get', mock_token_get)
mock_open = MagicMock()
mock_open.side_effect = [
StringIO(to_text(json.dumps({
'download_url': 'https://downloadme.com',
'artifact': {
'sha256': 'ac47b6fac117d7c171812750dacda655b04533cf56b31080b82d1c0db3c9d80f',
},
'namespace': {
'name': 'namespace',
},
'collection': {
'name': 'collection',
},
'version': version,
'metadata': {
'dependencies': {},
}
}))),
]
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
actual = api.get_collection_version_metadata('namespace', 'collection', version)
assert isinstance(actual, CollectionVersionMetadata)
assert actual.namespace == u'namespace'
assert actual.name == u'collection'
assert actual.download_url == u'https://downloadme.com'
assert actual.artifact_sha256 == u'ac47b6fac117d7c171812750dacda655b04533cf56b31080b82d1c0db3c9d80f'
assert actual.version == version
assert actual.dependencies == {}
assert mock_open.call_count == 1
assert mock_open.mock_calls[0][1][0] == '%s%s/collections/namespace/collection/versions/%s' \
% (api.api_server, api_version, version)
    # v2 calls don't need auth, so no authz header or token_type
if token_type:
assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type
@pytest.mark.parametrize('api_version, token_type, token_ins, response', [
('v2', None, None, {
'count': 2,
'next': None,
'previous': None,
'results': [
{
'version': '1.0.0',
'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.0',
},
{
'version': '1.0.1',
'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.1',
},
],
}),
# TODO: Verify this once Automation Hub is actually out
('v3', 'Bearer', KeycloakToken(auth_url='https://api.test/'), {
'count': 2,
'next': None,
'previous': None,
'data': [
{
'version': '1.0.0',
'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.0',
},
{
'version': '1.0.1',
'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.1',
},
],
}),
])
def test_get_collection_versions(api_version, token_type, token_ins, response, monkeypatch):
api = get_test_galaxy_api('https://galaxy.server.com/api/', api_version, token_ins=token_ins)
if token_ins:
mock_token_get = MagicMock()
mock_token_get.return_value = 'my token'
monkeypatch.setattr(token_ins, 'get', mock_token_get)
mock_open = MagicMock()
mock_open.side_effect = [
StringIO(to_text(json.dumps(response))),
]
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
actual = api.get_collection_versions('namespace', 'collection')
assert actual == [u'1.0.0', u'1.0.1']
assert mock_open.call_count == 1
assert mock_open.mock_calls[0][1][0] == 'https://galaxy.server.com/api/%s/collections/namespace/collection/' \
'versions' % api_version
if token_ins:
assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type
@pytest.mark.parametrize('api_version, token_type, token_ins, responses', [
('v2', None, None, [
{
'count': 6,
'next': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/?page=2',
'previous': None,
'results': [
{
'version': '1.0.0',
'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.0',
},
{
'version': '1.0.1',
'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.1',
},
],
},
{
'count': 6,
'next': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/?page=3',
'previous': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions',
'results': [
{
'version': '1.0.2',
'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.2',
},
{
'version': '1.0.3',
'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.3',
},
],
},
{
'count': 6,
'next': None,
'previous': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/?page=2',
'results': [
{
'version': '1.0.4',
'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.4',
},
{
'version': '1.0.5',
'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.5',
},
],
},
]),
('v3', 'Bearer', KeycloakToken(auth_url='https://api.test/'), [
{
'count': 6,
'links': {
'next': 'https://galaxy.server.com/api/v3/collections/namespace/collection/versions/?page=2',
'previous': None,
},
'data': [
{
'version': '1.0.0',
'href': 'https://galaxy.server.com/api/v3/collections/namespace/collection/versions/1.0.0',
},
{
'version': '1.0.1',
'href': 'https://galaxy.server.com/api/v3/collections/namespace/collection/versions/1.0.1',
},
],
},
{
'count': 6,
'links': {
'next': 'https://galaxy.server.com/api/v3/collections/namespace/collection/versions/?page=3',
'previous': 'https://galaxy.server.com/api/v3/collections/namespace/collection/versions',
},
'data': [
{
'version': '1.0.2',
'href': 'https://galaxy.server.com/api/v3/collections/namespace/collection/versions/1.0.2',
},
{
'version': '1.0.3',
'href': 'https://galaxy.server.com/api/v3/collections/namespace/collection/versions/1.0.3',
},
],
},
{
'count': 6,
'links': {
'next': None,
'previous': 'https://galaxy.server.com/api/v3/collections/namespace/collection/versions/?page=2',
},
'data': [
{
'version': '1.0.4',
'href': 'https://galaxy.server.com/api/v3/collections/namespace/collection/versions/1.0.4',
},
{
'version': '1.0.5',
'href': 'https://galaxy.server.com/api/v3/collections/namespace/collection/versions/1.0.5',
},
],
},
]),
])
def test_get_collection_versions_pagination(api_version, token_type, token_ins, responses, monkeypatch):
api = get_test_galaxy_api('https://galaxy.server.com/api/', api_version, token_ins=token_ins)
if token_ins:
mock_token_get = MagicMock()
mock_token_get.return_value = 'my token'
monkeypatch.setattr(token_ins, 'get', mock_token_get)
mock_open = MagicMock()
mock_open.side_effect = [StringIO(to_text(json.dumps(r))) for r in responses]
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
actual = api.get_collection_versions('namespace', 'collection')
assert actual == [u'1.0.0', u'1.0.1', u'1.0.2', u'1.0.3', u'1.0.4', u'1.0.5']
assert mock_open.call_count == 3
assert mock_open.mock_calls[0][1][0] == 'https://galaxy.server.com/api/%s/collections/namespace/collection/' \
'versions' % api_version
assert mock_open.mock_calls[1][1][0] == 'https://galaxy.server.com/api/%s/collections/namespace/collection/' \
'versions/?page=2' % api_version
assert mock_open.mock_calls[2][1][0] == 'https://galaxy.server.com/api/%s/collections/namespace/collection/' \
'versions/?page=3' % api_version
if token_type:
assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type
assert mock_open.mock_calls[1][2]['headers']['Authorization'] == '%s my token' % token_type
assert mock_open.mock_calls[2][2]['headers']['Authorization'] == '%s my token' % token_type
@pytest.mark.parametrize('responses', [
[
{
'count': 2,
'results': [{'name': '3.5.1', }, {'name': '3.5.2'}],
'next_link': None,
'next': None,
'previous_link': None,
'previous': None
},
],
[
{
'count': 2,
'results': [{'name': '3.5.1'}],
'next_link': '/api/v1/roles/432/versions/?page=2&page_size=50',
'next': '/roles/432/versions/?page=2&page_size=50',
'previous_link': None,
'previous': None
},
{
'count': 2,
'results': [{'name': '3.5.2'}],
'next_link': None,
'next': None,
'previous_link': '/api/v1/roles/432/versions/?&page_size=50',
'previous': '/roles/432/versions/?page_size=50',
},
]
])
def test_get_role_versions_pagination(monkeypatch, responses):
api = get_test_galaxy_api('https://galaxy.com/api/', 'v1')
mock_open = MagicMock()
mock_open.side_effect = [StringIO(to_text(json.dumps(r))) for r in responses]
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
actual = api.fetch_role_related('versions', 432)
assert actual == [{'name': '3.5.1'}, {'name': '3.5.2'}]
assert mock_open.call_count == len(responses)
assert mock_open.mock_calls[0][1][0] == 'https://galaxy.com/api/v1/roles/432/versions/?page_size=50'
if len(responses) == 2:
assert mock_open.mock_calls[1][1][0] == 'https://galaxy.com/api/v1/roles/432/versions/?page=2&page_size=50'
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,819 |
routeros_command fails if username contains period char
|
##### SUMMARY
When using the RouterOS module with a username that contains a period ("."), the command fails.
Changing the username fixes the problem.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
routeros_command
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.2
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/lucas/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.0 (default, Oct 23 2019, 18:51:26) [GCC 9.2.0]
```
##### CONFIGURATION
default unchanged configuration
##### OS / ENVIRONMENT
Manjaro Linux (Arch Linux) Kernel 4.19.88-1-MANJARO
##### STEPS TO REPRODUCE
site.yml
```yaml
- hosts: all
roles:
- { role: routeros-common, tags: 'routeros-common' }
```
inventory:
```
[routeros]
sw05
```
Role:
```yaml
- name: "configure common Mikrotik settings"
routeros_command:
commands:
- /system routerboard print
- /system identity print
```
Working group_vars/routeros.yml:
```yaml
ansible_connection: network_cli
ansible_network_os: routeros
ansible_user: lucas
```
not working group_vars/routeros.yml:
```yaml
ansible_connection: network_cli
ansible_network_os: routeros
ansible_user: lucas.pless
```
I tried adding +cet512w to the username, and using the "-u" flag with the ansible-playbook command, but it makes no difference.
##### EXPECTED RESULTS
I expect this result to be printed:
```
ansible-playbook -i inventory site.yml --limit "sw05"
PLAY [all] ************************************************************************************************************************************************************
TASK [Gathering Facts] ************************************************************************************************************************************************
ok: [sw05]
TASK [routeros-common : configure common Mikrotik settings] ***********************************************************************************************************
ok: [sw05]
TASK [routeros-common : Display resource statistics (routeros)] *******************************************************************************************************
ok: [sw05]
PLAY RECAP ************************************************************************************************************************************************************
sw05 : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
##### ACTUAL RESULTS
Instead i get this error message:
```
ansible-playbook -i inventory site.yml --limit "sw05"
PLAY [all] ************************************************************************************************************************************************************
TASK [Gathering Facts] ************************************************************************************************************************************************
ok: [sw05]
TASK [routeros-common : configure common Mikrotik settings] ***********************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible.module_utils.connection.ConnectionError: timeout value 30 seconds reached while trying to send command: b'/system resource print'
fatal: [sw05]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/home/lucas/.ansible/tmp/ansible-local-231793hztvso5/ansible-tmp-1576263789.508171-260022264015288/AnsiballZ_routeros_command.py\", line 102, in <module>\n _ansiballz_main()\n File \"/home/lucas/.ansible/tmp/ansible-local-231793hztvso5/ansible-tmp-1576263789.508171-260022264015288/AnsiballZ_routeros_command.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/home/lucas/.ansible/tmp/ansible-local-231793hztvso5/ansible-tmp-1576263789.508171-260022264015288/AnsiballZ_routeros_command.py\", line 40, in invoke_module\n runpy.run_module(mod_name='ansible.modules.network.routeros.routeros_command', init_globals=None, run_name='__main__', alter_sys=True)\n File \"/usr/lib/python3.8/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib/python3.8/runpy.py\", line 95, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/lib/python3.8/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_routeros_command_payload_djxtu67b/ansible_routeros_command_payload.zip/ansible/modules/network/routeros/routeros_command.py\", line 187, in <module>\n File \"/tmp/ansible_routeros_command_payload_djxtu67b/ansible_routeros_command_payload.zip/ansible/modules/network/routeros/routeros_command.py\", line 157, in main\n File \"/tmp/ansible_routeros_command_payload_djxtu67b/ansible_routeros_command_payload.zip/ansible/module_utils/network/routeros/routeros.py\", line 125, in run_commands\n File \"/tmp/ansible_routeros_command_payload_djxtu67b/ansible_routeros_command_payload.zip/ansible/module_utils/network/routeros/routeros.py\", line 55, in get_connection\n File \"/tmp/ansible_routeros_command_payload_djxtu67b/ansible_routeros_command_payload.zip/ansible/module_utils/network/routeros/routeros.py\", line 69, in get_capabilities\n File \"/tmp/ansible_routeros_command_payload_djxtu67b/ansible_routeros_command_payload.zip/ansible/module_utils/connection.py\", line 185, in __rpc__\nansible.module_utils.connection.ConnectionError: timeout value 30 seconds reached while trying to send command: b'/system resource print'\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
PLAY RECAP ************************************************************************************************************************************************************
sw05 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
Of course, the ssh login with both usernames works via the public key method.
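A minimal reproduction against the terminal plugin's prompt pattern (the widened pattern below is just one possible fix; the merged patch may differ):
```python
import re

prompt_re = re.compile(br"\[\w+\@[\w\s\-\.]+\] ?> ?$")

print(bool(prompt_re.search(b"[lucas@sw05] > ")))        # True
print(bool(prompt_re.search(b"[lucas.pless@sw05] > ")))  # False: \w+ stops at '.'

# Widening the username part to also allow '.', '-' and '+' (for console
# flags like +cet512w) makes the dotted prompt match:
fixed_re = re.compile(br"\[[\w\-\.\+]+\@[\w\s\-\.]+\] ?> ?$")
print(bool(fixed_re.search(b"[lucas.pless@sw05] > ")))   # True
```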
|
https://github.com/ansible/ansible/issues/65819
|
https://github.com/ansible/ansible/pull/65905
|
b05529c5a30a196f89bef3837523efaa24cb3d9e
|
fff613dd3eaec2fcf1eb09e66feb358476bbb4cd
| 2019-12-13T19:07:55Z |
python
| 2020-01-09T04:04:47Z |
lib/ansible/plugins/terminal/routeros.py
|
#
# (c) 2016 Red Hat Inc.
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
import re
from ansible.errors import AnsibleConnectionFailure
from ansible.module_utils._text import to_text, to_bytes
from ansible.plugins.terminal import TerminalBase
from ansible.utils.display import Display
display = Display()
class TerminalModule(TerminalBase):
ansi_re = [
# check ECMA-48 Section 5.4 (Control Sequences)
re.compile(br'(\x1b\[\?1h\x1b=)'),
re.compile(br'((?:\x9b|\x1b\x5b)[\x30-\x3f]*[\x20-\x2f]*[\x40-\x7e])'),
re.compile(br'\x08.')
]
terminal_initial_prompt = [
br'\x1bZ',
]
terminal_initial_answer = b'\x1b/Z'
terminal_stdout_re = [
re.compile(br"\x1b<"),
re.compile(br"\[\w+\@[\w\s\-\.]+\] ?> ?$"),
re.compile(br"Please press \"Enter\" to continue!"),
re.compile(br"Do you want to see the software license\? \[Y\/n\]: ?"),
]
terminal_stderr_re = [
re.compile(br"\nbad command name"),
re.compile(br"\nno such item"),
re.compile(br"\ninvalid value for"),
]
def on_open_shell(self):
prompt = self._get_prompt()
try:
if prompt.strip().endswith(b':'):
self._exec_cli_command(b' ')
if prompt.strip().endswith(b'!'):
self._exec_cli_command(b'\n')
except AnsibleConnectionFailure:
raise AnsibleConnectionFailure('unable to bypass license prompt')
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,069 |
nxos_vrf purge breaks with empty aggregate
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
VRFs are deployed with the aggregate function of nxos_vrf. If all VRFs are removed from the host inventory, the aggregate param will be empty and the purge param should remove all configured VRFs. This procedure breaks due to a wrong check of the aggregate parameter.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
nxos_vrf
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.1
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.8 (default, Oct 7 2019, 12:59:55) [GCC 8.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
CACHE_PLUGIN(/etc/ansible/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/etc/ansible/ansible.cfg) = /rnx/config/ansible/facts
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
```
NAME="Ubuntu"
VERSION="18.04.2 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.2 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
```
```
Software
BIOS: version 05.39
NXOS: version 9.3(2)
BIOS compile time: 08/30/2019
NXOS image file is: bootflash:///nxos.9.3.2.bin
NXOS compile time: 11/4/2019 12:00:00 [11/04/2019 22:13:33]
Hardware
cisco Nexus9000 C93180YC-FX Chassis
Intel(R) Xeon(R) CPU D-1528 @ 1.90GHz with 65808216 kB of memory.
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- nxos_vrf:
aggregate:
- { name: firstVRF }
state: present
tags: first
- nxos_vrf:
aggregate:
- {}
purge: true
tags: second
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Create the VRF in the step tagged 'first'; remove _all_ VRFs in the step tagged 'second'.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The first VRF is created; purging breaks.
<!--- Paste verbatim command output between quotes -->
```paste below
PLAY [sw-xlab-02-0012] *******************************************************************************************************************************************************************
TASK [nxos_vrf] **************************************************************************************************************************************************************************
changed: [sw-xlab-02-0012]
TASK [nxos_vrf] **************************************************************************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: object of type 'NoneType' has no len()
fatal: [sw-xlab-02-0012]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-local-1724lhatgxa4/ansible-tmp-1574177989.7869391-103540208340962/AnsiballZ_nxos_vrf.py\", line 102, in <module>\n _ansiballz_main()\n File \"/root/.ansible/tmp/ansible-local-1724lhatgxa4/ansible-tmp-1574177989.7869391-103540208340962/AnsiballZ_nxos_vrf.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/root/.ansible/tmp/ansible-local-1724lhatgxa4/ansible-tmp-1574177989.7869391-103540208340962/AnsiballZ_nxos_vrf.py\", line 40, in invoke_module\n runpy.run_module(mod_name='ansible.modules.network.nxos.nxos_vrf', init_globals=None, run_name='__main__', alter_sys=True)\n File \"/usr/lib/python3.6/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib/python3.6/runpy.py\", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/lib/python3.6/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_nxos_vrf_payload_ezywkz9m/ansible_nxos_vrf_payload.zip/ansible/modules/network/nxos/nxos_vrf.py\", line 516, in <module>\n File \"/tmp/ansible_nxos_vrf_payload_ezywkz9m/ansible_nxos_vrf_payload.zip/ansible/modules/network/nxos/nxos_vrf.py\", line 500, in main\n File \"/tmp/ansible_nxos_vrf_payload_ezywkz9m/ansible_nxos_vrf_payload.zip/ansible/modules/network/nxos/nxos_vrf.py\", line 365, in map_params_to_obj\n File \"/tmp/ansible_nxos_vrf_payload_ezywkz9m/ansible_nxos_vrf_payload.zip/ansible/modules/network/nxos/nxos_vrf.py\", line 349, in validate_vrf\nTypeError: object of type 'NoneType' has no len()\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
PLAY RECAP *******************************************************************************************************************************************************************************
sw-xlab-02-0012 : ok=1 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
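The traceback ends in validate_vrf(), where len(name) is evaluated while name is None (an aggregate entry of `- {}` never supplies one). A minimal defensive sketch, assuming that is the failure mode; the actual fix may differ:
```python
def validate_vrf(name, module):
    # An aggregate entry like '- {}' leaves name as None, and len(None)
    # raises TypeError; fail cleanly instead.
    if not name:
        module.fail_json(msg='VRF name is required')
    if name == 'default':
        module.fail_json(msg='cannot use default as name of a VRF')
    if len(name) > 32:
        module.fail_json(msg='VRF name exceeded max length of 32', name=name)
    return name
```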
|
https://github.com/ansible/ansible/issues/65069
|
https://github.com/ansible/ansible/pull/66004
|
f70f4266044c410d1a0a17125d170fcf1953cd10
|
a3d67edfcaa6d04adf9da4119f2443ca35176a03
| 2019-11-19T15:50:11Z |
python
| 2020-01-09T19:43:59Z |
lib/ansible/modules/network/nxos/nxos_vrf.py
|
#!/usr/bin/python
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'network'}
DOCUMENTATION = '''
---
module: nxos_vrf
extends_documentation_fragment: nxos
version_added: "2.1"
short_description: Manages global VRF configuration.
description:
- This module provides declarative management of VRFs
on CISCO NXOS network devices.
author:
- Jason Edelman (@jedelman8)
- Gabriele Gerbino (@GGabriele)
- Trishna Guha (@trishnaguha)
notes:
- Tested against NXOSv 7.3.(0)D1(1) on VIRL
- Cisco NX-OS creates the default VRF by itself. Therefore,
you're not allowed to use default as I(vrf) name in this module.
- C(vrf) name must be shorter than 32 chars.
- VRF names are not case sensitive in NX-OS. However, the name is stored
    just as it is entered by the user and will not be changed again
    unless the VRF is removed and re-created. i.e. C(vrf=NTC) will create
    a VRF named NTC, but running it again with C(vrf=ntc) will not cause
    a configuration change.
options:
name:
description:
- Name of VRF to be managed.
required: true
aliases: [vrf]
admin_state:
description:
- Administrative state of the VRF.
default: up
choices: ['up','down']
vni:
description:
- Specify virtual network identifier. Valid values are Integer
or keyword 'default'.
version_added: "2.2"
rd:
description:
- VPN Route Distinguisher (RD). Valid values are a string in
one of the route-distinguisher formats (ASN2:NN, ASN4:NN, or
IPV4:NN); the keyword 'auto', or the keyword 'default'.
version_added: "2.2"
interfaces:
description:
- List of interfaces to check that the VRF has been
      configured correctly, or keyword 'default'.
version_added: 2.5
associated_interfaces:
description:
- This is an intent option that checks the operational state of the given vrf C(name)
      for associated interfaces. If the value of C(associated_interfaces) does not match
      the operational state of the vrf interfaces on the device, the module will fail.
version_added: "2.5"
aggregate:
description: List of VRFs definitions.
version_added: 2.5
purge:
description:
- Purge VRFs not defined in the I(aggregate) parameter.
type: bool
default: 'no'
version_added: 2.5
state:
description:
- Manages desired state of the resource.
default: present
choices: ['present','absent']
description:
description:
- Description of the VRF or keyword 'default'.
delay:
description:
- Time in seconds to wait before checking for the operational state on remote
device. This wait is applicable for operational state arguments.
default: 10
'''
EXAMPLES = '''
- name: Ensure ntc VRF exists on switch
nxos_vrf:
name: ntc
description: testing
state: present
- name: Aggregate definition of VRFs
nxos_vrf:
aggregate:
- { name: test1, description: Testing, admin_state: down }
- { name: test2, interfaces: Ethernet1/2 }
- name: Aggregate definitions of VRFs with Purge
nxos_vrf:
aggregate:
- { name: ntc1, description: purge test1 }
- { name: ntc2, description: purge test2 }
state: present
purge: yes
- name: Delete VRFs exist on switch
nxos_vrf:
aggregate:
- { name: ntc1 }
- { name: ntc2 }
state: absent
- name: Assign interfaces to VRF declaratively
nxos_vrf:
name: test1
interfaces:
- Ethernet2/3
- Ethernet2/5
- name: Check interfaces assigned to VRF
nxos_vrf:
name: test1
associated_interfaces:
- Ethernet2/3
- Ethernet2/5
- name: Ensure VRF is tagged with interface Ethernet2/5 only (Removes from Ethernet2/3)
nxos_vrf:
name: test1
interfaces:
- Ethernet2/5
- name: Delete VRF
nxos_vrf:
name: ntc
state: absent
'''
RETURN = '''
commands:
description: commands sent to the device
returned: always
type: list
sample:
- vrf context ntc
- no shutdown
- interface Ethernet1/2
- no switchport
- vrf member test2
'''
import re
import time
from copy import deepcopy
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.network.nxos.nxos import load_config, run_commands
from ansible.module_utils.network.nxos.nxos import nxos_argument_spec, get_interface_type
from ansible.module_utils.network.common.utils import remove_default_spec
def search_obj_in_list(name, lst):
for o in lst:
if o['name'] == name:
return o
def execute_show_command(command, module):
if 'show run' not in command:
output = 'json'
else:
output = 'text'
cmds = [{
'command': command,
'output': output,
}]
body = run_commands(module, cmds)
return body
def get_existing_vrfs(module):
objs = list()
command = "show vrf all"
try:
body = execute_show_command(command, module)[0]
except IndexError:
return list()
try:
vrf_table = body['TABLE_vrf']['ROW_vrf']
except (TypeError, IndexError, KeyError):
return list()
if isinstance(vrf_table, list):
for vrf in vrf_table:
obj = {}
obj['name'] = vrf['vrf_name']
objs.append(obj)
elif isinstance(vrf_table, dict):
obj = {}
obj['name'] = vrf_table['vrf_name']
objs.append(obj)
return objs
def map_obj_to_commands(updates, module):
commands = list()
want, have = updates
state = module.params['state']
purge = module.params['purge']
args = ('rd', 'description', 'vni')
for w in want:
name = w['name']
admin_state = w['admin_state']
vni = w['vni']
interfaces = w.get('interfaces') or []
state = w['state']
del w['state']
obj_in_have = search_obj_in_list(name, have)
if state == 'absent' and obj_in_have:
commands.append('no vrf context {0}'.format(name))
elif state == 'present':
if not obj_in_have:
commands.append('vrf context {0}'.format(name))
for item in args:
candidate = w.get(item)
if candidate and candidate != 'default':
cmd = item + ' ' + str(candidate)
commands.append(cmd)
if admin_state == 'up':
commands.append('no shutdown')
elif admin_state == 'down':
commands.append('shutdown')
commands.append('exit')
if interfaces and interfaces[0] != 'default':
for i in interfaces:
commands.append('interface {0}'.format(i))
if get_interface_type(i) in ('ethernet', 'portchannel'):
commands.append('no switchport')
commands.append('vrf member {0}'.format(name))
else:
# If vni is already configured on vrf, unconfigure it first.
if vni:
if obj_in_have.get('vni') and vni != obj_in_have.get('vni'):
commands.append('no vni {0}'.format(obj_in_have.get('vni')))
for item in args:
candidate = w.get(item)
if candidate == 'default':
if obj_in_have.get(item):
cmd = 'no ' + item + ' ' + obj_in_have.get(item)
commands.append(cmd)
elif candidate and candidate != obj_in_have.get(item):
cmd = item + ' ' + str(candidate)
commands.append(cmd)
if admin_state and admin_state != obj_in_have.get('admin_state'):
if admin_state == 'up':
commands.append('no shutdown')
elif admin_state == 'down':
commands.append('shutdown')
if commands:
commands.insert(0, 'vrf context {0}'.format(name))
commands.append('exit')
if interfaces and interfaces[0] != 'default':
if not obj_in_have['interfaces']:
for i in interfaces:
commands.append('vrf context {0}'.format(name))
commands.append('exit')
commands.append('interface {0}'.format(i))
if get_interface_type(i) in ('ethernet', 'portchannel'):
commands.append('no switchport')
commands.append('vrf member {0}'.format(name))
elif set(interfaces) != set(obj_in_have['interfaces']):
missing_interfaces = list(set(interfaces) - set(obj_in_have['interfaces']))
for i in missing_interfaces:
commands.append('vrf context {0}'.format(name))
commands.append('exit')
commands.append('interface {0}'.format(i))
if get_interface_type(i) in ('ethernet', 'portchannel'):
commands.append('no switchport')
commands.append('vrf member {0}'.format(name))
superfluous_interfaces = list(set(obj_in_have['interfaces']) - set(interfaces))
for i in superfluous_interfaces:
commands.append('vrf context {0}'.format(name))
commands.append('exit')
commands.append('interface {0}'.format(i))
if get_interface_type(i) in ('ethernet', 'portchannel'):
commands.append('no switchport')
commands.append('no vrf member {0}'.format(name))
elif interfaces and interfaces[0] == 'default':
if obj_in_have['interfaces']:
for i in obj_in_have['interfaces']:
commands.append('vrf context {0}'.format(name))
commands.append('exit')
commands.append('interface {0}'.format(i))
if get_interface_type(i) in ('ethernet', 'portchannel'):
commands.append('no switchport')
commands.append('no vrf member {0}'.format(name))
if purge:
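# Remove every VRF currently on the device (except the built-in
# 'default' and 'management' VRFs) that is absent from the desired list.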
existing = get_existing_vrfs(module)
if existing:
for h in existing:
if h['name'] in ('default', 'management'):
pass
else:
obj_in_want = search_obj_in_list(h['name'], want)
if not obj_in_want:
commands.append('no vrf context {0}'.format(h['name']))
return commands
def validate_vrf(name, module):
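# NOTE: name can be None when an aggregate entry omits it (e.g. '- {}');
# len(name) below then raises TypeError (the traceback in this issue).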
if name == 'default':
module.fail_json(msg='cannot use default as name of a VRF')
elif len(name) > 32:
module.fail_json(msg='VRF name exceeded max length of 32', name=name)
else:
return name
def map_params_to_obj(module):
obj = []
aggregate = module.params.get('aggregate')
if aggregate:
for item in aggregate:
for key in item:
if item.get(key) is None:
item[key] = module.params[key]
d = item.copy()
d['name'] = validate_vrf(d['name'], module)
obj.append(d)
else:
obj.append({
'name': validate_vrf(module.params['name'], module),
'description': module.params['description'],
'vni': module.params['vni'],
'rd': module.params['rd'],
'admin_state': module.params['admin_state'],
'state': module.params['state'],
'interfaces': module.params['interfaces'],
'associated_interfaces': module.params['associated_interfaces']
})
return obj
def get_value(arg, config, module):
extra_arg_regex = re.compile(r'(?:{0}\s)(?P<value>.*)$'.format(arg), re.M)
value = ''
if arg in config:
value = extra_arg_regex.search(config).group('value')
return value
def map_config_to_obj(want, element_spec, module):
objs = list()
for w in want:
obj = deepcopy(element_spec)
del obj['delay']
del obj['state']
command = 'show vrf {0}'.format(w['name'])
try:
body = execute_show_command(command, module)[0]
vrf_table = body['TABLE_vrf']['ROW_vrf']
except (TypeError, IndexError):
return list()
name = vrf_table['vrf_name']
obj['name'] = name
obj['admin_state'] = vrf_table['vrf_state'].lower()
command = 'show run all | section vrf.context.{0}'.format(name)
body = execute_show_command(command, module)[0]
extra_params = ['vni', 'rd', 'description']
for param in extra_params:
obj[param] = get_value(param, body, module)
obj['interfaces'] = []
command = 'show vrf {0} interface'.format(name)
try:
body = execute_show_command(command, module)[0]
vrf_int = body['TABLE_if']['ROW_if']
except (TypeError, IndexError):
vrf_int = None
if vrf_int:
if isinstance(vrf_int, list):
for i in vrf_int:
intf = i['if_name']
obj['interfaces'].append(intf)
elif isinstance(vrf_int, dict):
intf = vrf_int['if_name']
obj['interfaces'].append(intf)
objs.append(obj)
return objs
def check_declarative_intent_params(want, module, element_spec, result):
have = None
is_delay = False
for w in want:
if w.get('associated_interfaces') is None:
continue
if result['changed'] and not is_delay:
time.sleep(module.params['delay'])
is_delay = True
if have is None:
have = map_config_to_obj(want, element_spec, module)
for i in w['associated_interfaces']:
obj_in_have = search_obj_in_list(w['name'], have)
if obj_in_have:
interfaces = obj_in_have.get('interfaces')
if interfaces is not None and i not in interfaces:
module.fail_json(msg="Interface %s not configured on vrf %s" % (i, w['name']))
def vrf_error_check(module, commands, responses):
"""Checks for VRF config errors and executes a retry in some circumstances.
"""
pattern = 'ERROR: Deletion of VRF .* in progress'
if re.search(pattern, str(responses)):
# Allow delay/retry for VRF changes
time.sleep(15)
responses = load_config(module, commands, opts={'catch_clierror': True})
if re.search(pattern, str(responses)):
module.fail_json(msg='VRF config (and retry) failure: %s ' % responses)
module.warn('VRF config delayed by VRF deletion - passed on retry')
def main():
""" main entry point for module execution
"""
element_spec = dict(
name=dict(type='str', aliases=['vrf']),
description=dict(type='str'),
vni=dict(type='str'),
rd=dict(type='str'),
admin_state=dict(type='str', default='up', choices=['up', 'down']),
interfaces=dict(type='list'),
associated_interfaces=dict(type='list'),
delay=dict(type='int', default=10),
state=dict(type='str', default='present', choices=['present', 'absent']),
)
aggregate_spec = deepcopy(element_spec)
# remove default in aggregate spec, to handle common arguments
remove_default_spec(aggregate_spec)
argument_spec = dict(
aggregate=dict(type='list', elements='dict', options=aggregate_spec),
purge=dict(type='bool', default=False),
)
argument_spec.update(element_spec)
argument_spec.update(nxos_argument_spec)
required_one_of = [['name', 'aggregate']]
mutually_exclusive = [['name', 'aggregate']]
module = AnsibleModule(argument_spec=argument_spec,
required_one_of=required_one_of,
mutually_exclusive=mutually_exclusive,
supports_check_mode=True)
warnings = list()
result = {'changed': False}
if warnings:
result['warnings'] = warnings
want = map_params_to_obj(module)
have = map_config_to_obj(want, element_spec, module)
commands = map_obj_to_commands((want, have), module)
result['commands'] = commands
if commands and not module.check_mode:
responses = load_config(module, commands, opts={'catch_clierror': True})
vrf_error_check(module, commands, responses)
result['changed'] = True
check_declarative_intent_params(want, module, element_spec, result)
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,271 |
win_user_right module doesn't support none or 'null' as a data type
|
##### SUMMARY
I have a requirement to ensure that certain Windows user rights policies are set to none.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
win_user_right
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
The win_user_right module is the module that should be used in this case. However, win_user_right doesn't support this specific use case of setting a policy to 'null'. I believe the crux of the problem is that when a policy is set to 'null', `$lsa_helper.EnumerateAccountsWithUserRight($name)` will fail to find the permission (i.e. policy).
I would suggest an enhancement that would support the following playbook.
```yaml
win_user_right:
name: SeDenyRemoteInteractiveLogonRight
action: none
```
Notice that in the above playbook I'm suggesting adding another action of type `none`. If this action were specified, then `users` would not be required. Obviously, the TRY/CATCH block around `$lsa_helper.EnumerateAccountsWithUserRight($name)` would need to be adjusted to account for the failure to find the permission (i.e. name) specified.
I might have time to get to this if no one else picks it up before I can.
|
https://github.com/ansible/ansible/issues/66271
|
https://github.com/ansible/ansible/pull/66315
|
891c759832d48c3a30f59fedadf6e7016167e897
|
93bfa4b07ad30cce58a30ffda5db866895aad075
| 2020-01-08T15:33:32Z |
python
| 2020-01-09T20:33:00Z |
lib/ansible/modules/windows/win_user_right.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# this is a windows documentation stub. actual code lives in the .ps1
# file of the same name
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: win_user_right
version_added: '2.4'
short_description: Manage Windows User Rights
description:
- Add, remove or set User Rights for a user or group, or for multiple users or groups.
- You can set user rights for both local and domain accounts.
options:
name:
description:
- The name of the User Right as shown by the C(Constant Name) value from
U(https://technet.microsoft.com/en-us/library/dd349804.aspx).
- The module will return an error if the right is invalid.
type: str
required: yes
users:
description:
- A list of users or groups to add/remove on the User Right.
- These can be in the form DOMAIN\user-group, [email protected] for
domain users/groups.
- For local users/groups it can be in the form user-group, .\user-group,
SERVERNAME\user-group where SERVERNAME is the name of the remote server.
- You can also add special local accounts like SYSTEM and others.
type: list
required: yes
action:
description:
- C(add) will add the users/groups to the existing right.
- C(remove) will remove the users/groups from the existing right.
- C(set) will replace the users/groups of the existing right.
type: str
default: set
choices: [ add, remove, set ]
notes:
- If the server is domain joined this module can change a right but if a GPO
governs this right then the changes won't last.
seealso:
- module: win_group
- module: win_group_membership
- module: win_user
author:
- Jordan Borean (@jborean93)
'''
EXAMPLES = r'''
---
- name: Replace the entries of Deny log on locally
win_user_right:
name: SeDenyInteractiveLogonRight
users:
- Guest
- Users
action: set
- name: Add account to Log on as a service
win_user_right:
name: SeServiceLogonRight
users:
- .\Administrator
- '{{ansible_hostname}}\local-user'
action: add
- name: Remove accounts who can create Symbolic links
win_user_right:
name: SeCreateSymbolicLinkPrivilege
users:
- SYSTEM
- Administrators
- DOMAIN\User
- [email protected]
action: remove
'''
RETURN = r'''
added:
description: A list of accounts that were added to the right, this is empty
if no accounts were added.
returned: success
type: list
sample: ["NT AUTHORITY\\SYSTEM", "DOMAIN\\User"]
removed:
description: A list of accounts that were removed from the right, this is
empty if no accounts were removed.
returned: success
type: list
sample: ["SERVERNAME\\Administrator", "BUILTIN\\Administrators"]
'''
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,656 |
Honor explicitly marking a parameter as not sensitive.
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
The way that the `no_log` parameter is currently implemented in `AnsibleModule._log_invocation` is overly simplistic, and appears to violate the principle of least astonishment due to its overly brief documentation failing to mention the corner cases for which it appears to have been written. As `no_log` is currently implemented, it will:
* Not log a parameter value if `no_log` is truthy.
* Not log a parameter value and complain if the parameter name matches `PASSWORD_MATCH` and is neither a `bool` nor an enum.
* Otherwise, log the parameter.
On the surface (as a module writer), the documentation (see: [Protecting sensitive data with `no_log`](https://docs.ansible.com/ansible/devel/reference_appendices/logging.html?highlight=no_log#protecting-sensitive-data-with-no-log)) states that by passing `no_log=True` you are explicitly marking a parameter as sensitive. This makes sense. It would then follow that by passing `no_log=False` you are explicitly marking a parameter as not sensitive. This is where the current implementation fails. There is currently no way (that I can see) for a module author that has properly developed and tested a module to avoid a warning regarding the `no_log` parameter if the parameter matches `PASSWORD_MATCH` (at least, not that I can see in the documentation or HEAD.)
I certainly understand the need to protect users from the whims of careless third party module authors (some of which may have a tenuous grasp on Python at best), but for those of us that write and maintain our own modules professionally it seems like a rather strange choice to provide no avenue for a knowledgeable module author to say, "no, pass_interval really is not a password, and no, there is no other name that would work without causing excessive end-user confusion".
The feature request is basically: can `AnsibleModule._log_invocation` be adjusted to actually honor `no_log=False` as the opposite of `no_log=True`, which it currently does not. I do not believe it would be difficult to implement while maintaining the current behavior where a module author that does not specify either way would be caught by `PASSWORD_MATCH`.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
lib/ansible/module_utils/basic.py
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
This change would not be user facing (typically) unless you're a module author. It is needed because, at the current time, there is no way to mark a parameter as explicitly not sensitive (as `no_log=False` is not being honored effectively) and avoid erroneous messages in the logs. Some authors have been getting around this by choosing more obscure names that diverge from current documentation for their parameters, but you're basically hiding from the problem at that point, and confusing end-users.
<!--- Paste example playbooks or commands between quotes below -->
<!--- HINT: You can also paste gist.github.com links for larger files -->
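A rough sketch of the decision logic being requested (the `PASSWORD_MATCH` regex here is a simplified stand-in for the one in `module_utils/basic.py`, and the integration point is an assumption, not the actual implementation):

```python
import re

# Simplified stand-in for the PASSWORD_MATCH heuristic in module_utils/basic.py.
PASSWORD_MATCH = re.compile(r'^(?:.+[-_\s])?pass(?:[-_\s]?(?:word|phrase|wd)?)(?:[-_\s].+)?$', re.I)

def value_for_log(param, value, arg_opts):
    """Return (loggable_value, warning_or_None) for a single module parameter."""
    no_log = arg_opts.get('no_log')
    if no_log:
        # Explicitly marked sensitive: never log the value.
        return 'NOT_LOGGING_PARAMETER', None
    if no_log is False:
        # Explicitly marked NOT sensitive: log it and skip the heuristic warning.
        return value, None
    if PASSWORD_MATCH.search(param) and arg_opts.get('type') != 'bool' and not arg_opts.get('choices'):
        # no_log unspecified and the name looks secret-like: hide the value and warn.
        return 'NOT_LOGGING_PASSWORD', 'Module did not set no_log for %s' % param
    return value, None
```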
Duplicate of a proposed solution to https://github.com/ansible/ansible/issues/49465
|
https://github.com/ansible/ansible/issues/64656
|
https://github.com/ansible/ansible/pull/64733
|
93bfa4b07ad30cce58a30ffda5db866895aad075
|
3ca4580cb4e2a24597c6c5108bf76bbcd48069f8
| 2019-11-10T15:28:11Z |
python
| 2020-01-09T21:47:57Z |
changelogs/fragments/64733-make-no_log-false-override-no_log-warnings.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,656 |
Honor explicitly marking a parameter as not sensitive.
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
The way that the `no_log` parameter is currently implemented in `AnsibleModule._log_invocation` is overly simplistic, and appears to violate the principle of least astonishment due to its overly brief documentation failing to mention the corner cases for which it appears to have been written. As `no_log` is currently implemented, it will:
* Not log a parameter value if `no_log` is truthy.
* Not log a parameter value and complain if the parameter name matches `PASSWORD_MATCH` and is neither a `bool` nor an enum.
* Otherwise, log the parameter.
On the surface (as a module writer), the documentation (see: [Protecting sensitive data with `no_log`](https://docs.ansible.com/ansible/devel/reference_appendices/logging.html?highlight=no_log#protecting-sensitive-data-with-no-log)) states that by passing `no_log=True` you are explicitly marking a parameter as sensitive. This makes sense. It would then follow that by passing `no_log=False` you are explicitly marking a parameter as not sensitive. This is where the current implementation fails. There is currently no way (that I can see) for a module author that has properly developed and tested a module to avoid a warning regarding the `no_log` parameter if the parameter matches `PASSWORD_MATCH` (at least, not that I can see in the documentation or HEAD.)
I certainly understand the need to protect users from the whims of careless third party module authors (some of which may have a tenuous grasp on Python at best), but for those of us that write and maintain our own modules professionally it seems like a rather strange choice to provide no avenue for a knowledgeable module author to say, "no, pass_interval really is not a password, and no, there is no other name that would work without causing excessive end-user confusion".
The feature request is basically: can `AnsibleModule._log_invocation` be adjusted to actually honor `no_log=False` as the opposite of `no_log=True`, which it currently does not. I do not believe it would be difficult to implement while maintaining the current behavior where a module author that does not specify either way would be caught by `PASSWORD_MATCH`.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
lib/ansible/module_utils/basic.py
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
This change would not be user facing (typically) unless you're a module author. It is needed because, at the current time, there is no way to mark a parameter as explicitly not sensitive (as `no_log=False` is not being honored effectively) and avoid erroneous messages in the logs. Some authors have been getting around this by choosing more obscure names that diverge from current documentation for their parameters, but you're basically hiding from the problem at that point, and confusing end-users.
<!--- Paste example playbooks or commands between quotes below -->
<!--- HINT: You can also paste gist.github.com links for larger files -->
Duplicate of a proposed solution to https://github.com/ansible/ansible/issues/49465
|
https://github.com/ansible/ansible/issues/64656
|
https://github.com/ansible/ansible/pull/64733
|
93bfa4b07ad30cce58a30ffda5db866895aad075
|
3ca4580cb4e2a24597c6c5108bf76bbcd48069f8
| 2019-11-10T15:28:11Z |
python
| 2020-01-09T21:47:57Z |
docs/docsite/rst/dev_guide/developing_modules_documenting.rst
|
.. _developing_modules_documenting:
.. _module_documenting:
*******************************
Module format and documentation
*******************************
If you want to contribute your module to Ansible, you must write your module in Python and follow the standard format described below. (Unless you're writing a Windows module, in which case the :ref:`Windows guidelines <developing_modules_general_windows>` apply.) In addition to following this format, you should review our :ref:`submission checklist <developing_modules_checklist>`, :ref:`programming tips <developing_modules_best_practices>`, and :ref:`strategy for maintaining Python 2 and Python 3 compatibility <developing_python_3>`, as well as information about :ref:`testing <developing_testing>` before you open a pull request.
Every Ansible module written in Python must begin with seven standard sections in a particular order, followed by the code. The sections in order are:
.. contents::
:depth: 1
:local:
.. note:: Why don't the imports go first?
Keen Python programmers may notice that contrary to PEP 8's advice we don't put ``imports`` at the top of the file. This is because the ``ANSIBLE_METADATA`` through ``RETURN`` sections are not used by the module code itself; they are essentially extra docstrings for the file. The imports are placed after these special variables for the same reason as PEP 8 puts the imports after the introductory comments and docstrings. This keeps the active parts of the code together and the pieces which are purely informational apart. The decision to exclude E402 is based on readability (which is what PEP 8 is about). Documentation strings in a module are much more similar to module level docstrings than code, and are never utilized by the module itself. Placing the imports below this documentation and closer to the code consolidates and groups all related code in a congruent manner to improve readability, debugging and understanding.
.. warning:: **Copy old modules with care!**
Some older modules in Ansible Core have ``imports`` at the bottom of the file, ``Copyright`` notices with the full GPL prefix, and/or ``ANSIBLE_METADATA`` fields in the wrong order. These are legacy files that need updating - do not copy them into new modules. Over time we're updating and correcting older modules. Please follow the guidelines on this page!
.. _shebang:
Python shebang & UTF-8 coding
===============================
Every Ansible module must begin with ``#!/usr/bin/python`` - this "shebang" allows ``ansible_python_interpreter`` to work.
This is immediately followed by ``# -*- coding: utf-8 -*-`` to clarify that the file is UTF-8 encoded.
.. _copyright:
Copyright and license
=====================
After the shebang and UTF-8 coding, there should be a `copyright line <https://www.gnu.org/licenses/gpl-howto.en.html>`_ with the original copyright holder and a license declaration. The license declaration should be ONLY one line, not the full GPL prefix:
.. code-block:: python
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2018, Terry Jones <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
Major additions to the module (for instance, rewrites) may add additional copyright lines. Any legal review will include the source control history, so an exhaustive copyright header is not necessary. When adding a second copyright line for a significant feature or rewrite, add the newer line above the older one:
.. code-block:: python
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2017, [New Contributor(s)]
# Copyright: (c) 2015, [Original Contributor(s)]
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
.. _ansible_metadata_block:
ANSIBLE_METADATA block
======================
After the shebang, the UTF-8 coding, the copyright, and the license, your module file should contain an ``ANSIBLE_METADATA`` section. This section provides information about the module for use by other tools. For new modules, the following block can be simply added into your module:
.. code-block:: python
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
.. warning::
* ``metadata_version`` is the version of the ``ANSIBLE_METADATA`` schema, *not* the version of the module.
* Promoting a module's ``status`` or ``supported_by`` status should only be done by members of the Ansible Core Team.
Ansible metadata fields
-----------------------
:metadata_version: An "X.Y" formatted string. X and Y are integers which
define the metadata format version. Modules shipped with Ansible are
tied to an Ansible release, so we will only ship with a single version
of the metadata. We'll increment Y if we add fields or legal values
to an existing field. We'll increment X if we remove fields or values
or change the type or meaning of a field.
Current metadata_version is "1.1"
:supported_by: Who supports the module.
Default value is ``community``. For information on what the support level values entail, please see
:ref:`Modules Support <modules_support>`. Values are:
* core
* network
* certified
* community
* curated (*deprecated value - modules in this category should be core or
certified instead*)
:status: List of strings describing how stable the module is likely to be. See also :ref:`module_lifecycle`.
The default value is a single element list ["preview"]. The following strings are valid
statuses and have the following meanings:
:stableinterface: The module's options (the parameters or arguments it accepts) are stable. Every effort will be made not to remove options or change
their meaning. **Not** a rating of the module's code quality.
:preview: The module is in tech preview. It may be
unstable, the options may change, or it may require libraries or
web services that are themselves subject to incompatible changes.
:deprecated: The module is deprecated and will be removed in a future release.
:removed: The module is not present in the release. A stub is
kept so that documentation can be built. The documentation helps
users port from the removed module to new modules.
.. _documentation_block:
DOCUMENTATION block
===================
After the shebang, the UTF-8 coding, the copyright line, the license, and the ``ANSIBLE_METADATA`` section comes the ``DOCUMENTATION`` block. Ansible's online module documentation is generated from the ``DOCUMENTATION`` blocks in each module's source code. The ``DOCUMENTATION`` block must be valid YAML. You may find it easier to start writing your ``DOCUMENTATION`` string in an :ref:`editor with YAML syntax highlighting <other_tools_and_programs>` before you include it in your Python file. You can start by copying our `example documentation string <https://github.com/ansible/ansible/blob/devel/examples/DOCUMENTATION.yml>`_ into your module file and modifying it. If you run into syntax issues in your YAML, you can validate it on the `YAML Lint <http://www.yamllint.com/>`_ website.
Module documentation should briefly and accurately define what each module and option does, and how it works with others in the underlying system. Documentation should be written for a broad audience, readable both by experts and non-experts.
* Descriptions should always start with a capital letter and end with a full stop. Consistency always helps.
* Verify that arguments in doc and module spec dict are identical.
* For password / secret arguments, ``no_log=True`` should be set.
* If an option is only sometimes required, describe the conditions. For example, "Required when I(state=present)."
* If your module allows ``check_mode``, reflect this fact in the documentation.
Each documentation field is described below. Before committing your module documentation, please test it at the command line and as HTML:
* As long as your module file is :ref:`available locally <local_modules>`, you can use ``ansible-doc -t module my_module_name`` to view your module documentation at the command line. Any parsing errors will be obvious - you can view details by adding ``-vvv`` to the command.
* You should also :ref:`test the HTML output <testing_module_documentation>` of your module documentation.
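For orientation, here is a minimal, illustrative ``DOCUMENTATION`` string that exercises the most common fields described below (the module and option names are placeholders):

.. code-block:: python

    DOCUMENTATION = r'''
    ---
    module: my_test
    short_description: Greet a user by name
    description:
      - A longer description, written in full sentences.
    version_added: '2.10'
    author:
      - First Last (@githubid)
    options:
      name:
        description:
          - The name of the user to greet.
        type: str
        required: true
    '''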
Documentation fields
--------------------
All fields in the ``DOCUMENTATION`` block are lower-case. All fields are required unless specified otherwise:
:module:
* The name of the module.
* Must be the same as the filename, without the ``.py`` extension.
:short_description:
* A short description which is displayed on the :ref:`all_modules` page and ``ansible-doc -l``.
* The ``short_description`` is displayed by ``ansible-doc -l`` without any category grouping,
so it needs enough detail to explain the module's purpose without the context of the directory structure in which it lives.
* Unlike ``description:``, ``short_description`` should not have a trailing period/full stop.
:description:
* A detailed description (generally two or more sentences).
* Must be written in full sentences, i.e. with capital letters and periods/full stops.
* Shouldn't mention the module name.
* Make use of multiple entries rather than using one long paragraph.
* Don't quote complete values unless it is required by YAML.
:version_added:
* The version of Ansible when the module was added.
* This is a string, and not a float, i.e. ``version_added: '2.1'``
:author:
* Name of the module author in the form ``First Last (@GitHubID)``.
* Use a multi-line list if there is more than one author.
* Don't use quotes as it should not be required by YAML.
:deprecated:
* Marks modules that will be removed in future releases. See also :ref:`module_lifecycle`.
:options:
* Options are often called `parameters` or `arguments`. Because the documentation field is called `options`, we will use that term.
* If the module has no options (for example, it's a ``_facts`` module), all you need is one line: ``options: {}``.
* If your module has options (in other words, accepts arguments), each option should be documented thoroughly. For each module option, include:
:option-name:
* Declarative operation (not CRUD), to focus on the final state, for example `online:`, rather than `is_online:`.
* The name of the option should be consistent with the rest of the module, as well as other modules in the same category.
* When in doubt, look for other modules to find option names that are used for the same purpose, we like to offer consistency to our users.
:description:
* Detailed explanation of what this option does. It should be written in full sentences.
* The first entry is a description of the option itself; subsequent entries detail its use, dependencies, or format of possible values.
* Should not list the possible values (that's what ``choices:`` is for, though it should explain what the values do if they aren't obvious).
* If an option is only sometimes required, describe the conditions. For example, "Required when I(state=present)."
* Mutually exclusive options must be documented as the final sentence on each of the options.
:required:
* Only needed if ``true``.
* If missing, we assume the option is not required.
:default:
* If ``required`` is false/missing, ``default`` may be specified (assumed 'null' if missing).
* Ensure that the default value in the docs matches the default value in the code.
* The default field must not be listed as part of the description, unless it requires additional information or conditions.
* If the option is a boolean value, you can use any of the boolean values recognized by Ansible:
(such as true/false or yes/no). Choose the one that reads better in the context of the option.
:choices:
* List of option values.
* Should be absent if empty.
:type:
* Specifies the data type that option accepts, must match the ``argspec``.
* If an argument is ``type='bool'``, this field should be set to ``type: bool`` and no ``choices`` should be specified.
* If an argument is ``type='list'``, ``elements`` should be specified.
:elements:
* Specifies the data type for list elements in case ``type='list'``.
:aliases:
* List of optional name aliases.
* Generally not needed.
:version_added:
* Only needed if this option was extended after initial Ansible release, i.e. this is greater than the top level `version_added` field.
* This is a string, and not a float, i.e. ``version_added: '2.3'``.
:suboptions:
* If this option takes a dict or list of dicts, you can define the structure here.
* See :ref:`azure_rm_securitygroup_module`, :ref:`azure_rm_azurefirewall_module` and :ref:`os_ironic_node_module` for examples.
:requirements:
* List of requirements (if applicable).
* Include minimum versions.
:seealso:
* A list of references to other modules, documentation or Internet resources
* A reference can be one of the following formats:
.. code-block:: yaml+jinja
seealso:
# Reference by module name
- module: aci_tenant
# Reference by module name, including description
- module: aci_tenant
description: ACI module to create tenants on a Cisco ACI fabric.
# Reference by rST documentation anchor
- ref: aci_guide
description: Detailed information on how to manage your ACI infrastructure using Ansible.
# Reference by Internet resource
- name: APIC Management Information Model reference
description: Complete reference of the APIC object model.
link: https://developer.cisco.com/docs/apic-mim-ref/
:notes:
* Details of any important information that doesn't fit in one of the above sections.
* For example, whether ``check_mode`` is or is not supported.
Linking within module documentation
-----------------------------------
You can link from your module documentation to other module docs, other resources on docs.ansible.com, and resources elsewhere on the internet. The correct formats for these links are:
* ``L()`` for Links with a heading. For example: ``See L(IOS Platform Options guide,../network/user_guide/platform_ios.html).``
* ``U()`` for URLs. For example: ``See U(https://www.ansible.com/products/tower) for an overview.``
* ``I()`` for option names. For example: ``Required if I(state=present).``
* ``C()`` for files and option values. For example: ``If not set the environment variable C(ACME_PASSWORD) will be used.``
* ``M()`` for module names. For example: ``See also M(win_copy) or M(win_template).``
.. note::
For modules in a collection, you can only use ``L()`` and ``M()`` for content within that collection. Use ``U()`` to refer to content in a different collection.
.. note::
- To refer a group of modules, use ``C(..)``, e.g. ``Refer to the C(win_*) modules.``
- Because it stands out better, using ``seealso`` is preferred for general references over the use of notes or adding links to the description.
.. _module_docs_fragments:
Documentation fragments
-----------------------
If you're writing multiple related modules, they may share common documentation, such as authentication details, file mode settings, ``notes:`` or ``seealso:`` entries. Rather than duplicate that information in each module's ``DOCUMENTATION`` block, you can save it once as a doc_fragment plugin and use it in each module's documentation. In Ansible, shared documentation fragments are contained in a ``ModuleDocFragment`` class in `lib/ansible/plugins/doc_fragments/ <https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/doc_fragments>`_. To include a documentation fragment, add ``extends_documentation_fragment: FRAGMENT_NAME`` in your module's documentation.
Modules should only use items from a doc fragment if the module will implement all of the interface documented there in a manner that behaves the same as the existing modules which import that fragment. The goal is that items imported from the doc fragment will behave identically when used in another module that imports the doc fragment.
By default, only the ``DOCUMENTATION`` property from a doc fragment is inserted into the module documentation. It is possible to define additional properties in the doc fragment in order to import only certain parts of a doc fragment or mix and match as appropriate. If a property is defined in both the doc fragment and the module, the module value overrides the doc fragment.
Here is an example doc fragment named ``example_fragment.py``:
.. code-block:: python
class ModuleDocFragment(object):
# Standard documentation
DOCUMENTATION = r'''
options:
# options here
'''
# Additional section
OTHER = r'''
options:
# other options here
'''
To insert the contents of ``OTHER`` in a module:
.. code-block:: yaml+jinja
extends_documentation_fragment: example_fragment.other
Or use both:
.. code-block:: yaml+jinja
extends_documentation_fragment:
- example_fragment
- example_fragment.other
.. note::
   Prior to Ansible 2.8, documentation fragments were kept in ``lib/ansible/utils/module_docs_fragments``.

.. versionadded:: 2.8
   Since Ansible 2.8, you can have user-supplied doc_fragments by using a ``doc_fragments`` directory adjacent to play or role, just like any other plugin.
For example, all AWS modules should include:
.. code-block:: yaml+jinja
extends_documentation_fragment:
- aws
- ec2
.. _examples_block:
EXAMPLES block
==============
After the shebang, the UTF-8 coding, the copyright line, the license, the ``ANSIBLE_METADATA`` section, and the ``DOCUMENTATION`` block comes the ``EXAMPLES`` block. Here you show users how your module works with real-world examples in multi-line plain-text YAML format. The best examples are ready for the user to copy and paste into a playbook. Review and update your examples with every change to your module.
Per playbook best practices, each example should include a ``name:`` line::
EXAMPLES = r'''
- name: Ensure foo is installed
modulename:
name: foo
state: present
'''
The ``name:`` line should be capitalized and not include a trailing dot.
If your examples use boolean options, use yes/no values. Since the documentation generates boolean values as yes/no, having the examples use these values as well makes the module documentation more consistent.
If your module returns facts that are often needed, an example of how to use them can be helpful.
.. _return_block:
RETURN block
============
After the shebang, the UTF-8 coding, the copyright line, the license, the ``ANSIBLE_METADATA`` section, ``DOCUMENTATION`` and ``EXAMPLES`` blocks comes the ``RETURN`` block. This section documents the information the module returns for use by other modules.
If your module doesn't return anything (apart from the standard returns), this section of your module should read: ``RETURN = r''' # '''``
Otherwise, for each value returned, provide the following fields. All fields are required unless specified otherwise.
:return name:
Name of the returned field.
:description:
Detailed description of what this value represents. Capitalized and with trailing dot.
:returned:
When this value is returned, such as ``always``, ``changed`` or ``success``. This is a string and can contain any human-readable content.
:type:
Data type.
:elements:
If ``type='list'``, specifies the data type of the list's elements.
:sample:
One or more examples.
:version_added:
Only needed if this return was extended after initial Ansible release, i.e. this is greater than the top level `version_added` field.
This is a string, and not a float, i.e. ``version_added: '2.3'``.
:contains:
Optional. To describe nested return values, set ``type: complex``, ``type: dict``, or ``type: list``/``elements: dict`` and repeat the elements above for each sub-field.
Here are two example ``RETURN`` sections, one with three simple fields and one with a complex nested field::
RETURN = r'''
dest:
description: Destination file/path.
returned: success
type: str
sample: /path/to/file.txt
src:
description: Source file used for the copy on the target machine.
returned: changed
type: str
sample: /home/httpd/.ansible/tmp/ansible-tmp-1423796390.97-147729857856000/source
md5sum:
description: MD5 checksum of the file after running copy.
returned: when supported
type: str
sample: 2a5aeecc61dc98c4d780b14b330e3282
'''
RETURN = r'''
packages:
description: Information about package requirements
returned: success
type: complex
contains:
missing:
description: Packages that are missing from the system
returned: success
type: list
sample:
- libmysqlclient-dev
- libxml2-dev
badversion:
description: Packages that are installed but at bad versions.
returned: success
type: list
sample:
- package: libxml2-dev
version: 2.9.4+dfsg1-2
constraint: ">= 3.0"
'''
.. _python_imports:
Python imports
==============
After the shebang, the UTF-8 coding, the copyright line, the license, and the sections for ``ANSIBLE_METADATA``, ``DOCUMENTATION``, ``EXAMPLES``, and ``RETURN``, you can finally add the python imports. All modules must use Python imports in the form:
.. code-block:: python
    from ansible.module_utils.basic import AnsibleModule

The use of "wildcard" imports such as ``from ansible.module_utils.basic import *`` is no longer allowed.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,656 |
Honor explicitly marking a parameter as not sensitive.
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
The way that the `no_log` parameter is currently implemented in `AnsibleModule._log_invocation` is overly simplistic, and appears to violate the principle of least astonishment due to its overly brief documentation failing to mention the corner cases for which it appears to have been written. As `no_log` is currently implemented, it will:
* Not log a parameter value if `no_log` is truthy.
* Not log a parameter value and complain if the parameter name matches `PASSWORD_MATCH` and is neither a `bool` nor an enum.
* Otherwise, log the parameter.
On the surface (as a module writer), the documentation (see: [Protecting sensitive data with `no_log`](https://docs.ansible.com/ansible/devel/reference_appendices/logging.html?highlight=no_log#protecting-sensitive-data-with-no-log)) states that by passing `no_log=True` you are explicitly marking a parameter as sensitive. This makes sense. It would then follow that by passing `no_log=False` you are explicitly marking a parameter as not sensitive. This is where the current implementation fails. There is currently no way (that I can see) for a module author that has properly developed and tested a module to avoid a warning regarding the `no_log` parameter if the parameter matches `PASSWORD_MATCH` (at least, not that I can see in the documentation or HEAD.)
I certainly understand the need to protect users from the whims of careless third party module authors (some of which may have a tenuous grasp on Python at best), but for those of us that write and maintain our own modules professionally it seems like a rather strange choice to provide no avenue for a knowledgeable module author to say, "no, pass_interval really is not a password, and no, there is no other name that would work without causing excessive end-user confusion".
The feature request is basically: can `AnsibleModule._log_invocation` be adjusted to actually honor `no_log=False` as the opposite of `no_log=True`, which it currently does not. I do not believe it would be difficult to implement while maintaining the current behavior where a module author that does not specify either way would be caught by `PASSWORD_MATCH`.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
lib/ansible/module_utils/basic.py
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
This change would not be user facing (typically) unless you're a module author. It is needed because, at the current time, there is no way to mark a parameter as explicitly not sensitive (as `no_log=False` is not being honored effectively) and avoid erroneous messages in the logs. Some authors have been getting around this by choosing more obscure names that diverge from current documentation for their parameters, but you're basically hiding from the problem at that point, and confusing end-users.
<!--- Paste example playbooks or commands between quotes below -->
<!--- HINT: You can also paste gist.github.com links for larger files -->
Duplicate of a proposed solution to https://github.com/ansible/ansible/issues/49465
|
https://github.com/ansible/ansible/issues/64656
|
https://github.com/ansible/ansible/pull/64733
|
93bfa4b07ad30cce58a30ffda5db866895aad075
|
3ca4580cb4e2a24597c6c5108bf76bbcd48069f8
| 2019-11-10T15:28:11Z |
python
| 2020-01-09T21:47:57Z |
docs/docsite/rst/dev_guide/developing_program_flow_modules.rst
|
.. _flow_modules:
.. _developing_program_flow_modules:
***************************
Ansible module architecture
***************************
If you're working on Ansible's Core code, writing an Ansible module, or developing an action plugin, this deep dive helps you understand how Ansible's program flow executes. If you're just using Ansible Modules in playbooks, you can skip this section.
.. contents::
:local:
.. _flow_types_of_modules:
Types of modules
================
Ansible supports several different types of modules in its code base. Some of
these are for backwards compatibility and others are to enable flexibility.
.. _flow_action_plugins:
Action plugins
--------------
Action plugins look like modules to anyone writing a playbook. Usage documentation for most action plugins lives inside a module of the same name. Some action plugins do all the work, with the module providing only documentation. Some action plugins execute modules. The ``normal`` action plugin executes modules that don't have special action plugins. Action plugins always execute on the controller.
Some action plugins do all their work on the controller. For
example, the :ref:`debug <debug_module>` action plugin (which prints text for
the user to see) and the :ref:`assert <assert_module>` action plugin (which
tests whether values in a playbook satisfy certain criteria) execute entirely on the controller.
Most action plugins set up some values on the controller, then invoke an
actual module on the managed node that does something with these values. For example, the :ref:`template <template_module>` action plugin takes values from
the user to construct a file in a temporary location on the controller using
variables from the playbook environment. It then transfers the temporary file
to a temporary file on the remote system. After that, it invokes the
:ref:`copy module <copy_module>` which operates on the remote system to move the file
into its final location, sets file permissions, and so on.
.. _flow_new_style_modules:
New-style modules
-----------------
All of the modules that ship with Ansible fall into this category. While you can write modules in any language, all official modules (shipped with Ansible) use either Python or PowerShell.
New-style modules have the arguments to the module embedded inside of them in
some manner. Old-style modules must copy a separate file over to the
managed node, which is less efficient as it requires two over-the-wire
connections instead of only one.
.. _flow_python_modules:
Python
^^^^^^
New-style Python modules use the :ref:`Ansiballz` framework for constructing
modules. These modules use imports from :code:`ansible.module_utils` to pull in
boilerplate module code, such as argument parsing, formatting of return
values as :term:`JSON`, and various file operations.
.. note:: In Ansible, up to version 2.0.x, the official Python modules used the
:ref:`module_replacer` framework. For module authors, :ref:`Ansiballz` is
largely a superset of :ref:`module_replacer` functionality, so you usually
do not need to know about one versus the other.
.. _flow_powershell_modules:
PowerShell
^^^^^^^^^^
New-style PowerShell modules use the :ref:`module_replacer` framework for
constructing modules. These modules get a library of PowerShell code embedded
in them before being sent to the managed node.
.. _flow_jsonargs_modules:
JSONARGS modules
----------------
These modules are scripts that include the string
``<<INCLUDE_ANSIBLE_MODULE_JSON_ARGS>>`` in their body.
This string is replaced with the JSON-formatted argument string. These modules typically set a variable to that value like this:
.. code-block:: python
json_arguments = """<<INCLUDE_ANSIBLE_MODULE_JSON_ARGS>>"""
Which is expanded as:
.. code-block:: python
json_arguments = """{"param1": "test's quotes", "param2": "\"To be or not to be\" - Hamlet"}"""
.. note:: Ansible outputs a :term:`JSON` string with bare quotes. Double quotes are
used to quote string values, double quotes inside of string values are
backslash escaped, and single quotes may appear unescaped inside of
a string value. To use JSONARGS, your scripting language must have a way
to handle this type of string. The example uses Python's triple quoted
strings to do this. Other scripting languages may have a similar quote
character that won't be confused by any quotes in the JSON or it may
allow you to define your own start-of-quote and end-of-quote characters.
If the language doesn't give you any of these then you'll need to write
a :ref:`non-native JSON module <flow_want_json_modules>` or
:ref:`Old-style module <flow_old_style_modules>` instead.
These modules typically parse the contents of ``json_arguments`` using a JSON
library and then use them as native variables throughout the code.
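Once Ansible has substituted the marker, a Python JSONARGS module might continue like this minimal sketch:

.. code-block:: python

    import json

    json_arguments = """<<INCLUDE_ANSIBLE_MODULE_JSON_ARGS>>"""
    # After substitution this parses into a plain dict of module parameters.
    params = json.loads(json_arguments)
    print(json.dumps({'changed': False, 'params_seen': sorted(params)}))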
.. _flow_want_json_modules:
Non-native want JSON modules
----------------------------
If a module has the string ``WANT_JSON`` in it anywhere, Ansible treats
it as a non-native module that accepts a filename as its only command line
parameter. The filename is for a temporary file containing a :term:`JSON`
string containing the module's parameters. The module needs to open the file,
read and parse the parameters, operate on the data, and print its return data
as a JSON encoded dictionary to stdout before exiting.
These types of modules are self-contained entities. As of Ansible 2.1, Ansible
only modifies them to change a shebang line if present.
.. seealso:: Examples of Non-native modules written in ruby are in the `Ansible
for Rubyists <https://github.com/ansible/ansible-for-rubyists>`_ repository.
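A minimal want-JSON module in Python might look like the following sketch; the ``WANT_JSON`` marker in a comment is enough to trigger this mode:

.. code-block:: python

    #!/usr/bin/python
    # WANT_JSON
    import json
    import sys

    def main():
        # Ansible passes the path of a temporary JSON args file as argv[1].
        with open(sys.argv[1]) as f:
            params = json.load(f)
        print(json.dumps({'changed': False, 'msg': 'received %d parameters' % len(params)}))

    if __name__ == '__main__':
        main()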
.. _flow_binary_modules:
Binary modules
--------------
From Ansible 2.2 onwards, modules may also be small binary programs. Ansible
doesn't perform any magic to make these portable to different systems so they
may be specific to the system on which they were compiled or require other
binary runtime dependencies. Despite these drawbacks, you may have
to compile a custom module against a specific binary
library if that's the only way to get access to certain resources.
Binary modules take their arguments and return data to Ansible in the same
way as :ref:`want JSON modules <flow_want_json_modules>`.
.. seealso:: One example of a `binary module
<https://github.com/ansible/ansible/blob/devel/test/integration/targets/binary_modules/library/helloworld.go>`_
written in go.
.. _flow_old_style_modules:
Old-style modules
-----------------
Old-style modules are similar to
:ref:`want JSON modules <flow_want_json_modules>`, except that the file that
they take contains ``key=value`` pairs for their parameters instead of
:term:`JSON`. Ansible decides that a module is old-style when it doesn't have
any of the markers that would show that it is one of the other types.
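For illustration, an old-style Python module could parse its arguments file like this sketch:

.. code-block:: python

    #!/usr/bin/python
    import json
    import shlex
    import sys

    def main():
        # The only command line argument is the path to a file of key=value pairs.
        with open(sys.argv[1]) as f:
            args_text = f.read()
        params = dict(tok.split('=', 1) for tok in shlex.split(args_text) if '=' in tok)
        print(json.dumps({'changed': False, 'params': params}))

    if __name__ == '__main__':
        main()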
.. _flow_how_modules_are_executed:
How modules are executed
========================
When a user uses :program:`ansible` or :program:`ansible-playbook`, they
specify a task to execute. The task is usually the name of a module along
with several parameters to be passed to the module. Ansible takes these
values and processes them in various ways before they are finally executed on
the remote machine.
.. _flow_executor_task_executor:
Executor/task_executor
----------------------
The TaskExecutor receives the module name and parameters that were parsed from
the :term:`playbook <playbooks>` (or from the command line in the case of
:command:`/usr/bin/ansible`). It uses the name to decide whether it's looking
at a module or an :ref:`Action Plugin <flow_action_plugins>`. If it's
a module, it loads the :ref:`Normal Action Plugin <flow_normal_action_plugin>`
and passes the name, variables, and other information about the task and play
to that Action Plugin for further processing.
.. _flow_normal_action_plugin:
The ``normal`` action plugin
----------------------------
The ``normal`` action plugin executes the module on the remote host. It is
the primary coordinator of much of the work to actually execute the module on
the managed machine.
* It loads the appropriate connection plugin for the task, which then transfers
or executes as needed to create a connection to that host.
* It adds any internal Ansible properties to the module's parameters (for
instance, the ones that pass along ``no_log`` to the module).
* It works with other plugins (connection, shell, become, other action plugins)
to create any temporary files on the remote machine and
cleans up afterwards.
* It pushes the module and module parameters to the
remote host, although the :ref:`module_common <flow_executor_module_common>`
code described in the next section decides which format
those will take.
* It handles any special cases regarding modules (for instance, async
execution, or complications around Windows modules that must have the same names as Python modules, so that internal calling of modules from other Action Plugins works.)
Much of this functionality comes from the `ActionBase` class,
which lives in :file:`plugins/action/__init__.py`. It uses the
``Connection`` and ``Shell`` objects to do its work.
.. note::
When :term:`tasks <tasks>` are run with the ``async:`` parameter, Ansible
uses the ``async`` Action Plugin instead of the ``normal`` Action Plugin
to invoke it. That program flow is currently not documented. Read the
source for information on how that works.
.. _flow_executor_module_common:
Executor/module_common.py
-------------------------
Code in :file:`executor/module_common.py` assembles the module
to be shipped to the managed node. The module is first read in, then examined
to determine its type:
* :ref:`PowerShell <flow_powershell_modules>` and :ref:`JSON-args modules <flow_jsonargs_modules>` are passed through :ref:`Module Replacer <module_replacer>`.
* New-style :ref:`Python modules <flow_python_modules>` are assembled by :ref:`Ansiballz`.
* :ref:`Non-native-want-JSON <flow_want_json_modules>`, :ref:`Binary modules <flow_binary_modules>`, and :ref:`Old-Style modules <flow_old_style_modules>` aren't touched by either of these and pass through unchanged.
After the assembling step, one final
modification is made to all modules that have a shebang line. Ansible checks
whether the interpreter in the shebang line has a specific path configured via
an ``ansible_$X_interpreter`` inventory variable. If it does, Ansible
substitutes that path for the interpreter path given in the module. After
this, Ansible returns the complete module data and the module type to the
:ref:`Normal Action <flow_normal_action_plugin>` which continues execution of
the module.
Assembler frameworks
--------------------
Ansible supports two assembler frameworks: Ansiballz and the older Module Replacer.
.. _module_replacer:
Module Replacer framework
^^^^^^^^^^^^^^^^^^^^^^^^^
The Module Replacer framework is the original framework implementing new-style
modules, and is still used for PowerShell modules. It is essentially a preprocessor (like the C Preprocessor for those
familiar with that programming language). It does straight substitutions of
specific substring patterns in the module file. There are two types of
substitutions:
* Replacements that only happen in the module file. These are public
replacement strings that modules can utilize to get helpful boilerplate or
access to arguments.
- :code:`from ansible.module_utils.MOD_LIB_NAME import *` is replaced with the
contents of the :file:`ansible/module_utils/MOD_LIB_NAME.py` These should
only be used with :ref:`new-style Python modules <flow_python_modules>`.
- :code:`#<<INCLUDE_ANSIBLE_MODULE_COMMON>>` is equivalent to
:code:`from ansible.module_utils.basic import *` and should also only apply
to new-style Python modules.
- :code:`# POWERSHELL_COMMON` substitutes the contents of
:file:`ansible/module_utils/powershell.ps1`. It should only be used with
:ref:`new-style Powershell modules <flow_powershell_modules>`.
* Replacements that are used by ``ansible.module_utils`` code. These are internal replacement patterns. They may be used internally, in the above public replacements, but shouldn't be used directly by modules.
- :code:`"<<ANSIBLE_VERSION>>"` is substituted with the Ansible version. In
:ref:`new-style Python modules <flow_python_modules>` under the
:ref:`Ansiballz` framework the proper way is to instead instantiate an
`AnsibleModule` and then access the version from
:attr:`AnsibleModule.ansible_version`.
- :code:`"<<INCLUDE_ANSIBLE_MODULE_COMPLEX_ARGS>>"` is substituted with
a string which is the Python ``repr`` of the :term:`JSON` encoded module
parameters. Using ``repr`` on the JSON string makes it safe to embed in
a Python file. In new-style Python modules under the Ansiballz framework
this is better accessed by instantiating an `AnsibleModule` and
then using :attr:`AnsibleModule.params`.
- :code:`<<SELINUX_SPECIAL_FILESYSTEMS>>` substitutes a string which is
a comma separated list of file systems which have a file system dependent
security context in SELinux. In new-style Python modules, if you really
need this you should instantiate an `AnsibleModule` and then use
:attr:`AnsibleModule._selinux_special_fs`. The variable has also changed
from a comma separated string of file system names to an actual python
list of filesystem names.
- :code:`<<INCLUDE_ANSIBLE_MODULE_JSON_ARGS>>` substitutes the module
parameters as a JSON string. Care must be taken to properly quote the
string as JSON data may contain quotes. This pattern is not substituted
in new-style Python modules as they can get the module parameters another
way.
- The string :code:`syslog.LOG_USER` is replaced wherever it occurs with the
``syslog_facility`` which was named in :file:`ansible.cfg` or any
``ansible_syslog_facility`` inventory variable that applies to this host. In
new-style Python modules this has changed slightly. If you really need to
access it, you should instantiate an `AnsibleModule` and then use
:attr:`AnsibleModule._syslog_facility` to access it. It is no longer the
actual syslog facility and is now the name of the syslog facility. See
the :ref:`documentation on internal arguments <flow_internal_arguments>`
for details.
.. _Ansiballz:
Ansiballz framework
^^^^^^^^^^^^^^^^^^^
The Ansiballz framework was adopted in Ansible 2.1 and is used for all new-style Python modules. Unlike the Module Replacer, Ansiballz uses real Python imports of things in
:file:`ansible/module_utils` instead of merely preprocessing the module. It
does this by constructing a zipfile -- which includes the module file, files
in :file:`ansible/module_utils` that are imported by the module, and some
boilerplate to pass in the module's parameters. The zipfile is then Base64
encoded and wrapped in a small Python script which decodes the Base64 encoding
and places the zipfile into a temp directory on the managed node. It then
extracts just the Ansible module script from the zip file and places that in
the temporary directory as well. Then it sets the PYTHONPATH to find Python
modules inside of the zip file and imports the Ansible module as the special name, ``__main__``.
Importing it as ``__main__`` causes Python to think that it is executing a script rather than simply
importing a module. This lets Ansible run both the wrapper script and the module code in a single copy of Python on the remote machine.
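The core trick can be demonstrated in isolation with a toy reconstruction (this is not the actual wrapper code):

.. code-block:: python

    import runpy
    import sys
    import zipfile

    # Build a throwaway zip payload containing a tiny stand-in "module".
    with zipfile.ZipFile('payload.zip', 'w') as zf:
        zf.writestr('toy_module.py', "print('executing with __name__ =', __name__)\n")

    # Put the zipfile on the import path, then execute the module under the
    # special name __main__, just as the Ansiballz wrapper does.
    sys.path.insert(0, 'payload.zip')
    runpy.run_module('toy_module', run_name='__main__')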
.. note::
* Ansible wraps the zipfile in the Python script for two reasons:
* for compatibility with Python 2.6 which has a less
functional version of Python's ``-m`` command line switch.
* so that pipelining will function properly. Pipelining needs to pipe the
Python module into the Python interpreter on the remote node. Python
understands scripts on stdin but does not understand zip files.
* Prior to Ansible 2.7, the module was executed via a second Python interpreter instead of being
executed inside of the same process. This change was made once Python-2.4 support was dropped
to speed up module execution.
In Ansiballz, any imports of Python modules from the
:py:mod:`ansible.module_utils` package trigger inclusion of that Python file
into the zipfile. Instances of :code:`#<<INCLUDE_ANSIBLE_MODULE_COMMON>>` in
the module are turned into :code:`from ansible.module_utils.basic import *`
and :file:`ansible/module_utils/basic.py` is then included in the zipfile.
Files that are included from :file:`module_utils` are themselves scanned for
imports of other Python modules from :file:`module_utils` to be included in
the zipfile as well.
.. warning::
At present, the Ansiballz Framework cannot determine whether an import
should be included if it is a relative import. Always use an absolute
import that has :py:mod:`ansible.module_utils` in it to allow Ansiballz to
determine that the file should be included.
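For example, a sketch of an import the scanner can trace, versus one it may not:
.. code-block:: python

    # Included in the zipfile: Ansiballz sees the absolute module_utils path.
    from ansible.module_utils.basic import AnsibleModule
    from ansible.module_utils._text import to_native

    # A relative import like the following may not be detected -- avoid it:
    # from ._text import to_native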
.. _flow_passing_module_args:
Passing args
------------
Arguments are passed differently by the two frameworks:
* In :ref:`module_replacer`, module arguments are turned into a JSON-ified string and substituted into the combined module file.
* In :ref:`Ansiballz`, the JSON-ified string is part of the script which wraps the zipfile. Just before the wrapper script imports the Ansible module as ``__main__``, it monkey-patches the private ``_ANSIBLE_ARGS`` variable in ``basic.py`` with the variable values. When a :class:`ansible.module_utils.basic.AnsibleModule` is instantiated, it parses this string and places the args into :attr:`AnsibleModule.params` where it can be accessed by the module's other code.
.. warning::
If you are writing modules, remember that the way we pass arguments is an internal implementation detail: it has changed in the past and will change again as soon as changes to the common module_utils
code allow Ansible modules to forgo using :class:`ansible.module_utils.basic.AnsibleModule`. Do not rely on the internal global ``_ANSIBLE_ARGS`` variable.
Very dynamic custom modules which need to parse arguments before they
instantiate an ``AnsibleModule`` may use ``_load_params`` to retrieve those parameters.
Although ``_load_params`` may change in breaking ways if necessary to support
changes in the code, it is likely to be more stable than either the way we pass parameters or the internal global variable.
.. note::
Prior to Ansible 2.7, the Ansible module was invoked in a second Python interpreter and the
arguments were then passed to the script over the script's stdin.
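A hedged sketch of how such a dynamic module might use ``_load_params`` (the helper is internal and unsupported, so treat the exact shape as illustrative):
.. code-block:: python

    from ansible.module_utils.basic import AnsibleModule, _load_params

    # Peek at the raw parameters before committing to an argument_spec.
    raw_params = _load_params()
    user_keys = [k for k in raw_params if not k.startswith('_ansible_')]

    # Hypothetical: build a permissive spec from whatever keys were passed.
    module = AnsibleModule(argument_spec=dict((k, dict(type='raw')) for k in user_keys))
    module.exit_json(changed=False, seen=user_keys)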
.. _flow_internal_arguments:
Internal arguments
------------------
Both :ref:`module_replacer` and :ref:`Ansiballz` send additional arguments to
the module beyond those which the user specified in the playbook. These
additional arguments are internal parameters that help implement global
Ansible features. Modules often do not need to know about these explicitly as
the features are implemented in :py:mod:`ansible.module_utils.basic` but certain
features need support from the module so it's good to know about them.
The internal arguments listed here are global. If you need to add a local internal argument to a custom module, create an action plugin for that specific module - see ``_original_basename`` in the `copy action plugin <https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/action/copy.py#L329>`_ for an example.
_ansible_no_log
^^^^^^^^^^^^^^^
Boolean. Set to True whenever a parameter in a task or play specifies ``no_log``. Any module that calls :py:meth:`AnsibleModule.log` handles this automatically. If a module implements its own logging then
it needs to check this value. To access in a module, instantiate an
``AnsibleModule`` and then check the value of :attr:`AnsibleModule.no_log`.
.. note::
``no_log`` specified in a module's argument_spec is handled by a different mechanism.
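A minimal sketch of a module that does its own logging and honors the flag (assuming ``module`` is an instantiated ``AnsibleModule``):
.. code-block:: python

    import syslog

    # Only emit a custom log line when no_log has not been requested.
    if not module.no_log:
        syslog.syslog(syslog.LOG_INFO, 'example module invoked')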
_ansible_debug
^^^^^^^^^^^^^^^
Boolean. Turns more verbose logging on or off and turns on logging of
external commands that the module executes. If a module uses
:py:meth:`AnsibleModule.debug` rather than :py:meth:`AnsibleModule.log` then
the messages are only logged if ``_ansible_debug`` is set to ``True``.
To set, add ``debug: True`` to :file:`ansible.cfg` or set the environment
variable :envvar:`ANSIBLE_DEBUG`. To access in a module, instantiate an
``AnsibleModule`` and access :attr:`AnsibleModule._debug`.
_ansible_diff
^^^^^^^^^^^^^^^
Boolean. If a module supports it, tells the module to show a unified diff of
changes to be made to templated files. To set, pass the ``--diff`` command line
option. To access in a module, instantiate an `AnsibleModule` and access
:attr:`AnsibleModule._diff`.
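For instance, a sketch of a module reporting a diff only when requested (``before_text`` and ``after_text`` are hypothetical values the module computed, and ``module`` is assumed to be an instantiated ``AnsibleModule``):
.. code-block:: python

    before_text = 'old contents\n'   # hypothetical
    after_text = 'new contents\n'    # hypothetical

    result = dict(changed=(before_text != after_text))
    if module._diff:
        result['diff'] = dict(before=before_text, after=after_text)
    module.exit_json(**result)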
_ansible_verbosity
^^^^^^^^^^^^^^^^^^
Unused. This value could be used for finer grained control over logging.
_ansible_selinux_special_fs
^^^^^^^^^^^^^^^^^^^^^^^^^^^
List. Names of filesystems which should have a special SELinux
context. They are used by the `AnsibleModule` methods which operate on
files (changing attributes, moving, and copying). To set, add a comma separated string of filesystem names in :file:`ansible.cfg`::
# ansible.cfg
[selinux]
special_context_filesystems=nfs,vboxsf,fuse,ramfs,vfat
Most modules can use the built-in ``AnsibleModule`` methods to manipulate
files. To access in a module that needs to know about these special context filesystems, instantiate an ``AnsibleModule`` and examine the list in
:attr:`AnsibleModule._selinux_special_fs`.
This replaces :attr:`ansible.module_utils.basic.SELINUX_SPECIAL_FS` from
:ref:`module_replacer`. In module replacer it was a comma separated string of
filesystem names. Under Ansiballz it's an actual list.
.. versionadded:: 2.1
_ansible_syslog_facility
^^^^^^^^^^^^^^^^^^^^^^^^
This parameter controls which syslog facility Ansible module logs to. To set, change the ``syslog_facility`` value in :file:`ansible.cfg`. Most
modules should just use :meth:`AnsibleModule.log` which will then make use of
this. If a module has to use this on its own, it should instantiate an
`AnsibleModule` and then retrieve the name of the syslog facility from
:attr:`AnsibleModule._syslog_facility`. The Ansiballz code is less hacky than the old :ref:`module_replacer` code:
.. code-block:: python
# Old module_replacer way
import syslog
syslog.openlog(NAME, 0, syslog.LOG_USER)
# New Ansiballz way
import syslog
facility_name = module._syslog_facility
facility = getattr(syslog, facility_name, syslog.LOG_USER)
syslog.openlog(NAME, 0, facility)
.. versionadded:: 2.1
_ansible_version
^^^^^^^^^^^^^^^^
This parameter passes the version of Ansible that runs the module. To access
it, a module should instantiate an `AnsibleModule` and then retrieve it
from :attr:`AnsibleModule.ansible_version`. This replaces
:attr:`ansible.module_utils.basic.ANSIBLE_VERSION` from
:ref:`module_replacer`.
.. versionadded:: 2.1
.. _flow_module_return_values:
Module return values & Unsafe strings
-------------------------------------
At the end of a module's execution, it formats the data that it wants to return as a JSON string and prints the string to its stdout. The normal action plugin receives the JSON string, parses it into a Python dictionary, and returns it to the executor.
If Ansible templated every string return value, it would be vulnerable to an attack from users with access to managed nodes. If an unscrupulous user disguised malicious code as Ansible return value strings, and if those strings were then templated on the controller, Ansible could execute arbitrary code. To prevent this scenario, Ansible marks all strings inside returned data as ``Unsafe``, emitting any Jinja2 templates in the strings verbatim, not expanded by Jinja2.
Strings returned by invoking a module through ``ActionPlugin._execute_module()`` are automatically marked as ``Unsafe`` by the normal action plugin. If another action plugin retrieves information from a module through some other means, it must mark its return data as ``Unsafe`` on its own.
In case a poorly-coded action plugin fails to mark its results as "Unsafe," Ansible audits the results again when they are returned to the executor,
marking all strings as ``Unsafe``. The normal action plugin protects itself and any other code that it calls with the result data as a parameter. The check inside the executor protects the output of all other action plugins, ensuring that subsequent tasks run by Ansible will not template anything from those results either.
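For a custom action plugin that obtains strings some other way, a controller-side sketch (``fetch_inventory_hint`` is a hypothetical helper; ``wrap_var`` lives in ``ansible.utils.unsafe_proxy``):
.. code-block:: python

    from ansible.utils.unsafe_proxy import wrap_var

    def fetch_inventory_hint():
        # Hypothetical out-of-band retrieval of remote-controlled text.
        return {'motd': '{{ lookup("pipe", "do-evil") }}'}

    # wrap_var recursively marks all contained strings as Unsafe so they
    # will never be templated by Jinja2 on the controller.
    safe_result = wrap_var(fetch_inventory_hint())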
.. _flow_special_considerations:
Special considerations
----------------------
.. _flow_pipelining:
Pipelining
^^^^^^^^^^
Ansible can transfer a module to a remote machine in one of two ways:
* it can write out the module to a temporary file on the remote host and then
use a second connection to the remote host to execute it with the
interpreter that the module needs
* or it can use what's known as pipelining to execute the module by piping it
into the remote interpreter's stdin.
Pipelining only works with modules written in Python at this time because
Ansible only knows that Python supports this mode of operation. Supporting
pipelining means that whatever format the module payload takes before being
sent over the wire must be executable by Python via stdin.
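For reference, pipelining is commonly switched on in :file:`ansible.cfg` (a sketch following the config style shown elsewhere in this document)::
    # ansible.cfg
    [ssh_connection]
    pipelining = True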
.. _flow_args_over_stdin:
Why pass args over stdin?
^^^^^^^^^^^^^^^^^^^^^^^^^
Passing arguments via stdin was chosen for the following reasons:
* When combined with :ref:`ANSIBLE_PIPELINING`, this keeps the module's arguments from
temporarily being saved onto disk on the remote machine. This makes it
harder (but not impossible) for a malicious user on the remote machine to
steal any sensitive information that may be present in the arguments.
* Command line arguments would be insecure as most systems allow unprivileged
users to read the full commandline of a process.
* Environment variables are usually more secure than the commandline but some
systems limit the total size of the environment. This could lead to
truncation of the parameters if we hit that limit.
.. _flow_ansiblemodule:
AnsibleModule
-------------
.. _argument_spec:
Argument spec
^^^^^^^^^^^^^
The ``argument_spec`` provided to ``AnsibleModule`` defines the supported arguments for a module, as well as their type, defaults and more.
Example ``argument_spec``:
.. code-block:: python
module = AnsibleModule(argument_spec=dict(
top_level=dict(
type='dict',
options=dict(
second_level=dict(
default=True,
type='bool',
)
)
)
))
This section will discuss the behavioral attributes for arguments:
type
""""
``type`` allows you to define the type of the value accepted for the argument. The default value for ``type`` is ``str``. Possible values are:
* str
* list
* dict
* bool
* int
* float
* path
* raw
* jsonarg
* json
* bytes
* bits
The ``raw`` type performs no type validation or type casting, and maintains the type of the passed value.
elements
""""""""
``elements`` works in combination with ``type`` when ``type='list'``. ``elements`` can then be defined as ``elements='int'`` or any other type, indicating that each element of the specified list should be of that type.
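For instance, a hypothetical ``ports`` argument accepting a list of integers might be declared as::
    ports=dict(type='list', elements='int')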
default
"""""""
The ``default`` option sets a default value for the argument when the argument is not provided to the module. When not specified, the default value is ``None``.
fallback
""""""""
``fallback`` accepts a ``tuple`` where the first element is a callable (function) that will be used to perform the lookup, based on the second element. The second element is a list of values to be passed to the callable.
The most common callable used is ``env_fallback`` which will allow an argument to optionally use an environment variable when the argument is not supplied.
Example::
username=dict(fallback=(env_fallback, ['ANSIBLE_NET_USERNAME']))
choices
"""""""
``choices`` accepts a list of choices that the argument will accept. The types of ``choices`` should match the ``type``.
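For instance, a hypothetical ``state`` argument might be declared as::
    state=dict(type='str', choices=['present', 'absent'])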
required
""""""""
``required`` accepts a boolean, either ``True`` or ``False``, that indicates whether the argument is required. This should not be used in combination with ``default``.
no_log
""""""
``no_log`` indicates that the value of the argument should not be logged or displayed.
aliases
"""""""
``aliases`` accepts a list of alternative argument names for the argument, such as the case where the argument is ``name`` but the module accepts ``aliases=['pkg']`` to allow ``pkg`` to be used interchangeably with ``name``.
options
"""""""
``options`` implements the ability to create a sub-argument_spec, where the sub options of the top level argument are also validated using the attributes discussed in this section. The example at the top of this section demonstrates use of ``options``. ``type`` or ``elements`` should be ``dict`` in this case.
apply_defaults
""""""""""""""
``apply_defaults`` works alongside ``options`` and allows the ``default`` of the sub-options to be applied even when the top-level argument is not supplied.
In the example of the ``argument_spec`` at the top of this section, it would allow ``module.params['top_level']['second_level']`` to be defined, even if the user does not provide ``top_level`` when calling the module.
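As a sketch, adding ``apply_defaults=True`` to the earlier example would look like::
    top_level=dict(
        type='dict',
        apply_defaults=True,
        options=dict(
            second_level=dict(
                default=True,
                type='bool',
            )
        )
    )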
removed_in_version
""""""""""""""""""
``removed_in_version`` indicates which version of Ansible a deprecated argument will be removed in.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,656 |
Honor explicitly marking a parameter as not sensitive.
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
The way that the `no_log` parameter is currently implemented in `AnsibleModule._log_invocation` is overly simplistic, and appears to violate the principle of least astonishment due to its overly brief documentation failing to mention the corner cases for which it appears to have been written. As `no_log` is currently implemented, it will:
* Not log a parameter value if `no_log` is truthy.
* Not log a parameter value and complain if the parameter name matches `PASSWORD_MATCH` and is neither a `bool` nor an enum.
* Otherwise, log the parameter.
On the surface (as a module writer), the documentation (see: [Protecting sensitive data with `no_log`](https://docs.ansible.com/ansible/devel/reference_appendices/logging.html?highlight=no_log#protecting-sensitive-data-with-no-log)) states that by passing `no_log=True` you are explicitly marking a parameter as sensitive. This makes sense. It would then follow that by passing `no_log=False` you are explicitly marking a parameter as not sensitive. This is where the current implementation fails. There is currently no way (that I can see) for a module author that has properly developed and tested a module to avoid a warning regarding the `no_log` parameter if the parameter matches `PASSWORD_MATCH` (at least, not that I can see in the documentation or in the current HEAD).
I certainly understand the need to protect users from the whims of careless third party module authors (some of whom may have a tenuous grasp on Python at best), but for those of us who write and maintain our own modules professionally it seems like a rather strange choice to provide no avenue for a knowledgeable module author to say, "no, pass_interval really is not a password, and no, there is no other name that would work without causing excessive end-user confusion".
The feature request is basically: can `AnsibleModule._log_invocation` be adjusted to actually honor `no_log=False` as the opposite of `no_log=True`, which it currently does not do? I do not believe it would be difficult to implement while maintaining the current behavior where a module author that does not specify either way would be caught by `PASSWORD_MATCH`.
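As an illustrative sketch (the argument names are hypothetical), the desired contract would be:

```python
module = AnsibleModule(argument_spec=dict(
    # Explicitly not sensitive: matches PASSWORD_MATCH, but no warning wanted.
    pass_interval=dict(type='int', no_log=False),
    # Explicitly sensitive: value must never be logged.
    api_password=dict(type='str', no_log=True),
))
```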
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
lib/ansible/module_utils/basic.py
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
This change would not be user facing (typically) unless you're a module author. It is needed because, at the current time, there is no way to mark a parameter as explicitly not sensitive (as `no_log=False` is not being honored effectively) and avoid erroneous messages in the logs. Some authors have been getting around this by choosing more obscure names that diverge from current documentation for their parameters, but you're basically hiding from the problem at that point, and confusing end-users.
<!--- Paste example playbooks or commands between quotes below -->
<!--- HINT: You can also paste gist.github.com links for larger files -->
Duplicate of a proposed solution to https://github.com/ansible/ansible/issues/49465
|
https://github.com/ansible/ansible/issues/64656
|
https://github.com/ansible/ansible/pull/64733
|
93bfa4b07ad30cce58a30ffda5db866895aad075
|
3ca4580cb4e2a24597c6c5108bf76bbcd48069f8
| 2019-11-10T15:28:11Z |
python
| 2020-01-09T21:47:57Z |
lib/ansible/module_utils/basic.py
|
# Copyright (c), Michael DeHaan <[email protected]>, 2012-2013
# Copyright (c), Toshio Kuratomi <[email protected]> 2016
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
from __future__ import absolute_import, division, print_function
FILE_ATTRIBUTES = {
'A': 'noatime',
'a': 'append',
'c': 'compressed',
'C': 'nocow',
'd': 'nodump',
'D': 'dirsync',
'e': 'extents',
'E': 'encrypted',
'h': 'blocksize',
'i': 'immutable',
'I': 'indexed',
'j': 'journalled',
'N': 'inline',
's': 'zero',
'S': 'synchronous',
't': 'notail',
'T': 'blockroot',
'u': 'undelete',
'X': 'compressedraw',
'Z': 'compresseddirty',
}
# Ansible modules can be written in any language.
# The functions available here can be used to do many common tasks,
# to simplify development of Python modules.
import __main__
import atexit
import errno
import datetime
import grp
import fcntl
import locale
import os
import pwd
import platform
import re
import select
import shlex
import shutil
import signal
import stat
import subprocess
import sys
import tempfile
import time
import traceback
import types
from collections import deque
from itertools import chain, repeat
try:
import syslog
HAS_SYSLOG = True
except ImportError:
HAS_SYSLOG = False
try:
from systemd import journal
has_journal = True
except ImportError:
has_journal = False
HAVE_SELINUX = False
try:
import selinux
HAVE_SELINUX = True
except ImportError:
pass
# Python2 & 3 way to get NoneType
NoneType = type(None)
from ._text import to_native, to_bytes, to_text
from ansible.module_utils.common.text.converters import (
jsonify,
container_to_bytes as json_dict_unicode_to_bytes,
container_to_text as json_dict_bytes_to_unicode,
)
from ansible.module_utils.common.text.formatters import (
lenient_lowercase,
bytes_to_human,
human_to_bytes,
SIZE_RANGES,
)
try:
from ansible.module_utils.common._json_compat import json
except ImportError as e:
print('\n{{"msg": "Error: ansible requires the stdlib json: {0}", "failed": true}}'.format(to_native(e)))
sys.exit(1)
AVAILABLE_HASH_ALGORITHMS = dict()
try:
import hashlib
# python 2.7.9+ and 2.7.0+
for attribute in ('available_algorithms', 'algorithms'):
algorithms = getattr(hashlib, attribute, None)
if algorithms:
break
if algorithms is None:
# python 2.5+
algorithms = ('md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512')
for algorithm in algorithms:
AVAILABLE_HASH_ALGORITHMS[algorithm] = getattr(hashlib, algorithm)
# we may have been able to import md5 but it could still not be available
try:
hashlib.md5()
except ValueError:
AVAILABLE_HASH_ALGORITHMS.pop('md5', None)
except Exception:
import sha
AVAILABLE_HASH_ALGORITHMS = {'sha1': sha.sha}
try:
import md5
AVAILABLE_HASH_ALGORITHMS['md5'] = md5.md5
except Exception:
pass
from ansible.module_utils.common._collections_compat import (
KeysView,
Mapping, MutableMapping,
Sequence, MutableSequence,
Set, MutableSet,
)
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.file import (
_PERM_BITS as PERM_BITS,
_EXEC_PERM_BITS as EXEC_PERM_BITS,
_DEFAULT_PERM as DEFAULT_PERM,
is_executable,
format_attributes,
get_flags_from_attributes,
)
from ansible.module_utils.common.sys_info import (
get_distribution,
get_distribution_version,
get_platform_subclass,
)
from ansible.module_utils.pycompat24 import get_exception, literal_eval
from ansible.module_utils.common.parameters import (
handle_aliases,
list_deprecations,
list_no_log_values,
PASS_VARS,
PASS_BOOLS,
)
from ansible.module_utils.six import (
PY2,
PY3,
b,
binary_type,
integer_types,
iteritems,
string_types,
text_type,
)
from ansible.module_utils.six.moves import map, reduce, shlex_quote
from ansible.module_utils.common.validation import (
check_missing_parameters,
check_mutually_exclusive,
check_required_arguments,
check_required_by,
check_required_if,
check_required_one_of,
check_required_together,
count_terms,
check_type_bool,
check_type_bits,
check_type_bytes,
check_type_float,
check_type_int,
check_type_jsonarg,
check_type_list,
check_type_dict,
check_type_path,
check_type_raw,
check_type_str,
safe_eval,
)
from ansible.module_utils.common._utils import get_all_subclasses as _get_all_subclasses
from ansible.module_utils.parsing.convert_bool import BOOLEANS, BOOLEANS_FALSE, BOOLEANS_TRUE, boolean
# Note: When getting Sequence from collections, it matches with strings. If
# this matters, make sure to check for strings before checking for sequencetype
SEQUENCETYPE = frozenset, KeysView, Sequence
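# Regex matching parameter names that look password-like; used when logging
# the module invocation to avoid recording likely-sensitive values.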
PASSWORD_MATCH = re.compile(r'^(?:.+[-_\s])?pass(?:[-_\s]?(?:word|phrase|wrd|wd)?)(?:[-_\s].+)?$', re.I)
imap = map
try:
# Python 2
unicode
except NameError:
# Python 3
unicode = text_type
try:
# Python 2
basestring
except NameError:
# Python 3
basestring = string_types
_literal_eval = literal_eval
# End of deprecated names
# Internal global holding passed in params. This is consulted in case
# multiple AnsibleModules are created. Otherwise each AnsibleModule would
# attempt to read from stdin. Other code should not use this directly as it
# is an internal implementation detail
_ANSIBLE_ARGS = None
FILE_COMMON_ARGUMENTS = dict(
# These are things we want. About setting metadata (mode, ownership, permissions in general) on
# created files (these are used by set_fs_attributes_if_different and included in
# load_file_common_arguments)
mode=dict(type='raw'),
owner=dict(),
group=dict(),
seuser=dict(),
serole=dict(),
selevel=dict(),
setype=dict(),
attributes=dict(aliases=['attr']),
# The following are not about perms and should not be in a rewritten file_common_args
src=dict(), # Maybe dest or path would be appropriate but src is not
follow=dict(type='bool', default=False), # Maybe follow is appropriate because it determines whether to follow symlinks for permission purposes too
force=dict(type='bool'),
# not taken by the file module, but other action plugins call the file module so this ignores
# them for now. In the future, the caller should take care of removing these from the module
# arguments before calling the file module.
content=dict(no_log=True), # used by copy
backup=dict(), # Used by a few modules to create a remote backup before updating the file
remote_src=dict(), # used by assemble
regexp=dict(), # used by assemble
delimiter=dict(), # used by assemble
directory_mode=dict(), # used by copy
unsafe_writes=dict(type='bool'), # should be available to any module using atomic_move
)
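# Regex matching command-line arguments that look like passwords (e.g. --password=...)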
PASSWD_ARG_RE = re.compile(r'^[-]{0,2}pass[-]?(word|wd)?')
# Used for parsing symbolic file perms
MODE_OPERATOR_RE = re.compile(r'[+=-]')
USERS_RE = re.compile(r'[^ugo]')
PERMS_RE = re.compile(r'[^rwxXstugo]')
# Used for determining if the system is running a new enough python version
# and should only restrict on our documented minimum versions
_PY3_MIN = sys.version_info[:2] >= (3, 5)
_PY2_MIN = (2, 6) <= sys.version_info[:2] < (3,)
_PY_MIN = _PY3_MIN or _PY2_MIN
if not _PY_MIN:
print(
'\n{"failed": true, '
'"msg": "Ansible requires a minimum of Python2 version 2.6 or Python3 version 3.5. Current version: %s"}' % ''.join(sys.version.splitlines())
)
sys.exit(1)
#
# Deprecated functions
#
def get_platform():
'''
**Deprecated** Use :py:func:`platform.system` directly.
:returns: Name of the platform the module is running on in a native string
Returns a native string that labels the platform ("Linux", "Solaris", etc). Currently, this is
the result of calling :py:func:`platform.system`.
'''
return platform.system()
# End deprecated functions
#
# Compat shims
#
def load_platform_subclass(cls, *args, **kwargs):
"""**Deprecated**: Use ansible.module_utils.common.sys_info.get_platform_subclass instead"""
platform_cls = get_platform_subclass(cls)
return super(cls, platform_cls).__new__(platform_cls)
def get_all_subclasses(cls):
"""**Deprecated**: Use ansible.module_utils.common._utils.get_all_subclasses instead"""
return list(_get_all_subclasses(cls))
# End compat shims
def _remove_values_conditions(value, no_log_strings, deferred_removals):
"""
Helper function for :meth:`remove_values`.
:arg value: The value to check for strings that need to be stripped
:arg no_log_strings: set of strings which must be stripped out of any values
:arg deferred_removals: List which holds information about nested
containers that have to be iterated for removals. It is passed into
this function so that more entries can be added to it if value is
a container type. The format of each entry is a 2-tuple where the first
element is the ``value`` parameter and the second value is a new
container to copy the elements of ``value`` into once iterated.
:returns: if ``value`` is a scalar, returns ``value`` with two exceptions:
1. :class:`~datetime.datetime` objects which are changed into a string representation.
2. objects which are in no_log_strings are replaced with a placeholder
so that no sensitive data is leaked.
If ``value`` is a container type, returns a new empty container.
``deferred_removals`` is added to as a side-effect of this function.
.. warning:: It is up to the caller to make sure the order in which value
is passed in is correct. For instance, higher level containers need
to be passed in before lower level containers. For example, given
``{'level1': {'level2': 'level3': [True]} }`` first pass in the
dictionary for ``level1``, then the dict for ``level2``, and finally
the list for ``level3``.
"""
if isinstance(value, (text_type, binary_type)):
# Need native str type
native_str_value = value
if isinstance(value, text_type):
value_is_text = True
if PY2:
native_str_value = to_bytes(value, errors='surrogate_or_strict')
elif isinstance(value, binary_type):
value_is_text = False
if PY3:
native_str_value = to_text(value, errors='surrogate_or_strict')
if native_str_value in no_log_strings:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
for omit_me in no_log_strings:
native_str_value = native_str_value.replace(omit_me, '*' * 8)
if value_is_text and isinstance(native_str_value, binary_type):
value = to_text(native_str_value, encoding='utf-8', errors='surrogate_then_replace')
elif not value_is_text and isinstance(native_str_value, text_type):
value = to_bytes(native_str_value, encoding='utf-8', errors='surrogate_then_replace')
else:
value = native_str_value
elif isinstance(value, Sequence):
if isinstance(value, MutableSequence):
new_value = type(value)()
else:
new_value = [] # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, Set):
if isinstance(value, MutableSet):
new_value = type(value)()
else:
new_value = set() # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, Mapping):
if isinstance(value, MutableMapping):
new_value = type(value)()
else:
new_value = {} # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, tuple(chain(integer_types, (float, bool, NoneType)))):
stringy_value = to_native(value, encoding='utf-8', errors='surrogate_or_strict')
if stringy_value in no_log_strings:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
for omit_me in no_log_strings:
if omit_me in stringy_value:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
elif isinstance(value, datetime.datetime):
value = value.isoformat()
else:
raise TypeError('Value of unknown type: %s, %s' % (type(value), value))
return value
def remove_values(value, no_log_strings):
""" Remove strings in no_log_strings from value. If value is a container
type, then remove a lot more"""
deferred_removals = deque()
no_log_strings = [to_native(s, errors='surrogate_or_strict') for s in no_log_strings]
new_value = _remove_values_conditions(value, no_log_strings, deferred_removals)
while deferred_removals:
old_data, new_data = deferred_removals.popleft()
if isinstance(new_data, Mapping):
for old_key, old_elem in old_data.items():
new_elem = _remove_values_conditions(old_elem, no_log_strings, deferred_removals)
new_data[old_key] = new_elem
else:
for elem in old_data:
new_elem = _remove_values_conditions(elem, no_log_strings, deferred_removals)
if isinstance(new_data, MutableSequence):
new_data.append(new_elem)
elif isinstance(new_data, MutableSet):
new_data.add(new_elem)
else:
raise TypeError('Unknown container type encountered when removing private values from output')
return new_value
def heuristic_log_sanitize(data, no_log_values=None):
''' Remove strings that look like passwords from log messages '''
# Currently filters:
# user:pass@foo/whatever and http://username:pass@wherever/foo
# This code has false positives and consumes parts of logs that are
# not passwds
# begin: start of a passwd containing string
# end: end of a passwd containing string
# sep: char between user and passwd
# prev_begin: where in the overall string to start a search for
# a passwd
# sep_search_end: where in the string to end a search for the sep
data = to_native(data)
output = []
begin = len(data)
prev_begin = begin
sep = 1
while sep:
# Find the potential end of a passwd
try:
end = data.rindex('@', 0, begin)
except ValueError:
# No passwd in the rest of the data
output.insert(0, data[0:begin])
break
# Search for the beginning of a passwd
sep = None
sep_search_end = end
while not sep:
# URL-style username+password
try:
begin = data.rindex('://', 0, sep_search_end)
except ValueError:
# No url style in the data, check for ssh style in the
# rest of the string
begin = 0
# Search for separator
try:
sep = data.index(':', begin + 3, end)
except ValueError:
# No separator; choices:
if begin == 0:
# Searched the whole string so there's no password
# here. Return the remaining data
output.insert(0, data[0:begin])
break
# Search for a different beginning of the password field.
sep_search_end = begin
continue
if sep:
# Password was found; remove it.
output.insert(0, data[end:prev_begin])
output.insert(0, '********')
output.insert(0, data[begin:sep + 1])
prev_begin = begin
output = ''.join(output)
if no_log_values:
output = remove_values(output, no_log_values)
return output
def _load_params():
''' read the module's parameters and store them globally.
This function may be needed for certain very dynamic custom modules which
want to process the parameters that are being handed to the module. Since
this is so closely tied to the implementation of modules we cannot
guarantee API stability for it (it may change between versions) however we
will try not to break it gratuitously. It is certainly more future-proof
to call this function and consume its outputs than to implement the logic
inside it as a copy in your own code.
'''
global _ANSIBLE_ARGS
if _ANSIBLE_ARGS is not None:
buffer = _ANSIBLE_ARGS
else:
# debug overrides to read args from file or cmdline
# Avoid tracebacks when locale is non-utf8
# We control the args and we pass them as utf8
if len(sys.argv) > 1:
if os.path.isfile(sys.argv[1]):
fd = open(sys.argv[1], 'rb')
buffer = fd.read()
fd.close()
else:
buffer = sys.argv[1]
if PY3:
buffer = buffer.encode('utf-8', errors='surrogateescape')
# default case, read from stdin
else:
if PY2:
buffer = sys.stdin.read()
else:
buffer = sys.stdin.buffer.read()
_ANSIBLE_ARGS = buffer
try:
params = json.loads(buffer.decode('utf-8'))
except ValueError:
# This helper used too early for fail_json to work.
print('\n{"msg": "Error: Module unable to decode valid JSON on stdin. Unable to figure out what parameters were passed", "failed": true}')
sys.exit(1)
if PY2:
params = json_dict_unicode_to_bytes(params)
try:
return params['ANSIBLE_MODULE_ARGS']
except KeyError:
# This helper does not have access to fail_json so we have to print
# json output on our own.
print('\n{"msg": "Error: Module unable to locate ANSIBLE_MODULE_ARGS in json data from stdin. Unable to figure out what parameters were passed", '
'"failed": true}')
sys.exit(1)
def env_fallback(*args, **kwargs):
''' Load value from environment '''
for arg in args:
if arg in os.environ:
return os.environ[arg]
raise AnsibleFallbackNotFound
def missing_required_lib(library, reason=None, url=None):
hostname = platform.node()
msg = "Failed to import the required Python library (%s) on %s's Python %s." % (library, hostname, sys.executable)
if reason:
msg += " This is required %s." % reason
if url:
msg += " See %s for more info." % url
msg += (" Please read module documentation and install in the appropriate location."
" If the required library is installed, but Ansible is using the wrong Python interpreter,"
" please consult the documentation on ansible_python_interpreter")
return msg
class AnsibleFallbackNotFound(Exception):
pass
class AnsibleModule(object):
def __init__(self, argument_spec, bypass_checks=False, no_log=False,
check_invalid_arguments=None, mutually_exclusive=None, required_together=None,
required_one_of=None, add_file_common_args=False, supports_check_mode=False,
required_if=None, required_by=None):
'''
Common code for quickly building an ansible module in Python
(although you can write modules with anything that can return JSON).
See :ref:`developing_modules_general` for a general introduction
and :ref:`developing_program_flow_modules` for more detailed explanation.
'''
self._name = os.path.basename(__file__) # initialize name until we can parse from options
self.argument_spec = argument_spec
self.supports_check_mode = supports_check_mode
self.check_mode = False
self.bypass_checks = bypass_checks
self.no_log = no_log
# Check whether code set this explicitly for deprecation purposes
if check_invalid_arguments is None:
check_invalid_arguments = True
module_set_check_invalid_arguments = False
else:
module_set_check_invalid_arguments = True
self.check_invalid_arguments = check_invalid_arguments
self.mutually_exclusive = mutually_exclusive
self.required_together = required_together
self.required_one_of = required_one_of
self.required_if = required_if
self.required_by = required_by
self.cleanup_files = []
self._debug = False
self._diff = False
self._socket_path = None
self._shell = None
self._verbosity = 0
# May be used to set modifications to the environment for any
# run_command invocation
self.run_command_environ_update = {}
self._warnings = []
self._deprecations = []
self._clean = {}
self._string_conversion_action = ''
self.aliases = {}
self._legal_inputs = []
self._options_context = list()
self._tmpdir = None
if add_file_common_args:
for k, v in FILE_COMMON_ARGUMENTS.items():
if k not in self.argument_spec:
self.argument_spec[k] = v
self._load_params()
self._set_fallbacks()
# append to legal_inputs and then possibly check against them
try:
self.aliases = self._handle_aliases()
except (ValueError, TypeError) as e:
# Use exceptions here because it isn't safe to call fail_json until no_log is processed
print('\n{"failed": true, "msg": "Module alias error: %s"}' % to_native(e))
sys.exit(1)
# Save parameter values that should never be logged
self.no_log_values = set()
self._handle_no_log_values()
# check the locale as set by the current environment, and reset to
# a known valid (LANG=C) if it's an invalid/unavailable locale
self._check_locale()
self._check_arguments(check_invalid_arguments)
# check exclusive early
if not bypass_checks:
self._check_mutually_exclusive(mutually_exclusive)
self._set_defaults(pre=True)
self._CHECK_ARGUMENT_TYPES_DISPATCHER = {
'str': self._check_type_str,
'list': self._check_type_list,
'dict': self._check_type_dict,
'bool': self._check_type_bool,
'int': self._check_type_int,
'float': self._check_type_float,
'path': self._check_type_path,
'raw': self._check_type_raw,
'jsonarg': self._check_type_jsonarg,
'json': self._check_type_jsonarg,
'bytes': self._check_type_bytes,
'bits': self._check_type_bits,
}
if not bypass_checks:
self._check_required_arguments()
self._check_argument_types()
self._check_argument_values()
self._check_required_together(required_together)
self._check_required_one_of(required_one_of)
self._check_required_if(required_if)
self._check_required_by(required_by)
self._set_defaults(pre=False)
# deal with options sub-spec
self._handle_options()
if not self.no_log:
self._log_invocation()
# finally, make sure we're in a sane working dir
self._set_cwd()
# Do this at the end so that logging parameters have been set up
# This is to warn third party module authors that the functionality is going away.
# We exclude uri and zfs as they have their own deprecation warnings for users and we'll
# make sure to update their code to stop using check_invalid_arguments when 2.9 rolls around
if module_set_check_invalid_arguments and self._name not in ('uri', 'zfs'):
self.deprecate('Setting check_invalid_arguments is deprecated and will be removed.'
' Update the code for this module. In the future, AnsibleModule will'
' always check for invalid arguments.', version='2.9')
@property
def tmpdir(self):
# if _ansible_tmpdir was not set and we have a remote_tmp,
# the module needs to create it and clean it up once finished.
# otherwise we create our own module tmp dir from the system defaults
if self._tmpdir is None:
basedir = None
if self._remote_tmp is not None:
basedir = os.path.expanduser(os.path.expandvars(self._remote_tmp))
if basedir is not None and not os.path.exists(basedir):
try:
os.makedirs(basedir, mode=0o700)
except (OSError, IOError) as e:
self.warn("Unable to use %s as temporary directory, "
"failing back to system: %s" % (basedir, to_native(e)))
basedir = None
else:
self.warn("Module remote_tmp %s did not exist and was "
"created with a mode of 0700, this may cause"
" issues when running as another user. To "
"avoid this, create the remote_tmp dir with "
"the correct permissions manually" % basedir)
basefile = "ansible-moduletmp-%s-" % time.time()
try:
tmpdir = tempfile.mkdtemp(prefix=basefile, dir=basedir)
except (OSError, IOError) as e:
self.fail_json(
msg="Failed to create remote module tmp path at dir %s "
"with prefix %s: %s" % (basedir, basefile, to_native(e))
)
if not self._keep_remote_files:
atexit.register(shutil.rmtree, tmpdir)
self._tmpdir = tmpdir
return self._tmpdir
def warn(self, warning):
if isinstance(warning, string_types):
self._warnings.append(warning)
self.log('[WARNING] %s' % warning)
else:
raise TypeError("warn requires a string not a %s" % type(warning))
def deprecate(self, msg, version=None):
if isinstance(msg, string_types):
self._deprecations.append({
'msg': msg,
'version': version
})
self.log('[DEPRECATION WARNING] %s %s' % (msg, version))
else:
raise TypeError("deprecate requires a string not a %s" % type(msg))
def load_file_common_arguments(self, params):
'''
many modules deal with files, this encapsulates common
options that the file module accepts such that it is directly
available to all modules and they can share code.
'''
path = params.get('path', params.get('dest', None))
if path is None:
return {}
else:
path = os.path.expanduser(os.path.expandvars(path))
b_path = to_bytes(path, errors='surrogate_or_strict')
# if the path is a symlink, and we're following links, get
# the target of the link instead for testing
if params.get('follow', False) and os.path.islink(b_path):
b_path = os.path.realpath(b_path)
path = to_native(b_path)
mode = params.get('mode', None)
owner = params.get('owner', None)
group = params.get('group', None)
# selinux related options
seuser = params.get('seuser', None)
serole = params.get('serole', None)
setype = params.get('setype', None)
selevel = params.get('selevel', None)
secontext = [seuser, serole, setype]
if self.selinux_mls_enabled():
secontext.append(selevel)
default_secontext = self.selinux_default_context(path)
for i in range(len(default_secontext)):
if secontext[i] == '_default':
secontext[i] = default_secontext[i]
attributes = params.get('attributes', None)
return dict(
path=path, mode=mode, owner=owner, group=group,
seuser=seuser, serole=serole, setype=setype,
selevel=selevel, secontext=secontext, attributes=attributes,
)
# Detect whether using selinux that is MLS-aware.
# While this means you can set the level/range with
# selinux.lsetfilecon(), it may or may not mean that you
# will get the selevel as part of the context returned
# by selinux.lgetfilecon().
def selinux_mls_enabled(self):
if not HAVE_SELINUX:
return False
if selinux.is_selinux_mls_enabled() == 1:
return True
else:
return False
def selinux_enabled(self):
if not HAVE_SELINUX:
seenabled = self.get_bin_path('selinuxenabled')
if seenabled is not None:
(rc, out, err) = self.run_command(seenabled)
if rc == 0:
self.fail_json(msg="Aborting, target uses selinux but python bindings (libselinux-python) aren't installed!")
return False
if selinux.is_selinux_enabled() == 1:
return True
else:
return False
# Determine whether we need a placeholder for selevel/mls
def selinux_initial_context(self):
context = [None, None, None]
if self.selinux_mls_enabled():
context.append(None)
return context
# If selinux fails to find a default, return an array of None
def selinux_default_context(self, path, mode=0):
context = self.selinux_initial_context()
if not HAVE_SELINUX or not self.selinux_enabled():
return context
try:
ret = selinux.matchpathcon(to_native(path, errors='surrogate_or_strict'), mode)
except OSError:
return context
if ret[0] == -1:
return context
# Limit split to 4 because the selevel, the last in the list,
# may contain ':' characters
context = ret[1].split(':', 3)
return context
def selinux_context(self, path):
context = self.selinux_initial_context()
if not HAVE_SELINUX or not self.selinux_enabled():
return context
try:
ret = selinux.lgetfilecon_raw(to_native(path, errors='surrogate_or_strict'))
except OSError as e:
if e.errno == errno.ENOENT:
self.fail_json(path=path, msg='path %s does not exist' % path)
else:
self.fail_json(path=path, msg='failed to retrieve selinux context')
if ret[0] == -1:
return context
# Limit split to 4 because the selevel, the last in the list,
# may contain ':' characters
context = ret[1].split(':', 3)
return context
def user_and_group(self, path, expand=True):
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
st = os.lstat(b_path)
uid = st.st_uid
gid = st.st_gid
return (uid, gid)
def find_mount_point(self, path):
path_is_bytes = False
if isinstance(path, binary_type):
path_is_bytes = True
b_path = os.path.realpath(to_bytes(os.path.expanduser(os.path.expandvars(path)), errors='surrogate_or_strict'))
while not os.path.ismount(b_path):
b_path = os.path.dirname(b_path)
if path_is_bytes:
return b_path
return to_text(b_path, errors='surrogate_or_strict')
def is_special_selinux_path(self, path):
"""
Returns a tuple containing (True, selinux_context) if the given path is on a
NFS or other 'special' fs mount point, otherwise the return will be (False, None).
"""
try:
f = open('/proc/mounts', 'r')
mount_data = f.readlines()
f.close()
except Exception:
return (False, None)
path_mount_point = self.find_mount_point(path)
for line in mount_data:
(device, mount_point, fstype, options, rest) = line.split(' ', 4)
if path_mount_point == mount_point:
for fs in self._selinux_special_fs:
if fs in fstype:
special_context = self.selinux_context(path_mount_point)
return (True, special_context)
return (False, None)
def set_default_selinux_context(self, path, changed):
if not HAVE_SELINUX or not self.selinux_enabled():
return changed
context = self.selinux_default_context(path)
return self.set_context_if_different(path, context, False)
def set_context_if_different(self, path, context, changed, diff=None):
if not HAVE_SELINUX or not self.selinux_enabled():
return changed
if self.check_file_absent_if_check_mode(path):
return True
cur_context = self.selinux_context(path)
new_context = list(cur_context)
# Iterate over the current context instead of the
# argument context, which may have selevel.
(is_special_se, sp_context) = self.is_special_selinux_path(path)
if is_special_se:
new_context = sp_context
else:
for i in range(len(cur_context)):
if len(context) > i:
if context[i] is not None and context[i] != cur_context[i]:
new_context[i] = context[i]
elif context[i] is None:
new_context[i] = cur_context[i]
if cur_context != new_context:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['secontext'] = cur_context
if 'after' not in diff:
diff['after'] = {}
diff['after']['secontext'] = new_context
try:
if self.check_mode:
return True
rc = selinux.lsetfilecon(to_native(path), ':'.join(new_context))
except OSError as e:
self.fail_json(path=path, msg='invalid selinux context: %s' % to_native(e),
new_context=new_context, cur_context=cur_context, input_was=context)
if rc != 0:
self.fail_json(path=path, msg='set selinux context failed')
changed = True
return changed
def set_owner_if_different(self, path, owner, changed, diff=None, expand=True):
if owner is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
orig_uid, orig_gid = self.user_and_group(b_path, expand)
try:
uid = int(owner)
except ValueError:
try:
uid = pwd.getpwnam(owner).pw_uid
except KeyError:
path = to_text(b_path)
self.fail_json(path=path, msg='chown failed: failed to look up user %s' % owner)
if orig_uid != uid:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['owner'] = orig_uid
if 'after' not in diff:
diff['after'] = {}
diff['after']['owner'] = uid
if self.check_mode:
return True
try:
os.lchown(b_path, uid, -1)
except (IOError, OSError) as e:
path = to_text(b_path)
self.fail_json(path=path, msg='chown failed: %s' % (to_text(e)))
changed = True
return changed
def set_group_if_different(self, path, group, changed, diff=None, expand=True):
if group is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
orig_uid, orig_gid = self.user_and_group(b_path, expand)
try:
gid = int(group)
except ValueError:
try:
gid = grp.getgrnam(group).gr_gid
except KeyError:
path = to_text(b_path)
self.fail_json(path=path, msg='chgrp failed: failed to look up group %s' % group)
if orig_gid != gid:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['group'] = orig_gid
if 'after' not in diff:
diff['after'] = {}
diff['after']['group'] = gid
if self.check_mode:
return True
try:
os.lchown(b_path, -1, gid)
except OSError:
path = to_text(b_path)
self.fail_json(path=path, msg='chgrp failed')
changed = True
return changed
def set_mode_if_different(self, path, mode, changed, diff=None, expand=True):
if mode is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
path_stat = os.lstat(b_path)
if self.check_file_absent_if_check_mode(b_path):
return True
if not isinstance(mode, int):
try:
mode = int(mode, 8)
except Exception:
try:
mode = self._symbolic_mode_to_octal(path_stat, mode)
except Exception as e:
path = to_text(b_path)
self.fail_json(path=path,
msg="mode must be in octal or symbolic form",
details=to_native(e))
if mode != stat.S_IMODE(mode):
# prevent mode from having extra info or being an invalid long number
path = to_text(b_path)
self.fail_json(path=path, msg="Invalid mode supplied, only permission info is allowed", details=mode)
prev_mode = stat.S_IMODE(path_stat.st_mode)
if prev_mode != mode:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['mode'] = '0%03o' % prev_mode
if 'after' not in diff:
diff['after'] = {}
diff['after']['mode'] = '0%03o' % mode
if self.check_mode:
return True
# FIXME: comparison against string above will cause this to be executed
# every time
try:
if hasattr(os, 'lchmod'):
os.lchmod(b_path, mode)
else:
if not os.path.islink(b_path):
os.chmod(b_path, mode)
else:
# Attempt to set the perms of the symlink but be
# careful not to change the perms of the underlying
# file while trying
underlying_stat = os.stat(b_path)
os.chmod(b_path, mode)
new_underlying_stat = os.stat(b_path)
if underlying_stat.st_mode != new_underlying_stat.st_mode:
os.chmod(b_path, stat.S_IMODE(underlying_stat.st_mode))
except OSError as e:
if os.path.islink(b_path) and e.errno in (errno.EPERM, errno.EROFS): # Can't set mode on symbolic links
pass
elif e.errno in (errno.ENOENT, errno.ELOOP): # Can't set mode on broken symbolic links
pass
else:
raise
except Exception as e:
path = to_text(b_path)
self.fail_json(path=path, msg='chmod failed', details=to_native(e),
exception=traceback.format_exc())
path_stat = os.lstat(b_path)
new_mode = stat.S_IMODE(path_stat.st_mode)
if new_mode != prev_mode:
changed = True
return changed
def set_attributes_if_different(self, path, attributes, changed, diff=None, expand=True):
if attributes is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
existing = self.get_file_attributes(b_path)
attr_mod = '='
if attributes.startswith(('-', '+')):
attr_mod = attributes[0]
attributes = attributes[1:]
if existing.get('attr_flags', '') != attributes or attr_mod == '-':
attrcmd = self.get_bin_path('chattr')
if attrcmd:
attrcmd = [attrcmd, '%s%s' % (attr_mod, attributes), b_path]
changed = True
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['attributes'] = existing.get('attr_flags')
if 'after' not in diff:
diff['after'] = {}
diff['after']['attributes'] = '%s%s' % (attr_mod, attributes)
if not self.check_mode:
try:
rc, out, err = self.run_command(attrcmd)
if rc != 0 or err:
raise Exception("Error while setting attributes: %s" % (out + err))
except Exception as e:
self.fail_json(path=to_text(b_path), msg='chattr failed',
details=to_native(e), exception=traceback.format_exc())
return changed
def get_file_attributes(self, path):
output = {}
attrcmd = self.get_bin_path('lsattr', False)
if attrcmd:
attrcmd = [attrcmd, '-vd', path]
try:
rc, out, err = self.run_command(attrcmd)
if rc == 0:
res = out.split()
output['attr_flags'] = res[1].replace('-', '').strip()
output['version'] = res[0].strip()
output['attributes'] = format_attributes(output['attr_flags'])
except Exception:
pass
return output
@classmethod
def _symbolic_mode_to_octal(cls, path_stat, symbolic_mode):
"""
This enables symbolic chmod string parsing as stated in the chmod man-page
This includes things like: "u=rw-x+X,g=r-x+X,o=r-x+X"
"""
new_mode = stat.S_IMODE(path_stat.st_mode)
# Now parse all symbolic modes
for mode in symbolic_mode.split(','):
# Per single mode. This always contains a '+', '-' or '='
# Split it on that
permlist = MODE_OPERATOR_RE.split(mode)
# And find all the operators
opers = MODE_OPERATOR_RE.findall(mode)
# The user(s) where it's all about is the first element in the
# 'permlist' list. Take that and remove it from the list.
# An empty user or 'a' means 'all'.
users = permlist.pop(0)
use_umask = (users == '')
if users == 'a' or users == '':
users = 'ugo'
# Check if there are illegal characters in the user list
# They can end up in 'users' because they are not split
if USERS_RE.match(users):
raise ValueError("bad symbolic permission for mode: %s" % mode)
# Now we have two list of equal length, one contains the requested
# permissions and one with the corresponding operators.
for idx, perms in enumerate(permlist):
# Check if there are illegal characters in the permissions
if PERMS_RE.match(perms):
raise ValueError("bad symbolic permission for mode: %s" % mode)
for user in users:
mode_to_apply = cls._get_octal_mode_from_symbolic_perms(path_stat, user, perms, use_umask)
new_mode = cls._apply_operation_to_mode(user, opers[idx], mode_to_apply, new_mode)
return new_mode
@staticmethod
def _apply_operation_to_mode(user, operator, mode_to_apply, current_mode):
if operator == '=':
if user == 'u':
mask = stat.S_IRWXU | stat.S_ISUID
elif user == 'g':
mask = stat.S_IRWXG | stat.S_ISGID
elif user == 'o':
mask = stat.S_IRWXO | stat.S_ISVTX
# mask out u, g, or o permissions from current_mode and apply new permissions
inverse_mask = mask ^ PERM_BITS
new_mode = (current_mode & inverse_mask) | mode_to_apply
elif operator == '+':
new_mode = current_mode | mode_to_apply
elif operator == '-':
new_mode = current_mode - (current_mode & mode_to_apply)
return new_mode
@staticmethod
def _get_octal_mode_from_symbolic_perms(path_stat, user, perms, use_umask):
prev_mode = stat.S_IMODE(path_stat.st_mode)
is_directory = stat.S_ISDIR(path_stat.st_mode)
has_x_permissions = (prev_mode & EXEC_PERM_BITS) > 0
apply_X_permission = is_directory or has_x_permissions
# Get the umask, if the 'user' part is empty, the effect is as if (a) were
# given, but bits that are set in the umask are not affected.
# We also need the "reversed umask" for masking
umask = os.umask(0)
os.umask(umask)
rev_umask = umask ^ PERM_BITS
# Permission bits constants documented at:
# http://docs.python.org/2/library/stat.html#stat.S_ISUID
if apply_X_permission:
X_perms = {
'u': {'X': stat.S_IXUSR},
'g': {'X': stat.S_IXGRP},
'o': {'X': stat.S_IXOTH},
}
else:
X_perms = {
'u': {'X': 0},
'g': {'X': 0},
'o': {'X': 0},
}
user_perms_to_modes = {
'u': {
'r': rev_umask & stat.S_IRUSR if use_umask else stat.S_IRUSR,
'w': rev_umask & stat.S_IWUSR if use_umask else stat.S_IWUSR,
'x': rev_umask & stat.S_IXUSR if use_umask else stat.S_IXUSR,
's': stat.S_ISUID,
't': 0,
'u': prev_mode & stat.S_IRWXU,
'g': (prev_mode & stat.S_IRWXG) << 3,
'o': (prev_mode & stat.S_IRWXO) << 6},
'g': {
'r': rev_umask & stat.S_IRGRP if use_umask else stat.S_IRGRP,
'w': rev_umask & stat.S_IWGRP if use_umask else stat.S_IWGRP,
'x': rev_umask & stat.S_IXGRP if use_umask else stat.S_IXGRP,
's': stat.S_ISGID,
't': 0,
'u': (prev_mode & stat.S_IRWXU) >> 3,
'g': prev_mode & stat.S_IRWXG,
'o': (prev_mode & stat.S_IRWXO) << 3},
'o': {
'r': rev_umask & stat.S_IROTH if use_umask else stat.S_IROTH,
'w': rev_umask & stat.S_IWOTH if use_umask else stat.S_IWOTH,
'x': rev_umask & stat.S_IXOTH if use_umask else stat.S_IXOTH,
's': 0,
't': stat.S_ISVTX,
'u': (prev_mode & stat.S_IRWXU) >> 6,
'g': (prev_mode & stat.S_IRWXG) >> 3,
'o': prev_mode & stat.S_IRWXO},
}
# Insert X_perms into user_perms_to_modes
for key, value in X_perms.items():
user_perms_to_modes[key].update(value)
def or_reduce(mode, perm):
return mode | user_perms_to_modes[user][perm]
return reduce(or_reduce, perms, 0)
def set_fs_attributes_if_different(self, file_args, changed, diff=None, expand=True):
# set modes owners and context as needed
changed = self.set_context_if_different(
file_args['path'], file_args['secontext'], changed, diff
)
changed = self.set_owner_if_different(
file_args['path'], file_args['owner'], changed, diff, expand
)
changed = self.set_group_if_different(
file_args['path'], file_args['group'], changed, diff, expand
)
changed = self.set_mode_if_different(
file_args['path'], file_args['mode'], changed, diff, expand
)
changed = self.set_attributes_if_different(
file_args['path'], file_args['attributes'], changed, diff, expand
)
return changed
def check_file_absent_if_check_mode(self, file_path):
return self.check_mode and not os.path.exists(file_path)
def set_directory_attributes_if_different(self, file_args, changed, diff=None, expand=True):
return self.set_fs_attributes_if_different(file_args, changed, diff, expand)
def set_file_attributes_if_different(self, file_args, changed, diff=None, expand=True):
return self.set_fs_attributes_if_different(file_args, changed, diff, expand)
def add_path_info(self, kwargs):
'''
for results that are files, supplement the info about the file
in the return path with stats about the file path.
'''
path = kwargs.get('path', kwargs.get('dest', None))
if path is None:
return kwargs
b_path = to_bytes(path, errors='surrogate_or_strict')
if os.path.exists(b_path):
(uid, gid) = self.user_and_group(path)
kwargs['uid'] = uid
kwargs['gid'] = gid
try:
user = pwd.getpwuid(uid)[0]
except KeyError:
user = str(uid)
try:
group = grp.getgrgid(gid)[0]
except KeyError:
group = str(gid)
kwargs['owner'] = user
kwargs['group'] = group
st = os.lstat(b_path)
kwargs['mode'] = '0%03o' % stat.S_IMODE(st[stat.ST_MODE])
# secontext not yet supported
if os.path.islink(b_path):
kwargs['state'] = 'link'
elif os.path.isdir(b_path):
kwargs['state'] = 'directory'
elif os.stat(b_path).st_nlink > 1:
kwargs['state'] = 'hard'
else:
kwargs['state'] = 'file'
if HAVE_SELINUX and self.selinux_enabled():
kwargs['secontext'] = ':'.join(self.selinux_context(path))
kwargs['size'] = st[stat.ST_SIZE]
return kwargs
def _check_locale(self):
'''
Uses the locale module to test the currently set locale
(per the LANG and LC_CTYPE environment settings)
'''
try:
# setting the locale to '' uses the default locale
# as it would be returned by locale.getdefaultlocale()
locale.setlocale(locale.LC_ALL, '')
except locale.Error:
# fallback to the 'C' locale, which may cause unicode
# issues but is preferable to simply failing because
# of an unknown locale
locale.setlocale(locale.LC_ALL, 'C')
os.environ['LANG'] = 'C'
os.environ['LC_ALL'] = 'C'
os.environ['LC_MESSAGES'] = 'C'
except Exception as e:
self.fail_json(msg="An unknown error was encountered while attempting to validate the locale: %s" %
to_native(e), exception=traceback.format_exc())
def _handle_aliases(self, spec=None, param=None, option_prefix=''):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
# this uses exceptions as it happens before we can safely call fail_json
alias_warnings = []
alias_results, self._legal_inputs = handle_aliases(spec, param, alias_warnings=alias_warnings)
for option, alias in alias_warnings:
self._warnings.append('Both option %s and its alias %s are set.' % (option_prefix + option, option_prefix + alias))
deprecated_aliases = []
for i in spec.keys():
if 'deprecated_aliases' in spec[i].keys():
for alias in spec[i]['deprecated_aliases']:
deprecated_aliases.append(alias)
for deprecation in deprecated_aliases:
if deprecation['name'] in param.keys():
self._deprecations.append(
{'msg': "Alias '%s' is deprecated. See the module docs for more information" % deprecation['name'],
'version': deprecation['version']})
return alias_results
def _handle_no_log_values(self, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
try:
self.no_log_values.update(list_no_log_values(spec, param))
except TypeError as te:
self.fail_json(msg="Failure when processing no_log parameters. Module invocation will be hidden. "
"%s" % to_native(te), invocation={'module_args': 'HIDDEN DUE TO FAILURE'})
self._deprecations.extend(list_deprecations(spec, param))
def _check_arguments(self, check_invalid_arguments, spec=None, param=None, legal_inputs=None):
self._syslog_facility = 'LOG_USER'
unsupported_parameters = set()
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
if legal_inputs is None:
legal_inputs = self._legal_inputs
for k in list(param.keys()):
if check_invalid_arguments and k not in legal_inputs:
unsupported_parameters.add(k)
for k in PASS_VARS:
# handle setting internal properties from internal ansible vars
param_key = '_ansible_%s' % k
if param_key in param:
if k in PASS_BOOLS:
setattr(self, PASS_VARS[k][0], self.boolean(param[param_key]))
else:
setattr(self, PASS_VARS[k][0], param[param_key])
# clean up internal top level params:
if param_key in self.params:
del self.params[param_key]
else:
# use defaults if not already set
if not hasattr(self, PASS_VARS[k][0]):
setattr(self, PASS_VARS[k][0], PASS_VARS[k][1])
if unsupported_parameters:
msg = "Unsupported parameters for (%s) module: %s" % (self._name, ', '.join(sorted(list(unsupported_parameters))))
if self._options_context:
msg += " found in %s." % " -> ".join(self._options_context)
msg += " Supported parameters include: %s" % (', '.join(sorted(spec.keys())))
self.fail_json(msg=msg)
if self.check_mode and not self.supports_check_mode:
self.exit_json(skipped=True, msg="remote module (%s) does not support check mode" % self._name)
def _count_terms(self, check, param=None):
if param is None:
param = self.params
return count_terms(check, param)
def _check_mutually_exclusive(self, spec, param=None):
if param is None:
param = self.params
try:
check_mutually_exclusive(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_one_of(self, spec, param=None):
if spec is None:
return
if param is None:
param = self.params
try:
check_required_one_of(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_together(self, spec, param=None):
if spec is None:
return
if param is None:
param = self.params
try:
check_required_together(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_by(self, spec, param=None):
if spec is None:
return
if param is None:
param = self.params
try:
check_required_by(spec, param)
except TypeError as e:
self.fail_json(msg=to_native(e))
def _check_required_arguments(self, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
try:
check_required_arguments(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_if(self, spec, param=None):
''' ensure that parameters which are conditionally required are present '''
if spec is None:
return
if param is None:
param = self.params
try:
check_required_if(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_argument_values(self, spec=None, param=None):
''' ensure all arguments have the requested values, and there are no stray arguments '''
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
choices = v.get('choices', None)
if choices is None:
continue
if isinstance(choices, SEQUENCETYPE) and not isinstance(choices, (binary_type, text_type)):
if k in param:
# Allow one or more when type='list' param with choices
if isinstance(param[k], list):
diff_list = ", ".join([item for item in param[k] if item not in choices])
if diff_list:
choices_str = ", ".join([to_native(c) for c in choices])
msg = "value of %s must be one or more of: %s. Got no match for: %s" % (k, choices_str, diff_list)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
elif param[k] not in choices:
# PyYaml converts certain strings to bools. If we can unambiguously convert back, do so before checking
# the value. If we can't figure this out, module author is responsible.
lowered_choices = None
if param[k] == 'False':
lowered_choices = lenient_lowercase(choices)
overlap = BOOLEANS_FALSE.intersection(choices)
if len(overlap) == 1:
# Extract from a set
(param[k],) = overlap
if param[k] == 'True':
if lowered_choices is None:
lowered_choices = lenient_lowercase(choices)
overlap = BOOLEANS_TRUE.intersection(choices)
if len(overlap) == 1:
(param[k],) = overlap
if param[k] not in choices:
choices_str = ", ".join([to_native(c) for c in choices])
msg = "value of %s must be one of: %s, got: %s" % (k, choices_str, param[k])
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
else:
msg = "internal error: choices for argument %s are not iterable: %s" % (k, choices)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def safe_eval(self, value, locals=None, include_exceptions=False):
return safe_eval(value, locals, include_exceptions)
def _check_type_str(self, value):
opts = {
'error': False,
'warn': False,
'ignore': True
}
# Ignore, warn, or error when converting to a string.
allow_conversion = opts.get(self._string_conversion_action, True)
try:
return check_type_str(value, allow_conversion)
except TypeError:
common_msg = 'quote the entire value to ensure it does not change.'
if self._string_conversion_action == 'error':
msg = common_msg.capitalize()
raise TypeError(to_native(msg))
elif self._string_conversion_action == 'warn':
msg = ('The value {0!r} (type {0.__class__.__name__}) in a string field was converted to {1!r} (type string). '
'If this does not look like what you expect, {2}').format(value, to_text(value), common_msg)
self.warn(to_native(msg))
return to_native(value, errors='surrogate_or_strict')
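# Illustrative note: with _string_conversion_action set to 'warn', a value
# such as 2.0 passed to a str option is converted to '2.0' with a warning;
# 'error' raises a TypeError instead, and 'ignore' converts silently.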
def _check_type_list(self, value):
return check_type_list(value)
def _check_type_dict(self, value):
return check_type_dict(value)
def _check_type_bool(self, value):
return check_type_bool(value)
def _check_type_int(self, value):
return check_type_int(value)
def _check_type_float(self, value):
return check_type_float(value)
def _check_type_path(self, value):
return check_type_path(value)
def _check_type_jsonarg(self, value):
return check_type_jsonarg(value)
def _check_type_raw(self, value):
return check_type_raw(value)
def _check_type_bytes(self, value):
return check_type_bytes(value)
def _check_type_bits(self, value):
return check_type_bits(value)
def _handle_options(self, argument_spec=None, params=None, prefix=''):
''' deal with options to create sub spec '''
if argument_spec is None:
argument_spec = self.argument_spec
if params is None:
params = self.params
for (k, v) in argument_spec.items():
wanted = v.get('type', None)
if wanted == 'dict' or (wanted == 'list' and v.get('elements', '') == 'dict'):
spec = v.get('options', None)
if v.get('apply_defaults', False):
if spec is not None:
if params.get(k) is None:
params[k] = {}
else:
continue
elif spec is None or k not in params or params[k] is None:
continue
self._options_context.append(k)
if isinstance(params[k], dict):
elements = [params[k]]
else:
elements = params[k]
for idx, param in enumerate(elements):
if not isinstance(param, dict):
self.fail_json(msg="value of %s must be of type dict or list of dict" % k)
new_prefix = prefix + k
if wanted == 'list':
new_prefix += '[%d]' % idx
new_prefix += '.'
self._set_fallbacks(spec, param)
options_aliases = self._handle_aliases(spec, param, option_prefix=new_prefix)
options_legal_inputs = list(spec.keys()) + list(options_aliases.keys())
self._check_arguments(self.check_invalid_arguments, spec, param, options_legal_inputs)
# check exclusive early
if not self.bypass_checks:
self._check_mutually_exclusive(v.get('mutually_exclusive', None), param)
self._set_defaults(pre=True, spec=spec, param=param)
if not self.bypass_checks:
self._check_required_arguments(spec, param)
self._check_argument_types(spec, param)
self._check_argument_values(spec, param)
self._check_required_together(v.get('required_together', None), param)
self._check_required_one_of(v.get('required_one_of', None), param)
self._check_required_if(v.get('required_if', None), param)
self._check_required_by(v.get('required_by', None), param)
self._set_defaults(pre=False, spec=spec, param=param)
# handle multi level options (sub argspec)
self._handle_options(spec, param, new_prefix)
self._options_context.pop()
def _get_wanted_type(self, wanted, k):
if not callable(wanted):
if wanted is None:
# Mostly we want to default to str.
# For values set to None explicitly, return None instead as
# that allows a user to unset a parameter
wanted = 'str'
try:
type_checker = self._CHECK_ARGUMENT_TYPES_DISPATCHER[wanted]
except KeyError:
self.fail_json(msg="implementation error: unknown type %s requested for %s" % (wanted, k))
else:
# set the type_checker to the callable, and reset wanted to the callable's name (or type if it doesn't have one, ala MagicMock)
type_checker = wanted
wanted = getattr(wanted, '__name__', to_native(type(wanted)))
return type_checker, wanted
def _handle_elements(self, wanted, param, values):
type_checker, wanted_name = self._get_wanted_type(wanted, param)
validated_params = []
for value in values:
try:
validated_params.append(type_checker(value))
except (TypeError, ValueError) as e:
msg = "Elements value for option %s" % param
if self._options_context:
msg += " found in '%s'" % " -> ".join(self._options_context)
msg += " is of type %s and we were unable to convert to %s: %s" % (type(value), wanted_name, to_native(e))
self.fail_json(msg=msg)
return validated_params
def _check_argument_types(self, spec=None, param=None):
''' ensure all arguments have the requested type '''
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
wanted = v.get('type', None)
if k not in param:
continue
value = param[k]
if value is None:
continue
type_checker, wanted_name = self._get_wanted_type(wanted, k)
try:
param[k] = type_checker(value)
wanted_elements = v.get('elements', None)
if wanted_elements:
if wanted != 'list' or not isinstance(param[k], list):
msg = "Invalid type %s for option '%s'" % (wanted_name, param)
if self._options_context:
msg += " found in '%s'." % " -> ".join(self._options_context)
msg += ", elements value check is supported only with 'list' type"
self.fail_json(msg=msg)
param[k] = self._handle_elements(wanted_elements, k, param[k])
except (TypeError, ValueError) as e:
msg = "argument %s is of type %s" % (k, type(value))
if self._options_context:
msg += " found in '%s'." % " -> ".join(self._options_context)
msg += " and we were unable to convert to %s: %s" % (wanted_name, to_native(e))
self.fail_json(msg=msg)
def _set_defaults(self, pre=True, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
default = v.get('default', None)
if pre is True:
# this prevents setting defaults on required items
if default is not None and k not in param:
param[k] = default
else:
# make sure things without a default still get set None
if k not in param:
param[k] = default
def _set_fallbacks(self, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
fallback = v.get('fallback', (None,))
fallback_strategy = fallback[0]
fallback_args = []
fallback_kwargs = {}
if k not in param and fallback_strategy is not None:
for item in fallback[1:]:
if isinstance(item, dict):
fallback_kwargs = item
else:
fallback_args = item
try:
param[k] = fallback_strategy(*fallback_args, **fallback_kwargs)
except AnsibleFallbackNotFound:
continue
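# Example (illustrative, mirroring the env_fallback usage in the unit tests):
# baz=dict(fallback=(env_fallback, ['BAZ'])) makes fallback_strategy the
# env_fallback callable and fallback_args ['BAZ'], so param['baz'] is filled
# from the BAZ environment variable whenever the option was not supplied.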
def _load_params(self):
''' read the input and set the params attribute.
This method is for backwards compatibility. The guts of the function
were moved out in 2.1 so that custom modules could read the parameters.
'''
# debug overrides to read args from file or cmdline
self.params = _load_params()
def _log_to_syslog(self, msg):
if HAS_SYSLOG:
module = 'ansible-%s' % self._name
facility = getattr(syslog, self._syslog_facility, syslog.LOG_USER)
syslog.openlog(str(module), 0, facility)
syslog.syslog(syslog.LOG_INFO, msg)
def debug(self, msg):
if self._debug:
self.log('[debug] %s' % msg)
def log(self, msg, log_args=None):
if not self.no_log:
if log_args is None:
log_args = dict()
module = 'ansible-%s' % self._name
if isinstance(module, binary_type):
module = module.decode('utf-8', 'replace')
# 6655 - allow for accented characters
if not isinstance(msg, (binary_type, text_type)):
raise TypeError("msg should be a string (got %s)" % type(msg))
# We want journal to always take text type
# syslog takes bytes on py2, text type on py3
if isinstance(msg, binary_type):
journal_msg = remove_values(msg.decode('utf-8', 'replace'), self.no_log_values)
else:
# TODO: surrogateescape is a danger here on Py3
journal_msg = remove_values(msg, self.no_log_values)
if PY3:
syslog_msg = journal_msg
else:
syslog_msg = journal_msg.encode('utf-8', 'replace')
if has_journal:
journal_args = [("MODULE", os.path.basename(__file__))]
for arg in log_args:
journal_args.append((arg.upper(), str(log_args[arg])))
try:
if HAS_SYSLOG:
# If syslog_facility specified, it needs to convert
# from the facility name to the facility code, and
# set it as SYSLOG_FACILITY argument of journal.send()
facility = getattr(syslog,
self._syslog_facility,
syslog.LOG_USER) >> 3
journal.send(MESSAGE=u"%s %s" % (module, journal_msg),
SYSLOG_FACILITY=facility,
**dict(journal_args))
else:
journal.send(MESSAGE=u"%s %s" % (module, journal_msg),
**dict(journal_args))
except IOError:
# fall back to syslog since logging to journal failed
self._log_to_syslog(syslog_msg)
else:
self._log_to_syslog(syslog_msg)
def _log_invocation(self):
''' log that ansible ran the module '''
# TODO: generalize a separate log function and make log_invocation use it
# Sanitize possible password argument when logging.
log_args = dict()
for param in self.params:
canon = self.aliases.get(param, param)
arg_opts = self.argument_spec.get(canon, {})
no_log = arg_opts.get('no_log', False)
if self.boolean(no_log):
log_args[param] = 'NOT_LOGGING_PARAMETER'
# try to capture all passwords/passphrase named fields missed by no_log
elif PASSWORD_MATCH.search(param) and arg_opts.get('type', 'str') != 'bool' and not arg_opts.get('choices', False):
# skip boolean and enums as they are about 'password' state
log_args[param] = 'NOT_LOGGING_PASSWORD'
self.warn('Module did not set no_log for %s' % param)
else:
param_val = self.params[param]
if not isinstance(param_val, (text_type, binary_type)):
param_val = str(param_val)
elif isinstance(param_val, text_type):
param_val = param_val.encode('utf-8')
log_args[param] = heuristic_log_sanitize(param_val, self.no_log_values)
msg = ['%s=%s' % (to_native(arg), to_native(val)) for arg, val in log_args.items()]
if msg:
msg = 'Invoked with %s' % ' '.join(msg)
else:
msg = 'Invoked'
self.log(msg, log_args=log_args)
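# Illustrative note: a parameter whose name matches PASSWORD_MATCH (e.g. the
# hypothetical 'pass_interval') is logged as 'NOT_LOGGING_PASSWORD' with a
# warning unless the author set a truthy no_log; an explicit no_log=False is
# currently not honored here, which is the subject of the feature request below.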
def _set_cwd(self):
try:
cwd = os.getcwd()
if not os.access(cwd, os.F_OK | os.R_OK):
raise Exception()
return cwd
except Exception:
# we don't have access to the cwd, probably because of sudo.
# Try and move to a neutral location to prevent errors
for cwd in [self.tmpdir, os.path.expandvars('$HOME'), tempfile.gettempdir()]:
try:
if os.access(cwd, os.F_OK | os.R_OK):
os.chdir(cwd)
return cwd
except Exception:
pass
# we won't error here, as it may *not* be a problem,
# and we don't want to break modules unnecessarily
return None
def get_bin_path(self, arg, required=False, opt_dirs=None):
'''
Find system executable in PATH.
:param arg: The executable to find.
:param required: if executable is not found and required is ``True``, fail_json
:param opt_dirs: optional list of directories to search in addition to ``PATH``
:returns: if found return full path; otherwise return None
'''
bin_path = None
try:
bin_path = get_bin_path(arg, required, opt_dirs)
except ValueError as e:
self.fail_json(msg=to_text(e))
return bin_path
def boolean(self, arg):
'''Convert the argument to a boolean'''
if arg is None:
return arg
try:
return boolean(arg)
except TypeError as e:
self.fail_json(msg=to_native(e))
def jsonify(self, data):
try:
return jsonify(data)
except UnicodeError as e:
self.fail_json(msg=to_text(e))
def from_json(self, data):
return json.loads(data)
def add_cleanup_file(self, path):
if path not in self.cleanup_files:
self.cleanup_files.append(path)
def do_cleanup_files(self):
for path in self.cleanup_files:
self.cleanup(path)
def _return_formatted(self, kwargs):
self.add_path_info(kwargs)
if 'invocation' not in kwargs:
kwargs['invocation'] = {'module_args': self.params}
if 'warnings' in kwargs:
if isinstance(kwargs['warnings'], list):
for w in kwargs['warnings']:
self.warn(w)
else:
self.warn(kwargs['warnings'])
if self._warnings:
kwargs['warnings'] = self._warnings
if 'deprecations' in kwargs:
if isinstance(kwargs['deprecations'], list):
for d in kwargs['deprecations']:
if isinstance(d, SEQUENCETYPE) and len(d) == 2:
self.deprecate(d[0], version=d[1])
elif isinstance(d, Mapping):
self.deprecate(d['msg'], version=d.get('version', None))
else:
self.deprecate(d)
else:
self.deprecate(kwargs['deprecations'])
if self._deprecations:
kwargs['deprecations'] = self._deprecations
kwargs = remove_values(kwargs, self.no_log_values)
print('\n%s' % self.jsonify(kwargs))
def exit_json(self, **kwargs):
''' return from the module, without error '''
self.do_cleanup_files()
self._return_formatted(kwargs)
sys.exit(0)
def fail_json(self, **kwargs):
''' return from the module, with an error message '''
if 'msg' not in kwargs:
raise AssertionError("implementation error -- msg to explain the error is required")
kwargs['failed'] = True
# Add traceback if debug or high verbosity and it is missing
# NOTE: Badly named as exception, it really always has been a traceback
if 'exception' not in kwargs and sys.exc_info()[2] and (self._debug or self._verbosity >= 3):
if PY2:
# On Python 2 this is the last (stack frame) exception and as such may be unrelated to the failure
kwargs['exception'] = 'WARNING: The below traceback may *not* be related to the actual failure.\n' +\
''.join(traceback.format_tb(sys.exc_info()[2]))
else:
kwargs['exception'] = ''.join(traceback.format_tb(sys.exc_info()[2]))
self.do_cleanup_files()
self._return_formatted(kwargs)
sys.exit(1)
def fail_on_missing_params(self, required_params=None):
if not required_params:
return
try:
check_missing_parameters(self.params, required_params)
except TypeError as e:
self.fail_json(msg=to_native(e))
def digest_from_file(self, filename, algorithm):
''' Return hex digest of local file for a digest_method specified by name, or None if file is not present. '''
b_filename = to_bytes(filename, errors='surrogate_or_strict')
if not os.path.exists(b_filename):
return None
if os.path.isdir(b_filename):
self.fail_json(msg="attempted to take checksum of directory: %s" % filename)
# preserve old behaviour where the third parameter was a hash algorithm object
if hasattr(algorithm, 'hexdigest'):
digest_method = algorithm
else:
try:
digest_method = AVAILABLE_HASH_ALGORITHMS[algorithm]()
except KeyError:
self.fail_json(msg="Could not hash file '%s' with algorithm '%s'. Available algorithms: %s" %
(filename, algorithm, ', '.join(AVAILABLE_HASH_ALGORITHMS)))
blocksize = 64 * 1024
infile = open(os.path.realpath(b_filename), 'rb')
block = infile.read(blocksize)
while block:
digest_method.update(block)
block = infile.read(blocksize)
infile.close()
return digest_method.hexdigest()
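# Usage sketch (illustrative): module.digest_from_file('/etc/hosts', 'sha256')
# returns the file's hex digest, or None when the path does not exist.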
def md5(self, filename):
''' Return MD5 hex digest of local file using digest_from_file().
Do not use this function unless you have no other choice for:
1) Optional backwards compatibility
2) Compatibility with a third party protocol
This function will not work on systems complying with FIPS-140-2.
Most uses of this function can use the module.sha1 function instead.
'''
if 'md5' not in AVAILABLE_HASH_ALGORITHMS:
raise ValueError('MD5 not available. Possibly running in FIPS mode')
return self.digest_from_file(filename, 'md5')
def sha1(self, filename):
''' Return SHA1 hex digest of local file using digest_from_file(). '''
return self.digest_from_file(filename, 'sha1')
def sha256(self, filename):
''' Return SHA-256 hex digest of local file using digest_from_file(). '''
return self.digest_from_file(filename, 'sha256')
def backup_local(self, fn):
'''make a date-marked backup of the specified file, return True or False on success or failure'''
backupdest = ''
if os.path.exists(fn):
# backups named basename.PID.YYYY-MM-DD@HH:MM:SS~
ext = time.strftime("%Y-%m-%d@%H:%M:%S~", time.localtime(time.time()))
backupdest = '%s.%s.%s' % (fn, os.getpid(), ext)
try:
self.preserved_copy(fn, backupdest)
except (shutil.Error, IOError) as e:
self.fail_json(msg='Could not make backup of %s to %s: %s' % (fn, backupdest, to_native(e)))
return backupdest
def cleanup(self, tmpfile):
if os.path.exists(tmpfile):
try:
os.unlink(tmpfile)
except OSError as e:
sys.stderr.write("could not cleanup %s: %s" % (tmpfile, to_native(e)))
def preserved_copy(self, src, dest):
"""Copy a file with preserved ownership, permissions and context"""
# shutil.copy2(src, dst)
# Similar to shutil.copy(), but metadata is copied as well - in fact,
# this is just shutil.copy() followed by copystat(). This is similar
# to the Unix command cp -p.
#
# shutil.copystat(src, dst)
# Copy the permission bits, last access time, last modification time,
# and flags from src to dst. The file contents, owner, and group are
# unaffected. src and dst are path names given as strings.
shutil.copy2(src, dest)
# Set the context
if self.selinux_enabled():
context = self.selinux_context(src)
self.set_context_if_different(dest, context, False)
# chown it
try:
dest_stat = os.stat(src)
tmp_stat = os.stat(dest)
if dest_stat and (tmp_stat.st_uid != dest_stat.st_uid or tmp_stat.st_gid != dest_stat.st_gid):
os.chown(dest, dest_stat.st_uid, dest_stat.st_gid)
except OSError as e:
if e.errno != errno.EPERM:
raise
# Set the attributes
current_attribs = self.get_file_attributes(src)
current_attribs = current_attribs.get('attr_flags', '')
self.set_attributes_if_different(dest, current_attribs, True)
def atomic_move(self, src, dest, unsafe_writes=False):
'''atomically move src to dest, copying attributes from dest; calls fail_json on failure.
It uses os.rename where possible, as that is an atomic operation; the rest of the function
works around limitations and corner cases and ensures the selinux context is saved if possible'''
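# Sketch of the flow below (illustrative): try os.rename(src, dest) first; on
# EPERM/EXDEV/EACCES/ETXTBSY/EBUSY fall back to copying into a temp file in
# the destination directory and renaming that, preserving mode, ownership and
# selinux context along the way.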
context = None
dest_stat = None
b_src = to_bytes(src, errors='surrogate_or_strict')
b_dest = to_bytes(dest, errors='surrogate_or_strict')
if os.path.exists(b_dest):
try:
dest_stat = os.stat(b_dest)
# copy mode and ownership
os.chmod(b_src, dest_stat.st_mode & PERM_BITS)
os.chown(b_src, dest_stat.st_uid, dest_stat.st_gid)
# try to copy flags if possible
if hasattr(os, 'chflags') and hasattr(dest_stat, 'st_flags'):
try:
os.chflags(b_src, dest_stat.st_flags)
except OSError as e:
for err in 'EOPNOTSUPP', 'ENOTSUP':
if hasattr(errno, err) and e.errno == getattr(errno, err):
break
else:
raise
except OSError as e:
if e.errno != errno.EPERM:
raise
if self.selinux_enabled():
context = self.selinux_context(dest)
else:
if self.selinux_enabled():
context = self.selinux_default_context(dest)
creating = not os.path.exists(b_dest)
try:
# Optimistically try a rename, solves some corner cases and can avoid useless work, throws exception if not atomic.
os.rename(b_src, b_dest)
except (IOError, OSError) as e:
if e.errno not in [errno.EPERM, errno.EXDEV, errno.EACCES, errno.ETXTBSY, errno.EBUSY]:
# only try workarounds for errno 18 (cross device), 1 (not permitted), 13 (permission denied),
# 16 (device or resource busy) and 26 (text file busy) which happens on vagrant synced folders and other 'exotic' non posix file systems
self.fail_json(msg='Could not replace file: %s to %s: %s' % (src, dest, to_native(e)),
exception=traceback.format_exc())
else:
# Use bytes here. In the shippable CI, this fails with
# a UnicodeError with surrogateescape'd strings for an unknown
# reason (doesn't happen in a local Ubuntu16.04 VM)
b_dest_dir = os.path.dirname(b_dest)
b_suffix = os.path.basename(b_dest)
error_msg = None
tmp_dest_name = None
try:
tmp_dest_fd, tmp_dest_name = tempfile.mkstemp(prefix=b'.ansible_tmp',
dir=b_dest_dir, suffix=b_suffix)
except (OSError, IOError) as e:
error_msg = 'The destination directory (%s) is not writable by the current user. Error was: %s' % (os.path.dirname(dest), to_native(e))
except TypeError:
# We expect that this is happening because python3.4.x and
# below can't handle byte strings in mkstemp(). Traceback
# would end in something like:
# file = _os.path.join(dir, pre + name + suf)
# TypeError: can't concat bytes to str
error_msg = ('Failed creating tmp file for atomic move. This usually happens when using Python3 less than Python3.5. '
'Please use Python2.x or Python3.5 or greater.')
finally:
if error_msg:
if unsafe_writes:
self._unsafe_writes(b_src, b_dest)
else:
self.fail_json(msg=error_msg, exception=traceback.format_exc())
if tmp_dest_name:
b_tmp_dest_name = to_bytes(tmp_dest_name, errors='surrogate_or_strict')
try:
try:
# close tmp file handle before file operations to prevent text file busy errors on vboxfs synced folders (windows host)
os.close(tmp_dest_fd)
# leaves tmp file behind when sudo and not root
try:
shutil.move(b_src, b_tmp_dest_name)
except OSError:
# cleanup will happen by 'rm' of tmpdir
# copy2 will preserve some metadata
shutil.copy2(b_src, b_tmp_dest_name)
if self.selinux_enabled():
self.set_context_if_different(
b_tmp_dest_name, context, False)
try:
tmp_stat = os.stat(b_tmp_dest_name)
if dest_stat and (tmp_stat.st_uid != dest_stat.st_uid or tmp_stat.st_gid != dest_stat.st_gid):
os.chown(b_tmp_dest_name, dest_stat.st_uid, dest_stat.st_gid)
except OSError as e:
if e.errno != errno.EPERM:
raise
try:
os.rename(b_tmp_dest_name, b_dest)
except (shutil.Error, OSError, IOError) as e:
if unsafe_writes and e.errno == errno.EBUSY:
self._unsafe_writes(b_tmp_dest_name, b_dest)
else:
self.fail_json(msg='Unable to make %s into %s, failed final rename from %s: %s' %
(src, dest, b_tmp_dest_name, to_native(e)),
exception=traceback.format_exc())
except (shutil.Error, OSError, IOError) as e:
self.fail_json(msg='Failed to replace file: %s to %s: %s' % (src, dest, to_native(e)),
exception=traceback.format_exc())
finally:
self.cleanup(b_tmp_dest_name)
if creating:
# make sure the file has the correct permissions
# based on the current value of umask
umask = os.umask(0)
os.umask(umask)
os.chmod(b_dest, DEFAULT_PERM & ~umask)
try:
os.chown(b_dest, os.geteuid(), os.getegid())
except OSError:
# We're okay with trying our best here. If the user is not
# root (or old Unices) they won't be able to chown.
pass
if self.selinux_enabled():
# rename might not preserve context
self.set_context_if_different(dest, context, False)
def _unsafe_writes(self, src, dest):
# sadly there are some situations where we cannot ensure atomicity, but only if
# the user insists and we get the appropriate error we update the file unsafely
try:
out_dest = in_src = None
try:
out_dest = open(dest, 'wb')
in_src = open(src, 'rb')
shutil.copyfileobj(in_src, out_dest)
finally: # ensuring files are closed in a 2.4-compatible way
if out_dest:
out_dest.close()
if in_src:
in_src.close()
except (shutil.Error, OSError, IOError) as e:
self.fail_json(msg='Could not write data to file (%s) from (%s): %s' % (dest, src, to_native(e)),
exception=traceback.format_exc())
def _read_from_pipes(self, rpipes, rfds, file_descriptor):
data = b('')
if file_descriptor in rfds:
data = os.read(file_descriptor.fileno(), self.get_buffer_size(file_descriptor))
if data == b(''):
rpipes.remove(file_descriptor)
return data
def _clean_args(self, args):
if not self._clean:
# create a printable version of the command for use in reporting later,
# which strips out things like passwords from the args list
to_clean_args = args
if PY2:
if isinstance(args, text_type):
to_clean_args = to_bytes(args)
else:
if isinstance(args, binary_type):
to_clean_args = to_text(args)
if isinstance(args, (text_type, binary_type)):
to_clean_args = shlex.split(to_clean_args)
clean_args = []
is_passwd = False
for arg in (to_native(a) for a in to_clean_args):
if is_passwd:
is_passwd = False
clean_args.append('********')
continue
if PASSWD_ARG_RE.match(arg):
sep_idx = arg.find('=')
if sep_idx > -1:
clean_args.append('%s=********' % arg[:sep_idx])
continue
else:
is_passwd = True
arg = heuristic_log_sanitize(arg, self.no_log_values)
clean_args.append(arg)
self._clean = ' '.join(shlex_quote(arg) for arg in clean_args)
return self._clean
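# Illustrative example: an args list like ['mysql', '--password=s3cret'] is
# reported as "mysql --password=********"; a bare password flag instead causes
# the *following* argument to be replaced with '********'.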
def _restore_signal_handlers(self):
# Reset SIGPIPE to SIG_DFL, otherwise in Python2.7 it gets ignored in subprocesses.
if PY2 and sys.platform != 'win32':
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
def run_command(self, args, check_rc=False, close_fds=True, executable=None, data=None, binary_data=False, path_prefix=None, cwd=None,
use_unsafe_shell=False, prompt_regex=None, environ_update=None, umask=None, encoding='utf-8', errors='surrogate_or_strict',
expand_user_and_vars=True, pass_fds=None, before_communicate_callback=None):
'''
Execute a command, returns rc, stdout, and stderr.
:arg args: is the command to run
* If args is a list, the command will be run with shell=False.
* If args is a string and use_unsafe_shell=False it will split args to a list and run with shell=False
* If args is a string and use_unsafe_shell=True it runs with shell=True.
:kw check_rc: Whether to call fail_json in case of non zero RC.
Default False
:kw close_fds: See documentation for subprocess.Popen(). Default True
:kw executable: See documentation for subprocess.Popen(). Default None
:kw data: If given, information to write to the stdin of the command
:kw binary_data: If False, append a newline to the data. Default False
:kw path_prefix: If given, additional path to find the command in.
This adds to the PATH environment variable so helper commands in
the same directory can also be found
:kw cwd: If given, working directory to run the command inside
:kw use_unsafe_shell: See `args` parameter. Default False
:kw prompt_regex: Regex string (not a compiled regex) which can be
used to detect prompts in the stdout which would otherwise cause
the execution to hang (especially if no input data is specified)
:kw environ_update: dictionary to *update* os.environ with
:kw umask: Umask to be used when running the command. Default None
:kw encoding: Since we return native strings, on python3 we need to
know the encoding to use to transform from bytes to text. If you
want to always get bytes back, use encoding=None. The default is
"utf-8". This does not affect transformation of strings given as
args.
:kw errors: Since we return native strings, on python3 we need to
transform stdout and stderr from bytes to text. If the bytes are
undecodable in the ``encoding`` specified, then use this error
handler to deal with them. The default is ``surrogate_or_strict``
which means that the bytes will be decoded using the
surrogateescape error handler if available (available on all
python3 versions we support) otherwise a UnicodeError traceback
will be raised. This does not affect transformations of strings
given as args.
:kw expand_user_and_vars: When ``use_unsafe_shell=False`` this argument
dictates whether ``~`` is expanded in paths and environment variables
are expanded before running the command. When ``True`` a string such as
``$SHELL`` will be expanded regardless of escaping. When ``False`` and
``use_unsafe_shell=False`` no path or variable expansion will be done.
:kw pass_fds: When running on python3 this argument
dictates which file descriptors should be passed
to an underlying ``Popen`` constructor.
:kw before_communicate_callback: This function will be called
after ``Popen`` object will be created
but before communicating to the process.
(``Popen`` object will be passed to callback as a first argument)
:returns: A 3-tuple of return code (integer), stdout (native string),
and stderr (native string). On python2, stdout and stderr are both
byte strings. On python3, stdout and stderr are text strings converted
according to the encoding and errors parameters. If you want byte
strings on python3, use encoding=None to turn decoding to text off.
'''
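# Usage sketch (illustrative, assuming an AnsibleModule instance 'module'):
#   rc, out, err = module.run_command(['/usr/bin/git', 'status'], check_rc=True)
#   rc, out, err = module.run_command('echo $HOME', use_unsafe_shell=True)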
# used by clean args later on
self._clean = None
if not isinstance(args, (list, binary_type, text_type)):
msg = "Argument 'args' to run_command must be list or string"
self.fail_json(rc=257, cmd=args, msg=msg)
shell = False
if use_unsafe_shell:
# stringify args for unsafe/direct shell usage
if isinstance(args, list):
args = b" ".join([to_bytes(shlex_quote(x), errors='surrogate_or_strict') for x in args])
else:
args = to_bytes(args, errors='surrogate_or_strict')
# not set explicitly, check if set by controller
if executable:
executable = to_bytes(executable, errors='surrogate_or_strict')
args = [executable, b'-c', args]
elif self._shell not in (None, '/bin/sh'):
args = [to_bytes(self._shell, errors='surrogate_or_strict'), b'-c', args]
else:
shell = True
else:
# ensure args are a list
if isinstance(args, (binary_type, text_type)):
# On python2.6 and below, shlex has problems with text type
# On python3, shlex needs a text type.
if PY2:
args = to_bytes(args, errors='surrogate_or_strict')
elif PY3:
args = to_text(args, errors='surrogateescape')
args = shlex.split(args)
# expand ``~`` in paths, and all environment vars
if expand_user_and_vars:
args = [to_bytes(os.path.expanduser(os.path.expandvars(x)), errors='surrogate_or_strict') for x in args if x is not None]
else:
args = [to_bytes(x, errors='surrogate_or_strict') for x in args if x is not None]
prompt_re = None
if prompt_regex:
if isinstance(prompt_regex, text_type):
if PY3:
prompt_regex = to_bytes(prompt_regex, errors='surrogateescape')
elif PY2:
prompt_regex = to_bytes(prompt_regex, errors='surrogate_or_strict')
try:
prompt_re = re.compile(prompt_regex, re.MULTILINE)
except re.error:
self.fail_json(msg="invalid prompt regular expression given to run_command")
rc = 0
msg = None
st_in = None
# Manipulate the environ we'll send to the new process
old_env_vals = {}
# We can set this from both an attribute and per call
for key, val in self.run_command_environ_update.items():
old_env_vals[key] = os.environ.get(key, None)
os.environ[key] = val
if environ_update:
for key, val in environ_update.items():
old_env_vals[key] = os.environ.get(key, None)
os.environ[key] = val
if path_prefix:
old_env_vals['PATH'] = os.environ['PATH']
os.environ['PATH'] = "%s:%s" % (path_prefix, os.environ['PATH'])
# If using test-module.py and explode, the remote lib path will resemble:
# /tmp/test_module_scratch/debug_dir/ansible/module_utils/basic.py
# If using ansible or ansible-playbook with a remote system:
# /tmp/ansible_vmweLQ/ansible_modlib.zip/ansible/module_utils/basic.py
# Clean out python paths set by ansiballz
if 'PYTHONPATH' in os.environ:
pypaths = os.environ['PYTHONPATH'].split(':')
pypaths = [x for x in pypaths
if not x.endswith('/ansible_modlib.zip') and
not x.endswith('/debug_dir')]
os.environ['PYTHONPATH'] = ':'.join(pypaths)
if not os.environ['PYTHONPATH']:
del os.environ['PYTHONPATH']
if data:
st_in = subprocess.PIPE
kwargs = dict(
executable=executable,
shell=shell,
close_fds=close_fds,
stdin=st_in,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
preexec_fn=self._restore_signal_handlers,
)
if PY3 and pass_fds:
kwargs["pass_fds"] = pass_fds
# store the pwd
prev_dir = os.getcwd()
# make sure we're in the right working directory
if cwd and os.path.isdir(cwd):
cwd = to_bytes(os.path.abspath(os.path.expanduser(cwd)), errors='surrogate_or_strict')
kwargs['cwd'] = cwd
try:
os.chdir(cwd)
except (OSError, IOError) as e:
self.fail_json(rc=e.errno, msg="Could not open %s, %s" % (cwd, to_native(e)),
exception=traceback.format_exc())
old_umask = None
if umask:
old_umask = os.umask(umask)
try:
if self._debug:
self.log('Executing: ' + self._clean_args(args))
cmd = subprocess.Popen(args, **kwargs)
if before_communicate_callback:
before_communicate_callback(cmd)
# the communication logic here is essentially taken from that
# of the _communicate() function in ssh.py
stdout = b('')
stderr = b('')
rpipes = [cmd.stdout, cmd.stderr]
if data:
if not binary_data:
data += '\n'
if isinstance(data, text_type):
data = to_bytes(data)
cmd.stdin.write(data)
cmd.stdin.close()
while True:
rfds, wfds, efds = select.select(rpipes, [], rpipes, 1)
stdout += self._read_from_pipes(rpipes, rfds, cmd.stdout)
stderr += self._read_from_pipes(rpipes, rfds, cmd.stderr)
# if we're checking for prompts, do it now
if prompt_re:
if prompt_re.search(stdout) and not data:
if encoding:
stdout = to_native(stdout, encoding=encoding, errors=errors)
return (257, stdout, "A prompt was encountered while running a command, but no input data was specified")
# only break out if no pipes are left to read or
# the pipes are completely read and
# the process is terminated
if (not rpipes or not rfds) and cmd.poll() is not None:
break
# No pipes are left to read but process is not yet terminated
# Only then it is safe to wait for the process to be finished
# NOTE: Actually cmd.poll() is always None here if rpipes is empty
elif not rpipes and cmd.poll() is None:
cmd.wait()
# The process is terminated. Since no pipes to read from are
# left, there is no need to call select() again.
break
cmd.stdout.close()
cmd.stderr.close()
rc = cmd.returncode
except (OSError, IOError) as e:
self.log("Error Executing CMD:%s Exception:%s" % (self._clean_args(args), to_native(e)))
self.fail_json(rc=e.errno, msg=to_native(e), cmd=self._clean_args(args))
except Exception as e:
self.log("Error Executing CMD:%s Exception:%s" % (self._clean_args(args), to_native(traceback.format_exc())))
self.fail_json(rc=257, msg=to_native(e), exception=traceback.format_exc(), cmd=self._clean_args(args))
# Restore env settings
for key, val in old_env_vals.items():
if val is None:
del os.environ[key]
else:
os.environ[key] = val
if old_umask:
os.umask(old_umask)
if rc != 0 and check_rc:
msg = heuristic_log_sanitize(stderr.rstrip(), self.no_log_values)
self.fail_json(cmd=self._clean_args(args), rc=rc, stdout=stdout, stderr=stderr, msg=msg)
# reset the pwd
os.chdir(prev_dir)
if encoding is not None:
return (rc, to_native(stdout, encoding=encoding, errors=errors),
to_native(stderr, encoding=encoding, errors=errors))
return (rc, stdout, stderr)
def append_to_file(self, filename, str):
filename = os.path.expandvars(os.path.expanduser(filename))
fh = open(filename, 'a')
fh.write(str)
fh.close()
def bytes_to_human(self, size):
return bytes_to_human(size)
# for backwards compatibility
pretty_bytes = bytes_to_human
def human_to_bytes(self, number, isbits=False):
return human_to_bytes(number, isbits)
#
# Backwards compat
#
# In 2.0, moved from inside the module to the toplevel
is_executable = is_executable
@staticmethod
def get_buffer_size(fd):
try:
# 1032 == F_GETPIPE_SZ (Linux fcntl constant for the pipe buffer size)
buffer_size = fcntl.fcntl(fd, 1032)
except Exception:
try:
# not as exact as above, but should be good enough for most platforms that fail the previous call
buffer_size = select.PIPE_BUF
except Exception:
buffer_size = 9000 # use sane default JIC
return buffer_size
def get_module_path():
return os.path.dirname(os.path.realpath(__file__))
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,656 |
Honor explicitly marking a parameter as not sensitive.
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
The way that the `no_log` parameter is currently implemented in `AnsibleModule._log_invocation` is overly simplistic, and appears to violate the principle of least astonishment: its overly brief documentation fails to mention the corner cases for which it appears to have been written. As `no_log` is currently implemented, it will:
* Not log a parameter value if `no_log` is truthy.
* Not log a parameter value and complain if the parameter name matches `PASSWORD_MATCH` and is neither a `bool` nor an enum.
* Otherwise, log the parameter.
On the surface (as a module writer), the documentation (see: [Protecting sensitive data with `no_log`](https://docs.ansible.com/ansible/devel/reference_appendices/logging.html?highlight=no_log#protecting-sensitive-data-with-no-log)) states that by passing `no_log=True` you are explicitly marking a parameter as sensitive. This makes sense. It would then follow that by passing `no_log=False` you are explicitly marking a parameter as not sensitive. This is where the current implementation fails: as far as I can tell from the documentation and HEAD, there is no way for a module author who has properly developed and tested a module to avoid a warning about the `no_log` parameter when the parameter name matches `PASSWORD_MATCH`.
I certainly understand the need to protect users from the whims of careless third-party module authors (some of whom may have a tenuous grasp of Python at best), but for those of us who write and maintain our own modules professionally it seems like a rather strange choice to provide no avenue for a knowledgeable module author to say, "no, pass_interval really is not a password, and no, there is no other name that would work without causing excessive end-user confusion".
The feature request is basically: can `AnsibleModule._log_invocation` be adjusted to actually honor `no_log=False` as the opposite of `no_log=True`, which it currently does not do? I do not believe it would be difficult to implement while maintaining the current behavior, in which a module author that does not specify either way is still caught by `PASSWORD_MATCH`.
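As a minimal sketch of the behavior being requested (using the hypothetical `pass_interval` option mentioned above; this does not work today, since `no_log=False` currently falls through to the name heuristic):

```python
from ansible.module_utils.basic import AnsibleModule

module = AnsibleModule(
    argument_spec=dict(
        # Explicitly marked not sensitive; under this proposal the
        # PASSWORD_MATCH heuristic would be skipped and no warning emitted.
        pass_interval=dict(type='int', no_log=False),
        # Explicitly marked sensitive; the value is never logged.
        api_password=dict(type='str', no_log=True),
    ),
)
```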
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
lib/ansible/module_utils/basic.py
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
This change would not be user facing (typically) unless you're a module author. It is needed because, at the current time, there is no way to mark a parameter as explicitly not sensitive (as `no_log=False` is not being honored effectively) and avoid erroneous messages in the logs. Some authors have been getting around this by choosing more obscure names that diverge from current documentation for their parameters, but you're basically hiding from the problem at that point, and confusing end-users.
<!--- Paste example playbooks or commands between quotes below -->
<!--- HINT: You can also paste gist.github.com links for larger files -->
Duplicate of a proposed solution to https://github.com/ansible/ansible/issues/49465
|
https://github.com/ansible/ansible/issues/64656
|
https://github.com/ansible/ansible/pull/64733
|
93bfa4b07ad30cce58a30ffda5db866895aad075
|
3ca4580cb4e2a24597c6c5108bf76bbcd48069f8
| 2019-11-10T15:28:11Z |
python
| 2020-01-09T21:47:57Z |
test/units/module_utils/basic/test_argument_spec.py
|
# -*- coding: utf-8 -*-
# (c) 2012-2014, Michael DeHaan <[email protected]>
# (c) 2016 Toshio Kuratomi <[email protected]>
# Copyright: Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import json
import os
import pytest
from units.compat.mock import MagicMock
from ansible.module_utils import basic
from ansible.module_utils.six import integer_types
from ansible.module_utils.six.moves import builtins
MOCK_VALIDATOR_FAIL = MagicMock(side_effect=TypeError("bad conversion"))
# Data is argspec, argument, expected
VALID_SPECS = (
# Simple type=int
({'arg': {'type': 'int'}}, {'arg': 42}, 42),
# Simple type=int with a large value (will be of type long under Python 2)
({'arg': {'type': 'int'}}, {'arg': 18765432109876543210}, 18765432109876543210),
# Simple type=list, elements=int
({'arg': {'type': 'list', 'elements': 'int'}}, {'arg': [42, 32]}, [42, 32]),
# Type=int with conversion from string
({'arg': {'type': 'int'}}, {'arg': '42'}, 42),
# Type=list elements=int with conversion from string
({'arg': {'type': 'list', 'elements': 'int'}}, {'arg': ['42', '32']}, [42, 32]),
# Simple type=float
({'arg': {'type': 'float'}}, {'arg': 42.0}, 42.0),
# Simple type=list, elements=float
({'arg': {'type': 'list', 'elements': 'float'}}, {'arg': [42.1, 32.2]}, [42.1, 32.2]),
# Type=float conversion from int
({'arg': {'type': 'float'}}, {'arg': 42}, 42.0),
# Type=list, elements=float conversion from int
({'arg': {'type': 'list', 'elements': 'float'}}, {'arg': [42, 32]}, [42.0, 32.0]),
# Type=float conversion from string
({'arg': {'type': 'float'}}, {'arg': '42.0'}, 42.0),
# Type=list, elements=float conversion from string
({'arg': {'type': 'list', 'elements': 'float'}}, {'arg': ['42.1', '32.2']}, [42.1, 32.2]),
# Type=float conversion from string without decimal point
({'arg': {'type': 'float'}}, {'arg': '42'}, 42.0),
# Type=list elements=float conversion from string without decimal point
({'arg': {'type': 'list', 'elements': 'float'}}, {'arg': ['42', '32.2']}, [42.0, 32.2]),
# Simple type=bool
({'arg': {'type': 'bool'}}, {'arg': True}, True),
# Simple type=list elements=bool
({'arg': {'type': 'list', 'elements': 'bool'}}, {'arg': [True, 'true', 1, 'yes', False, 'false', 'no', 0]},
[True, True, True, True, False, False, False, False]),
# Type=bool with conversion from string
({'arg': {'type': 'bool'}}, {'arg': 'yes'}, True),
# Type=str converts to string
({'arg': {'type': 'str'}}, {'arg': 42}, '42'),
# Type=list elements=str simple converts to string
({'arg': {'type': 'list', 'elements': 'str'}}, {'arg': ['42', '32']}, ['42', '32']),
# Type is implicit, converts to string
({'arg': {}}, {'arg': 42}, '42'),
# Type=list elements=str implicit converts to string
({'arg': {'type': 'list', 'elements': 'str'}}, {'arg': [42, 32]}, ['42', '32']),
# parameter is required
({'arg': {'required': True}}, {'arg': 42}, '42'),
)
INVALID_SPECS = (
# Type is int; unable to convert this string
({'arg': {'type': 'int'}}, {'arg': "wolf"}, "is of type {0} and we were unable to convert to int: {0} cannot be converted to an int".format(type('bad'))),
# Type is list elements is int; unable to convert this string
({'arg': {'type': 'list', 'elements': 'int'}}, {'arg': [1, "bad"]}, "is of type {0} and we were unable to convert to int: {0} cannot be converted to "
"an int".format(type('int'))),
# Type is int; unable to convert float
({'arg': {'type': 'int'}}, {'arg': 42.1}, "'float'> cannot be converted to an int"),
# Type is list, elements is int; unable to convert float
({'arg': {'type': 'list', 'elements': 'int'}}, {'arg': [42.1, 32, 2]}, "'float'> cannot be converted to an int"),
# type is a callable that fails to convert
({'arg': {'type': MOCK_VALIDATOR_FAIL}}, {'arg': "bad"}, "bad conversion"),
# type is a list, elements is callable that fails to convert
({'arg': {'type': 'list', 'elements': MOCK_VALIDATOR_FAIL}}, {'arg': [1, "bad"]}, "bad conversion"),
# unknown parameter
({'arg': {'type': 'int'}}, {'other': 'bad', '_ansible_module_name': 'ansible_unittest'},
'Unsupported parameters for (ansible_unittest) module: other Supported parameters include: arg'),
# parameter is required
({'arg': {'required': True}}, {}, 'missing required arguments: arg'),
)
@pytest.fixture
def complex_argspec():
arg_spec = dict(
foo=dict(required=True, aliases=['dup']),
bar=dict(),
bam=dict(),
bing=dict(),
bang=dict(),
bong=dict(),
baz=dict(fallback=(basic.env_fallback, ['BAZ'])),
bar1=dict(type='bool'),
bar3=dict(type='list', elements='path'),
zardoz=dict(choices=['one', 'two']),
zardoz2=dict(type='list', choices=['one', 'two', 'three']),
zardoz3=dict(type='str', aliases=['zodraz'], deprecated_aliases=[dict(name='zodraz', version='9.99')]),
)
mut_ex = (('bar', 'bam'), ('bing', 'bang', 'bong'))
req_to = (('bam', 'baz'),)
kwargs = dict(
argument_spec=arg_spec,
mutually_exclusive=mut_ex,
required_together=req_to,
no_log=True,
add_file_common_args=True,
supports_check_mode=True,
)
return kwargs
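# Illustrative note (not part of the original fixture): mut_ex forbids setting
# bar together with bam, and any pair among bing/bang/bong; req_to requires
# that bam and baz always appear together.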
@pytest.fixture
def options_argspec_list():
options_spec = dict(
foo=dict(required=True, aliases=['dup']),
bar=dict(),
bar1=dict(type='list', elements='str'),
bar2=dict(type='list', elements='int'),
bar3=dict(type='list', elements='float'),
bar4=dict(type='list', elements='path'),
bam=dict(),
baz=dict(fallback=(basic.env_fallback, ['BAZ'])),
bam1=dict(),
bam2=dict(default='test'),
bam3=dict(type='bool'),
bam4=dict(type='str'),
)
arg_spec = dict(
foobar=dict(
type='list',
elements='dict',
options=options_spec,
mutually_exclusive=[
['bam', 'bam1'],
],
required_if=[
['foo', 'hello', ['bam']],
['foo', 'bam2', ['bam2']]
],
required_one_of=[
['bar', 'bam']
],
required_together=[
['bam1', 'baz']
],
required_by={
'bam4': ('bam1', 'bam3'),
},
)
)
kwargs = dict(
argument_spec=arg_spec,
no_log=True,
add_file_common_args=True,
supports_check_mode=True
)
return kwargs
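# Illustrative note: the required_if rules above mean that, within each foobar
# item, foo == 'hello' requires bam, while foo == 'bam2' requires bam2 (which
# is satisfied by its default of 'test').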
@pytest.fixture
def options_argspec_dict(options_argspec_list):
# should test OK for options in dict format.
kwargs = options_argspec_list
kwargs['argument_spec']['foobar']['type'] = 'dict'
kwargs['argument_spec']['foobar']['elements'] = None
return kwargs
#
# Tests for one aspect of arg_spec
#
@pytest.mark.parametrize('argspec, expected, stdin', [(s[0], s[2], s[1]) for s in VALID_SPECS],
indirect=['stdin'])
def test_validator_basic_types(argspec, expected, stdin):
am = basic.AnsibleModule(argspec)
if 'type' in argspec['arg']:
if argspec['arg']['type'] == 'int':
type_ = integer_types
else:
type_ = getattr(builtins, argspec['arg']['type'])
else:
type_ = str
assert isinstance(am.params['arg'], type_)
assert am.params['arg'] == expected
@pytest.mark.parametrize('stdin', [{'arg': 42}, {'arg': 18765432109876543210}], indirect=['stdin'])
def test_validator_function(mocker, stdin):
# Type is a callable
MOCK_VALIDATOR_SUCCESS = mocker.MagicMock(return_value=27)
argspec = {'arg': {'type': MOCK_VALIDATOR_SUCCESS}}
am = basic.AnsibleModule(argspec)
assert isinstance(am.params['arg'], integer_types)
assert am.params['arg'] == 27
@pytest.mark.parametrize('argspec, expected, stdin', [(s[0], s[2], s[1]) for s in INVALID_SPECS],
indirect=['stdin'])
def test_validator_fail(stdin, capfd, argspec, expected):
with pytest.raises(SystemExit):
basic.AnsibleModule(argument_spec=argspec)
out, err = capfd.readouterr()
assert not err
assert expected in json.loads(out)['msg']
assert json.loads(out)['failed']
class TestComplexArgSpecs:
"""Test with a more complex arg_spec"""
@pytest.mark.parametrize('stdin', [{'foo': 'hello'}, {'dup': 'hello'}], indirect=['stdin'])
def test_complex_required(self, stdin, complex_argspec):
"""Test that the complex argspec works if we give it its required param as either the canonical or aliased name"""
am = basic.AnsibleModule(**complex_argspec)
assert isinstance(am.params['foo'], str)
assert am.params['foo'] == 'hello'
@pytest.mark.parametrize('stdin', [{'foo': 'hello1', 'dup': 'hello2'}], indirect=['stdin'])
def test_complex_duplicate_warning(self, stdin, complex_argspec):
"""Test that the complex argspec issues a warning if we specify an option both with its canonical name and its alias"""
am = basic.AnsibleModule(**complex_argspec)
assert isinstance(am.params['foo'], str)
assert 'Both option foo and its alias dup are set.' in am._warnings
assert am.params['foo'] == 'hello2'
@pytest.mark.parametrize('stdin', [{'foo': 'hello', 'bam': 'test'}], indirect=['stdin'])
def test_complex_type_fallback(self, mocker, stdin, complex_argspec):
"""Test that the complex argspec works if we get a required parameter via fallback"""
environ = os.environ.copy()
environ['BAZ'] = 'test data'
mocker.patch('ansible.module_utils.basic.os.environ', environ)
am = basic.AnsibleModule(**complex_argspec)
assert isinstance(am.params['baz'], str)
assert am.params['baz'] == 'test data'
@pytest.mark.parametrize('stdin', [{'foo': 'hello', 'bar': 'bad', 'bam': 'bad2', 'bing': 'a', 'bang': 'b', 'bong': 'c'}], indirect=['stdin'])
def test_fail_mutually_exclusive(self, capfd, stdin, complex_argspec):
"""Fail because of mutually exclusive parameters"""
with pytest.raises(SystemExit):
am = basic.AnsibleModule(**complex_argspec)
out, err = capfd.readouterr()
results = json.loads(out)
assert results['failed']
assert results['msg'] == "parameters are mutually exclusive: bar|bam, bing|bang|bong"
@pytest.mark.parametrize('stdin', [{'foo': 'hello', 'bam': 'bad2'}], indirect=['stdin'])
def test_fail_required_together(self, capfd, stdin, complex_argspec):
"""Fail because only one of a required_together pair of parameters was specified"""
with pytest.raises(SystemExit):
am = basic.AnsibleModule(**complex_argspec)
out, err = capfd.readouterr()
results = json.loads(out)
assert results['failed']
assert results['msg'] == "parameters are required together: bam, baz"
@pytest.mark.parametrize('stdin', [{'foo': 'hello', 'bar': 'hi'}], indirect=['stdin'])
def test_fail_required_together_and_default(self, capfd, stdin, complex_argspec):
"""Fail because one of a required_together pair of parameters has a default and the other was not specified"""
complex_argspec['argument_spec']['baz'] = {'default': 42}
with pytest.raises(SystemExit):
am = basic.AnsibleModule(**complex_argspec)
out, err = capfd.readouterr()
results = json.loads(out)
assert results['failed']
assert results['msg'] == "parameters are required together: bam, baz"
@pytest.mark.parametrize('stdin', [{'foo': 'hello'}], indirect=['stdin'])
def test_fail_required_together_and_fallback(self, capfd, mocker, stdin, complex_argspec):
"""Fail because one of a required_together pair of parameters has a fallback and the other was not specified"""
environ = os.environ.copy()
environ['BAZ'] = 'test data'
mocker.patch('ansible.module_utils.basic.os.environ', environ)
with pytest.raises(SystemExit):
am = basic.AnsibleModule(**complex_argspec)
out, err = capfd.readouterr()
results = json.loads(out)
assert results['failed']
assert results['msg'] == "parameters are required together: bam, baz"
@pytest.mark.parametrize('stdin', [{'foo': 'hello', 'zardoz2': ['one', 'four', 'five']}], indirect=['stdin'])
def test_fail_list_with_choices(self, capfd, mocker, stdin, complex_argspec):
"""Fail because one of the items is not in the choice"""
with pytest.raises(SystemExit):
basic.AnsibleModule(**complex_argspec)
out, err = capfd.readouterr()
results = json.loads(out)
assert results['failed']
assert results['msg'] == "value of zardoz2 must be one or more of: one, two, three. Got no match for: four, five"
@pytest.mark.parametrize('stdin', [{'foo': 'hello', 'zardoz2': ['one', 'three']}], indirect=['stdin'])
def test_list_with_choices(self, capfd, mocker, stdin, complex_argspec):
"""Test choices with list"""
am = basic.AnsibleModule(**complex_argspec)
assert isinstance(am.params['zardoz2'], list)
assert am.params['zardoz2'] == ['one', 'three']
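    # Sketch of the option declaration being validated (assumed to match the
    # fixture): a list whose individual elements are checked against choices:
    #     zardoz2=dict(type='list', elements='str', choices=['one', 'two', 'three'])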
@pytest.mark.parametrize('stdin', [{'foo': 'hello', 'bar3': ['~/test', 'test/']}], indirect=['stdin'])
def test_list_with_elements_path(self, capfd, mocker, stdin, complex_argspec):
"""Test choices with list"""
am = basic.AnsibleModule(**complex_argspec)
assert isinstance(am.params['bar3'], list)
assert am.params['bar3'][0].startswith('/')
assert am.params['bar3'][1] == 'test/'
@pytest.mark.parametrize('stdin', [{'foo': 'hello', 'zodraz': 'one'}], indirect=['stdin'])
def test_deprecated_alias(self, capfd, mocker, stdin, complex_argspec):
"""Test a deprecated alias"""
am = basic.AnsibleModule(**complex_argspec)
assert "Alias 'zodraz' is deprecated." in am._deprecations[0]['msg']
assert am._deprecations[0]['version'] == '9.99'
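    # Sketch of how such an alias is declared (the canonical option name in the
    # fixture is an assumption); deprecated_aliases is the argument_spec key
    # that triggers the warning asserted above:
    #     zardoz3=dict(aliases=['zodraz'],
    #                  deprecated_aliases=[dict(name='zodraz', version='9.99')])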
class TestComplexOptions:
"""Test arg spec options"""
# (Parameters, expected value of module.params['foobar'])
OPTIONS_PARAMS_LIST = (
({'foobar': [{"foo": "hello", "bam": "good"}, {"foo": "test", "bar": "good"}]},
[{'foo': 'hello', 'bam': 'good', 'bam2': 'test', 'bar': None, 'baz': None, 'bam1': None, 'bam3': None, 'bam4': None,
'bar1': None, 'bar2': None, 'bar3': None, 'bar4': None},
{'foo': 'test', 'bam': None, 'bam2': 'test', 'bar': 'good', 'baz': None, 'bam1': None, 'bam3': None, 'bam4': None,
'bar1': None, 'bar2': None, 'bar3': None, 'bar4': None}]
),
# Alias for required param
({'foobar': [{"dup": "test", "bar": "good"}]},
[{'foo': 'test', 'dup': 'test', 'bam': None, 'bam2': 'test', 'bar': 'good', 'baz': None, 'bam1': None, 'bam3': None, 'bam4': None,
'bar1': None, 'bar2': None, 'bar3': None, 'bar4': None}]
),
# Required_if utilizing default value of the requirement
({'foobar': [{"foo": "bam2", "bar": "required_one_of"}]},
[{'bam': None, 'bam1': None, 'bam2': 'test', 'bam3': None, 'bam4': None, 'bar': 'required_one_of', 'baz': None, 'foo': 'bam2',
'bar1': None, 'bar2': None, 'bar3': None, 'bar4': None}]
),
# Check that a bool option is converted
({"foobar": [{"foo": "required", "bam": "good", "bam3": "yes"}]},
[{'bam': 'good', 'bam1': None, 'bam2': 'test', 'bam3': True, 'bam4': None, 'bar': None, 'baz': None, 'foo': 'required',
'bar1': None, 'bar2': None, 'bar3': None, 'bar4': None}]
),
# Check required_by options
({"foobar": [{"foo": "required", "bar": "good", "baz": "good", "bam4": "required_by", "bam1": "ok", "bam3": "yes"}]},
[{'bar': 'good', 'baz': 'good', 'bam1': 'ok', 'bam2': 'test', 'bam3': True, 'bam4': 'required_by', 'bam': None, 'foo': 'required',
'bar1': None, 'bar2': None, 'bar3': None, 'bar4': None}]
),
# Check for elements in sub-options
({"foobar": [{"foo": "good", "bam": "required_one_of", "bar1": [1, "good", "yes"], "bar2": ['1', 1], "bar3":['1.3', 1.3, 1]}]},
[{'foo': 'good', 'bam1': None, 'bam2': 'test', 'bam3': None, 'bam4': None, 'bar': None, 'baz': None, 'bam': 'required_one_of',
'bar1': ["1", "good", "yes"], 'bar2': [1, 1], 'bar3': [1.3, 1.3, 1.0], 'bar4': None}]
),
)
# (Parameters, expected value of module.params['foobar'])
OPTIONS_PARAMS_DICT = (
({'foobar': {"foo": "hello", "bam": "good"}},
{'foo': 'hello', 'bam': 'good', 'bam2': 'test', 'bar': None, 'baz': None, 'bam1': None, 'bam3': None, 'bam4': None,
'bar1': None, 'bar2': None, 'bar3': None, 'bar4': None}
),
# Alias for required param
({'foobar': {"dup": "test", "bar": "good"}},
{'foo': 'test', 'dup': 'test', 'bam': None, 'bam2': 'test', 'bar': 'good', 'baz': None, 'bam1': None, 'bam3': None, 'bam4': None,
'bar1': None, 'bar2': None, 'bar3': None, 'bar4': None}
),
# Required_if utilizing default value of the requirement
({'foobar': {"foo": "bam2", "bar": "required_one_of"}},
{'bam': None, 'bam1': None, 'bam2': 'test', 'bam3': None, 'bam4': None, 'bar': 'required_one_of', 'baz': None, 'foo': 'bam2',
'bar1': None, 'bar2': None, 'bar3': None, 'bar4': None}
),
# Check that a bool option is converted
({"foobar": {"foo": "required", "bam": "good", "bam3": "yes"}},
{'bam': 'good', 'bam1': None, 'bam2': 'test', 'bam3': True, 'bam4': None, 'bar': None, 'baz': None, 'foo': 'required',
'bar1': None, 'bar2': None, 'bar3': None, 'bar4': None}
),
# Check required_by options
({"foobar": {"foo": "required", "bar": "good", "baz": "good", "bam4": "required_by", "bam1": "ok", "bam3": "yes"}},
{'bar': 'good', 'baz': 'good', 'bam1': 'ok', 'bam2': 'test', 'bam3': True, 'bam4': 'required_by', 'bam': None,
'foo': 'required', 'bar1': None, 'bar3': None, 'bar2': None, 'bar4': None}
),
# Check for elements in sub-options
({"foobar": {"foo": "good", "bam": "required_one_of", "bar1": [1, "good", "yes"],
"bar2": ['1', 1], "bar3": ['1.3', 1.3, 1]}},
{'foo': 'good', 'bam1': None, 'bam2': 'test', 'bam3': None, 'bam4': None, 'bar': None,
'baz': None, 'bam': 'required_one_of',
'bar1': ["1", "good", "yes"], 'bar2': [1, 1], 'bar3': [1.3, 1.3, 1.0], 'bar4': None}
),
)
# (Parameters, failure message)
FAILING_PARAMS_LIST = (
# Missing required option
({'foobar': [{}]}, 'missing required arguments: foo found in foobar'),
# Invalid option
({'foobar': [{"foo": "hello", "bam": "good", "invalid": "bad"}]}, 'module: invalid found in foobar. Supported parameters include'),
# Mutually exclusive options found
({'foobar': [{"foo": "test", "bam": "bad", "bam1": "bad", "baz": "req_to"}]},
'parameters are mutually exclusive: bam|bam1 found in foobar'),
# required_if fails
({'foobar': [{"foo": "hello", "bar": "bad"}]},
'foo is hello but all of the following are missing: bam found in foobar'),
# Missing required_one_of option
({'foobar': [{"foo": "test"}]},
'one of the following is required: bar, bam found in foobar'),
# Missing required_together option
({'foobar': [{"foo": "test", "bar": "required_one_of", "bam1": "bad"}]},
'parameters are required together: bam1, baz found in foobar'),
# Missing required_by options
({'foobar': [{"foo": "test", "bar": "required_one_of", "bam4": "required_by"}]},
"missing parameter(s) required by 'bam4': bam1, bam3"),
)
# (Parameters, failure message)
FAILING_PARAMS_DICT = (
# Missing required option
({'foobar': {}}, 'missing required arguments: foo found in foobar'),
# Invalid option
({'foobar': {"foo": "hello", "bam": "good", "invalid": "bad"}},
'module: invalid found in foobar. Supported parameters include'),
# Mutually exclusive options found
({'foobar': {"foo": "test", "bam": "bad", "bam1": "bad", "baz": "req_to"}},
'parameters are mutually exclusive: bam|bam1 found in foobar'),
# required_if fails
({'foobar': {"foo": "hello", "bar": "bad"}},
'foo is hello but all of the following are missing: bam found in foobar'),
# Missing required_one_of option
({'foobar': {"foo": "test"}},
'one of the following is required: bar, bam found in foobar'),
# Missing required_together option
({'foobar': {"foo": "test", "bar": "required_one_of", "bam1": "bad"}},
'parameters are required together: bam1, baz found in foobar'),
# Missing required_by options
({'foobar': {"foo": "test", "bar": "required_one_of", "bam4": "required_by"}},
"missing parameter(s) required by 'bam4': bam1, bam3"),
)
@pytest.mark.parametrize('stdin, expected', OPTIONS_PARAMS_DICT, indirect=['stdin'])
def test_options_type_dict(self, stdin, options_argspec_dict, expected):
"""Test that a basic creation with required and required_if works"""
# should test ok, tests basic foo requirement and required_if
am = basic.AnsibleModule(**options_argspec_dict)
assert isinstance(am.params['foobar'], dict)
assert am.params['foobar'] == expected
@pytest.mark.parametrize('stdin, expected', OPTIONS_PARAMS_LIST, indirect=['stdin'])
def test_options_type_list(self, stdin, options_argspec_list, expected):
"""Test that a basic creation with required and required_if works"""
# should test ok, tests basic foo requirement and required_if
am = basic.AnsibleModule(**options_argspec_list)
assert isinstance(am.params['foobar'], list)
assert am.params['foobar'] == expected
@pytest.mark.parametrize('stdin, expected', FAILING_PARAMS_DICT, indirect=['stdin'])
def test_fail_validate_options_dict(self, capfd, stdin, options_argspec_dict, expected):
"""Fail because one of a required_together pair of parameters has a default and the other was not specified"""
with pytest.raises(SystemExit):
am = basic.AnsibleModule(**options_argspec_dict)
out, err = capfd.readouterr()
results = json.loads(out)
assert results['failed']
assert expected in results['msg']
@pytest.mark.parametrize('stdin, expected', FAILING_PARAMS_LIST, indirect=['stdin'])
def test_fail_validate_options_list(self, capfd, stdin, options_argspec_list, expected):
"""Fail because one of a required_together pair of parameters has a default and the other was not specified"""
with pytest.raises(SystemExit):
am = basic.AnsibleModule(**options_argspec_list)
out, err = capfd.readouterr()
results = json.loads(out)
assert results['failed']
assert expected in results['msg']
@pytest.mark.parametrize('stdin', [{'foobar': {'foo': 'required', 'bam1': 'test', 'bar': 'case'}}], indirect=['stdin'])
def test_fallback_in_option(self, mocker, stdin, options_argspec_dict):
"""Test that the complex argspec works if we get a required parameter via fallback"""
environ = os.environ.copy()
environ['BAZ'] = 'test data'
mocker.patch('ansible.module_utils.basic.os.environ', environ)
am = basic.AnsibleModule(**options_argspec_dict)
assert isinstance(am.params['foobar']['baz'], str)
assert am.params['foobar']['baz'] == 'test data'
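    # Sketch of the fallback declaration this exercises (assumed to match the
    # fixture), using the env_fallback helper from module_utils.basic:
    #     baz=dict(fallback=(basic.env_fallback, ['BAZ']))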
@pytest.mark.parametrize('stdin',
[{'foobar': {'foo': 'required', 'bam1': 'test', 'baz': 'data', 'bar': 'case', 'bar4': '~/test'}}],
indirect=['stdin'])
def test_elements_path_in_option(self, mocker, stdin, options_argspec_dict):
"""Test that the complex argspec works with elements path type"""
am = basic.AnsibleModule(**options_argspec_dict)
assert isinstance(am.params['foobar']['bar4'][0], str)
assert am.params['foobar']['bar4'][0].startswith('/')
@pytest.mark.parametrize('stdin,spec,expected', [
({},
{'one': {'type': 'dict', 'apply_defaults': True, 'options': {'two': {'default': True, 'type': 'bool'}}}},
{'two': True}),
({},
{'one': {'type': 'dict', 'options': {'two': {'default': True, 'type': 'bool'}}}},
None),
], indirect=['stdin'])
def test_subspec_not_required_defaults(self, stdin, spec, expected):
# Check that top level not required, processed subspec defaults
am = basic.AnsibleModule(spec)
assert am.params['one'] == expected
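        # In other words: with apply_defaults=True the unsupplied dict option is
        # materialized so its suboption defaults apply; without it, a non-required
        # dict option that was never supplied simply stays None.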
class TestLoadFileCommonArguments:
@pytest.mark.parametrize('stdin', [{}], indirect=['stdin'])
def test_smoketest_load_file_common_args(self, am):
"""With no file arguments, an empty dict is returned"""
am.selinux_mls_enabled = MagicMock()
am.selinux_mls_enabled.return_value = True
am.selinux_default_context = MagicMock()
am.selinux_default_context.return_value = 'unconfined_u:object_r:default_t:s0'.split(':', 3)
assert am.load_file_common_arguments(params={}) == {}
@pytest.mark.parametrize('stdin', [{}], indirect=['stdin'])
def test_load_file_common_args(self, am, mocker):
am.selinux_mls_enabled = MagicMock()
am.selinux_mls_enabled.return_value = True
am.selinux_default_context = MagicMock()
am.selinux_default_context.return_value = 'unconfined_u:object_r:default_t:s0'.split(':', 3)
base_params = dict(
path='/path/to/file',
mode=0o600,
owner='root',
group='root',
seuser='_default',
serole='_default',
setype='_default',
selevel='_default',
)
extended_params = base_params.copy()
extended_params.update(dict(
follow=True,
foo='bar',
))
final_params = base_params.copy()
final_params.update(dict(
path='/path/to/real_file',
secontext=['unconfined_u', 'object_r', 'default_t', 's0'],
attributes=None,
))
# with the proper params specified, the returned dictionary should represent
# only those params which have something to do with the file arguments, excluding
# other params and updated as required with proper values which may have been
# massaged by the method
mocker.patch('os.path.islink', return_value=True)
mocker.patch('os.path.realpath', return_value='/path/to/real_file')
res = am.load_file_common_arguments(params=extended_params)
assert res == final_params
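# Illustrative (hypothetical) module-side usage of the method tested above:
# modules typically filter their params through it, then apply the result:
#     file_args = module.load_file_common_arguments(module.params)
#     changed = module.set_fs_attributes_if_different(file_args, changed)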
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,598 |
Document Network Resource Module States
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
- We need a consolidated document that describes how the newly introduced `states` in the Network Resource Modules work.
- Since the behaviour of these states is consistent across all resource modules for all the supported platforms, we can have one single document and add a link to this document in all the individual RMs.
- States that need to be documented:
(a) merged
(b) replaced
(c) overridden
(d) deleted
(e) gathered
(f) rendered
(g) parsed
- We also need to document the consequences of running `{{ network_os }}_l3_interfaces` with state `overridden` on management interfaces of the devices.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
lib/modules/network/ios/*
lib/modules/network/iosxr/*
lib/modules/network/nxos/*
lib/modules/network/vyos/*
lib/modules/network/eos/*
lib/modules/network/junos/*
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
devel
```
|
https://github.com/ansible/ansible/issues/65598
|
https://github.com/ansible/ansible/pull/66226
|
ce66743b10af52634664d5a10bd83dad2573cd46
|
2ad6055efd732a7e31e539033352d2eb9d1f2cf8
| 2019-12-06T08:18:45Z |
python
| 2020-01-10T20:12:26Z |
docs/docsite/rst/network/user_guide/index.rst
|
.. _network_advanced:
**********************************
Network Automation Advanced Topics
**********************************
Once you have mastered the basics of network automation with Ansible, as presented in :ref:`network_getting_started`, use this guide to understand platform-specific details, optimization, and troubleshooting tips for Ansible for network automation.
**Who should use this guide?**
This guide is intended for network engineers using Ansible for automation. It covers advanced topics. If you understand networks and Ansible, this guide is for you. You may read through the entire guide if you choose, or use the links below to find the specific information you need.
If you're new to Ansible, or new to using Ansible for network automation, start with the :ref:`network_getting_started`.
.. toctree::
:maxdepth: 2
:caption: Advanced Topics
faq
network_best_practices_2.5
network_debug_troubleshooting
network_working_with_command_output
platform_index
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,598 |
Document Network Resource Module States
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
- We need a consolidated document that describes how the newly introduced `states` in the Network Resource Modules work.
- Since the behaviour of these states is consistent across all resource modules for all the supported platforms, we can have one single document and add a link to this document in all the individual RMs.
- States that need to be documented:
(a) merged
(b) replaced
(c) overridden
(d) deleted
(e) gathered
(f) rendered
(g) parsed
- We also need to document the consequences of running `{{ network_os }}_l3_interfaces` with state `overridden` on management interfaces of the devices.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
lib/modules/network/ios/*
lib/modules/network/iosxr/*
lib/modules/network/nxos/*
lib/modules/network/vyos/*
lib/modules/network/eos/*
lib/modules/network/junos/*
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
devel
```
|
https://github.com/ansible/ansible/issues/65598
|
https://github.com/ansible/ansible/pull/66226
|
ce66743b10af52634664d5a10bd83dad2573cd46
|
2ad6055efd732a7e31e539033352d2eb9d1f2cf8
| 2019-12-06T08:18:45Z |
python
| 2020-01-10T20:12:26Z |
docs/docsite/rst/network/user_guide/network_resource_modules.rst
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,490 |
Documentation: heading styles look too similar
|
##### SUMMARY
The different heading styles are hard to distinguish visually, especially h3 and h4.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
documentation
##### ANSIBLE VERSION
```paste below
2.9
```
|
https://github.com/ansible/ansible/issues/65490
|
https://github.com/ansible/ansible/pull/66253
|
2ad6055efd732a7e31e539033352d2eb9d1f2cf8
|
ffc1f33f2ade86597ad62696efc2c130d2aa24c0
| 2019-12-04T06:06:04Z |
python
| 2020-01-10T20:13:48Z |
docs/docsite/_static/ansible.css
|
/*! minified with http://css-minify.online-domain-tools.com/ - all comments
* must have ! to preserve during minifying with that tool *//*! Fix for read the docs theme:
* https://rackerlabs.github.io/docs-rackspace/tools/rtd-tables.html
*//*! override table width restrictions */@media screen and (min-width:767px){/*! If we ever publish to read the docs, we need to use !important for these
* two styles as read the docs itself loads their theme in a way that we
* can't otherwise override it.
*/.wy-table-responsive table td{white-space:normal}.wy-table-responsive{overflow:visible}}/*!
* We use the class documentation-table for attribute tables where the first
* column is the name of an attribute and the second column is the description.
*//*! These tables look like this:
*
* Attribute Name Description
* -------------- -----------
* **NAME** This is a multi-line description
* str/required that can span multiple lines
* added in x.y
* With multiple paragraphs
* -------------- -----------
*
* **NAME** is given the class .value-name
* str is given the class .value-type
* / is given the class .value-separator
* required is given the class .value-required
* added in x.y is given the class .value-added-in
*//*! The extra .rst-content is so this will override rtd theme */.rst-content table.documentation-table td{vertical-align:top}table.documentation-table td:first-child{white-space:nowrap;vertical-align:top}table.documentation-table td:first-child p:first-child{font-weight:700;display:inline}/*! This is now redundant with above position-based styling *//*!
table.documentation-table .value-name {
font-weight: bold;
display: inline;
}
*/table.documentation-table .value-type{font-size:x-small;color:purple;display:inline}table.documentation-table .value-separator{font-size:x-small;display:inline}table.documentation-table .value-required{font-size:x-small;color:red;display:inline} .value-added-in{font-size:x-small;font-style:italic;color:green;display:inline}/*! Ansible-specific CSS pulled out of rtd theme for 2.9 */.DocSiteProduct-header{flex:1;-webkit-flex:1;padding:10px 20px 20px;display:flex;display:-webkit-flex;flex-direction:column;-webkit-flex-direction:column;align-items:center;-webkit-align-items:center;justify-content:flex-start;-webkit-justify-content:flex-start;margin-left:20px;margin-right:20px;text-decoration:none;font-weight:400;font-family:'Open Sans',sans-serif}.DocSiteProduct-header:active,.DocSiteProduct-header:focus,.DocSiteProduct-header:visited{color:#fff}.DocSiteProduct-header--core{font-size:25px;background-color:#5bbdbf;border:2px solid #5bbdbf;border-top-left-radius:4px;border-top-right-radius:4px;color:#fff;padding-left:2px;margin-left:2px}.DocSiteProduct-headerAlign{width:100%}.DocSiteProduct-logo{width:60px;height:60px;margin-bottom:-9px}.DocSiteProduct-logoText{margin-top:6px;font-size:25px;text-align:left}.DocSiteProduct-CheckVersionPara{margin-left:2px;padding-bottom:4px;margin-right:2px;margin-bottom:10px}/*! Ansible color scheme */.wy-nav-top,.wy-side-nav-search{background-color:#5bbdbf}.wy-menu-vertical header,.wy-menu-vertical p.caption{color:#5bbdbf}.wy-menu-vertical a{padding:0}.wy-menu-vertical a.reference.internal{padding:.4045em 1.618em}/*! Override sphinx rtd theme max-with of 800px */.wy-nav-content{max-width:100%}/*! Override sphinx_rtd_theme - keeps left-nav from overwriting Documentation title */.wy-nav-side{top:45px}/*! Ansible - changed absolute to relative to remove extraneous side scroll bar */.wy-grid-for-nav{position:relative}/*! Ansible - remove so highlight indenting is correct */.rst-content .highlighted{padding:0}.DocSiteBanner{display:flex;display:-webkit-flex;justify-content:center;-webkit-justify-content:center;flex-wrap:wrap;-webkit-flex-wrap:wrap;margin-bottom:25px}.DocSiteBanner-imgWrapper{max-width:100%}td,th{min-width:100px}table{overflow-x:auto;display:block;max-width:100%}.documentation-table td.elbow-placeholder{border-left:1px solid #000;border-top:0;width:30px;min-width:30px}.documentation-table td,.documentation-table th{padding:4px;border-left:1px solid #000;border-top:1px solid #000}.documentation-table{border-right:1px solid #000;border-bottom:1px solid #000}@media print{*{background:0 0!important;color:#000!important;text-shadow:none!important;filter:none!important;-ms-filter:none!important}#nav,a,a:visited{text-decoration:underline}a[href]:after{content:" (" attr(href) ")"}abbr[title]:after{content:" (" attr(title) ")"}.ir a:after,a[href^="javascript:"]:after,a[href^="#"]:after{content:""}/*! Don't show links for images, or javascript/internal links */pre,blockquote{border:0 solid #999;page-break-inside:avoid}thead{display:table-header-group}/*! 
h5bp.com/t */tr,img{page-break-inside:avoid}img{max-width:100%!important}@page{margin:.5cm}h2,h3,p{orphans:3;widows:3}h2,h3{page-break-after:avoid}#google_image_div,.DocSiteBanner{display:none!important}}#sideBanner,.DocSite-globalNav{display:none}.DocSite-sideNav{display:block;margin-bottom:40px}.DocSite-nav{display:none}.ansibleNav{background:#000;padding:0 20px;width:auto;border-bottom:1px solid #444;font-size:14px;z-index:1}.ansibleNav ul{list-style:none;padding-left:0;margin-top:0}.ansibleNav ul li{padding:7px 0;border-bottom:1px solid #444}.ansibleNav ul li:last-child{border:none}.ansibleNav ul li a{color:#fff;text-decoration:none;text-transform:uppercase;padding:6px 0}.ansibleNav ul li a:hover{color:#5bbdbf;background:0 0}@media screen and (min-width:768px){.DocSite-globalNav{display:block;position:fixed}#sideBanner{display:block}.DocSite-sideNav{display:none}.DocSite-nav{flex:initial;-webkit-flex:initial;display:flex;display:-webkit-flex;flex-direction:row;-webkit-flex-direction:row;justify-content:flex-start;-webkit-justify-content:flex-start;padding:15px;background-color:#000;text-decoration:none;font-family:'Open Sans',sans-serif}.DocSiteNav-logo{width:28px;height:28px;margin-right:8px;margin-top:-6px;position:fixed;z-index:1}.DocSiteNav-title{color:#fff;font-size:20px;position:fixed;margin-left:40px;margin-top:-4px;z-index:1}.ansibleNav{height:45px;width:100%;font-size:13px;padding:0 60px 0 0}.ansibleNav ul{float:right;display:flex;flex-wrap:nowrap;margin-top:13px}.ansibleNav ul li{padding:0;border-bottom:none}.ansibleNav ul li a{color:#fff;text-decoration:none;text-transform:uppercase;padding:8px 13px}}@media screen and (min-width:768px){#sideBanner,.DocSite-globalNav{display:block}.DocSite-sideNav{display:none}.DocSite-nav{flex:initial;-webkit-flex:initial;display:flex;display:-webkit-flex;flex-direction:row;-webkit-flex-direction:row;justify-content:flex-start;-webkit-justify-content:flex-start;padding:15px;background-color:#000;text-decoration:none;font-family:'Open Sans',sans-serif}.DocSiteNav-logo{width:28px;height:28px;margin-right:8px;margin-top:-6px;position:fixed}.DocSiteNav-title{color:#fff;font-size:20px;position:fixed;margin-left:40px;margin-top:-4px}.ansibleNav{height:45px;font-size:13px;padding:0 60px 0 0}.ansibleNav ul{float:right;display:flex;flex-wrap:nowrap;margin-top:13px}.ansibleNav ul li{padding:0;border-bottom:none}.ansibleNav ul li a{color:#fff;text-decoration:none;text-transform:uppercase;padding:8px 13px}}
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,983 |
Fixes for Redfish resource_id option
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
PR #62921 added a new `resource_id` option for the redfish_command and redfish_config modules. In that PR, the old, limited behavior of selecting the first System, Manager or Chassis in a collection is deprecated in favor of using the new option to explicitly specify the target resource. The version specified in the call to AnsibleModule.deprecate() was '2.13'. But since #62921 did not get merged in time for the 2.9 release, it should have used version '2.14' ('2.10' + 4).
https://github.com/ansible/ansible/blob/32a8b620f399fdc9698bd31ea1e619b2eb72b666/lib/ansible/module_utils/redfish_utils.py#L246-L247
In addition, the PR should have included:
- a changelog fragment with minor_changes and deprecated_features sections
- an update to the porting guide with instructions on how to update playbooks to avoid the deprecated behavior
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
redfish_utils.py
redfish_config.py
redfish_command.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.0.dev0
config file = /Users/bdodd/.ansible.cfg
configured module search path = ['/Users/bdodd/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/bdodd/Development/git/ansible/lib/ansible
executable location = /Users/bdodd/Development/git/ansible/bin/ansible
python version = 3.6.5 (default, Apr 25 2018, 14:26:36) [GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.39.2)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Environments using Redfish services
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
---
- hosts: myhosts
connection: local
name: Manage System Power - Reboot
gather_facts: False
tasks:
- name: Reboot system power
redfish_command:
category: Systems
command: PowerReboot
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
```
Use the playbook above on a Redfish service that has multiple Systems.
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The deprecation message should specify version '2.14'.
The changelog fragment and porting guide addition should be present.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The deprecation message specifies version '2.13'.
Also, note that the changelog fragment and porting guide addition described above are missing.
|
https://github.com/ansible/ansible/issues/65983
|
https://github.com/ansible/ansible/pull/66060
|
1a0724fdd41f77412a40d66ffddf231e733082b5
|
5f966ef6641cd931cd2be187950371ebdb2ac0c8
| 2019-12-19T20:01:38Z |
python
| 2020-01-10T22:37:53Z |
changelogs/fragments/66060-redfish-new-resource-id-option.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,983 |
Fixes for Redfish resource_id option
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
PR #62921 added a new `resource_id` option for the redfish_command and redfish_config modules. In that PR, the old, limited behavior of selecting the first System, Manager or Chassis in a collection is deprecated in favor of using the new option to explicitly specify the target resource. The version specified in the call to AnsibleModule.deprecate() was '2.13'. But since #62921 did not get merged in time for the 2.9 release, it should have used version '2.14' ('2.10' + 4).
https://github.com/ansible/ansible/blob/32a8b620f399fdc9698bd31ea1e619b2eb72b666/lib/ansible/module_utils/redfish_utils.py#L246-L247
In addition, the PR should have included:
- a changelog fragment with minor_changes and deprecated_features sections
- an update to the porting guide with instructions on how to update playbooks to avoid the deprecated behavior
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
redfish_utils.py
redfish_config.py
redfish_command.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.0.dev0
config file = /Users/bdodd/.ansible.cfg
configured module search path = ['/Users/bdodd/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/bdodd/Development/git/ansible/lib/ansible
executable location = /Users/bdodd/Development/git/ansible/bin/ansible
python version = 3.6.5 (default, Apr 25 2018, 14:26:36) [GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.39.2)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Environments using Redfish services
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
---
- hosts: myhosts
connection: local
name: Manage System Power - Reboot
gather_facts: False
tasks:
- name: Reboot system power
redfish_command:
category: Systems
command: PowerReboot
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
```
Use the playbook above on a Redfish service that has multiple Systems.
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The deprecation message should specify version '2.14'.
The changelog fragment and porting guide addition should be present.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The deprecation message specifies version '2.13'.
Also, note that the changelog fragment and porting guide addition described above are missing.
|
https://github.com/ansible/ansible/issues/65983
|
https://github.com/ansible/ansible/pull/66060
|
1a0724fdd41f77412a40d66ffddf231e733082b5
|
5f966ef6641cd931cd2be187950371ebdb2ac0c8
| 2019-12-19T20:01:38Z |
python
| 2020-01-10T22:37:53Z |
docs/docsite/rst/porting_guides/porting_guide_2.10.rst
|
.. _porting_2.10_guide:
**************************
Ansible 2.10 Porting Guide
**************************
This section discusses the behavioral changes between Ansible 2.9 and Ansible 2.10.
It is intended to assist in updating your playbooks, plugins and other parts of your Ansible infrastructure so they will work with this version of Ansible.
We suggest you read this page along with `Ansible Changelog for 2.10 <https://github.com/ansible/ansible/blob/devel/changelogs/CHANGELOG-v2.10.rst>`_ to understand what updates you may need to make.
This document is part of a collection on porting. The complete list of porting guides can be found at :ref:`porting guides <porting_guides>`.
.. contents:: Topics
Playbook
========
No notable changes
Command Line
============
No notable changes
Deprecated
==========
No notable changes
Modules
=======
Modules removed
---------------
The following modules no longer exist:
* letsencrypt use :ref:`acme_certificate <acme_certificate_module>` instead.
Deprecation notices
-------------------
The following modules will be removed in Ansible 2.14. Please update your playbooks accordingly.
* ldap_attr use :ref:`ldap_attrs <ldap_attrs_module>` instead.
The following functionality will be removed in Ansible 2.14. Please update your playbooks accordingly.
* The :ref:`openssl_csr <openssl_csr_module>` module's option ``version`` no longer supports values other than ``1`` (the current only standardized CSR version).
* :ref:`docker_container <docker_container_module>`: the ``trust_image_content`` option will be removed. It has always been ignored by the module.
* :ref:`iam_managed_policy <iam_managed_policy_module>`: the ``fail_on_delete`` option will be removed. It has always been ignored by the module.
* :ref:`s3_lifecycle <s3_lifecycle_module>`: the ``requester_pays`` option will be removed. It has always been ignored by the module.
* :ref:`s3_sync <s3_sync_module>`: the ``retries`` option will be removed. It has always been ignored by the module.
* The return values ``err`` and ``out`` of :ref:`docker_stack <docker_stack_module>` have been deprecated. Use ``stdout`` and ``stderr`` from now on instead.
* :ref:`cloudformation <cloudformation_module>`: the ``template_format`` option will be removed. It has been ignored by the module since Ansible 2.3.
* :ref:`data_pipeline <data_pipeline_module>`: the ``version`` option will be removed. It has always been ignored by the module.
* :ref:`ec2_eip <ec2_eip_module>`: the ``wait_timeout`` option will be removed. It has had no effect since Ansible 2.3.
* :ref:`ec2_key <ec2_key_module>`: the ``wait`` option will be removed. It has had no effect since Ansible 2.5.
* :ref:`ec2_key <ec2_key_module>`: the ``wait_timeout`` option will be removed. It has had no effect since Ansible 2.5.
* :ref:`ec2_lc <ec2_lc_module>`: the ``associate_public_ip_address`` option will be removed. It has always been ignored by the module.
* :ref:`iam_policy <iam_policy_module>`: the ``policy_document`` option will be removed. To maintain the existing behavior use the ``policy_json`` option and read the file with the ``lookup`` plugin.
* :ref:`redfish_config <redfish_config_module>`: the ``bios_attribute_name`` and ``bios_attribute_value`` options will be removed. To maintain the existing behavior use the ``bios_attributes`` option instead.
* :ref:`clc_aa_policy <clc_aa_policy_module>`: the ``wait`` parameter will be removed. It has always been ignored by the module.
The following functionality will change in Ansible 2.14. Please update your playbooks accordingly.
* The :ref:`docker_container <docker_container_module>` module has a new option, ``container_default_behavior``, whose default value will change from ``compatibility`` to ``no_defaults``. Set to an explicit value to avoid deprecation warnings.
* The :ref:`docker_container <docker_container_module>` module's ``network_mode`` option will be set by default to the name of the first network in ``networks`` if at least one network is given and ``networks_cli_compatible`` is ``true`` (will be default from Ansible 2.12 on). Set to an explicit value to avoid deprecation warnings if you specify networks and set ``networks_cli_compatible`` to ``true``. The current default (not specifying it) is equivalent to the value ``default``.
* :ref:`iam_policy <iam_policy_module>`: the default value for the ``skip_duplicates`` option will change from ``true`` to ``false``. To maintain the existing behavior explicitly set it to ``true``.
* :ref:`iam_role <iam_role_module>`: the ``purge_policies`` option (also known as ``purge_policy``) default value will change from ``true`` to ``false``.
* :ref:`elb_network_lb <elb_network_lb_module>`: the default behaviour for the ``state`` option will change from ``absent`` to ``present``. To maintain the existing behavior explicitly set state to ``absent``.
The following modules will be removed in Ansible 2.14. Please update your playbooks accordingly.
* ``vmware_dns_config`` use :ref:`vmware_host_dns <vmware_host_dns_module>` instead.
Noteworthy module changes
-------------------------
* :ref:`vmware_datastore_maintenancemode <vmware_datastore_maintenancemode_module>` now returns ``datastore_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_kernel_manager <vmware_host_kernel_manager_module>` now returns ``host_kernel_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_ntp <vmware_host_ntp_module>` now returns ``host_ntp_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_service_manager <vmware_host_service_manager_module>` now returns ``host_service_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_tag <vmware_tag_module>` now returns ``tag_status`` instead of Ansible internal key ``results``.
* The deprecated ``recurse`` option in :ref:`pacman <pacman_module>` module has been removed, you should use ``extra_args=--recursive`` instead.
* :ref:`vmware_guest_custom_attributes <vmware_guest_custom_attributes_module>` module does not require VM name which was a required parameter for releases prior to Ansible 2.10.
* :ref:`zabbix_action <zabbix_action_module>` no longer requires ``esc_period`` and ``event_source`` arguments when ``state=absent``.
* :ref:`gitlab_user <gitlab_user_module>` no longer requires ``name``, ``email`` and ``password`` arguments when ``state=absent``.
* :ref:`win_pester <win_pester_module>` no longer runs all ``*.ps1`` files in the directory specified, due to it potentially executing unknown scripts. It will follow Pester's built-in default behaviour of only running tests for files named like ``*.tests.ps1``.
* :ref:`win_find <win_find_module>` has been refactored to better match the behaviour of the ``find`` module. Here is what has changed:
* When the directory specified by ``paths`` does not exist or is a file, it will no longer fail and will just warn the user
* Junction points are no longer reported as ``islnk``, use ``isjunction`` to properly report these files. This behaviour matches the :ref:`win_stat <win_stat_module>`
* Directories no longer return a ``size``, this matches the ``stat`` and ``find`` behaviour and has been removed due to the difficulties in correctly reporting the size of a directory
Plugins
=======
Noteworthy plugin changes
-------------------------
* The ``hashi_vault`` lookup plugin now returns the latest version when using the KV v2 secrets engine. Previously, it returned all versions of the secret which required additional steps to extract and filter the desired version.
Porting custom scripts
======================
No notable changes
Networking
==========
No notable changes
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,983 |
Fixes for Redfish resource_id option
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
PR #62921 added a new `resource_id` option for the redfish_command and redfish_config modules. In that PR, the old, limited behavior of selecting the first System, Manager or Chassis in a collection is deprecated in favor of using the new option to explicitly specify the target resource. The version specified in the call to AnsibleModule.deprecate() was '2.13'. But since #62921 did not get merged in time for the 2.9 release, it should have used version '2.14' ('2.10' + 4).
https://github.com/ansible/ansible/blob/32a8b620f399fdc9698bd31ea1e619b2eb72b666/lib/ansible/module_utils/redfish_utils.py#L246-L247
In addition, the PR should have included:
- a changelog fragment with minor_changes and deprecated_features sections
- an update to the porting guide with instructions on how to update playbooks to avoid the deprecated behavior
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
redfish_utils.py
redfish_config.py
redfish_command.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.0.dev0
config file = /Users/bdodd/.ansible.cfg
configured module search path = ['/Users/bdodd/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/bdodd/Development/git/ansible/lib/ansible
executable location = /Users/bdodd/Development/git/ansible/bin/ansible
python version = 3.6.5 (default, Apr 25 2018, 14:26:36) [GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.39.2)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Environments using Redfish services
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
---
- hosts: myhosts
connection: local
name: Manage System Power - Reboot
gather_facts: False
tasks:
- name: Reboot system power
redfish_command:
category: Systems
command: PowerReboot
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
```
Use the playbook above on a Redfish service that has multiple Systems.
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The deprecation message should specify version '2.14'.
The changelog fragment and porting guide addition should be present.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The deprecation message specifies version '2.13'.
Also, note that the changelog fragment and porting guide addition described above are missing.
|
https://github.com/ansible/ansible/issues/65983
|
https://github.com/ansible/ansible/pull/66060
|
1a0724fdd41f77412a40d66ffddf231e733082b5
|
5f966ef6641cd931cd2be187950371ebdb2ac0c8
| 2019-12-19T20:01:38Z |
python
| 2020-01-10T22:37:53Z |
lib/ansible/module_utils/redfish_utils.py
|
# Copyright (c) 2017-2018 Dell EMC Inc.
# GNU General Public License v3.0+ (see LICENSE or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import json
from ansible.module_utils.urls import open_url
from ansible.module_utils._text import to_text
from ansible.module_utils.six.moves import http_client
from ansible.module_utils.six.moves.urllib.error import URLError, HTTPError
GET_HEADERS = {'accept': 'application/json', 'OData-Version': '4.0'}
POST_HEADERS = {'content-type': 'application/json', 'accept': 'application/json',
'OData-Version': '4.0'}
PATCH_HEADERS = {'content-type': 'application/json', 'accept': 'application/json',
'OData-Version': '4.0'}
DELETE_HEADERS = {'accept': 'application/json', 'OData-Version': '4.0'}
DEPRECATE_MSG = 'Issuing a data modification command without specifying the '\
'ID of the target %(resource)s resource when there is more '\
'than one %(resource)s will use the first one in the '\
'collection. Use the `resource_id` option to specify the '\
'target %(resource)s ID'
class RedfishUtils(object):
def __init__(self, creds, root_uri, timeout, module, resource_id=None,
data_modification=False):
self.root_uri = root_uri
self.creds = creds
self.timeout = timeout
self.module = module
self.service_root = '/redfish/v1/'
self.resource_id = resource_id
self.data_modification = data_modification
self._init_session()
# The following functions are to send GET/POST/PATCH/DELETE requests
def get_request(self, uri):
try:
resp = open_url(uri, method="GET", headers=GET_HEADERS,
url_username=self.creds['user'],
url_password=self.creds['pswd'],
force_basic_auth=True, validate_certs=False,
follow_redirects='all',
use_proxy=False, timeout=self.timeout)
data = json.loads(resp.read())
headers = dict((k.lower(), v) for (k, v) in resp.info().items())
except HTTPError as e:
msg = self._get_extended_message(e)
return {'ret': False,
'msg': "HTTP Error %s on GET request to '%s', extended message: '%s'"
% (e.code, uri, msg),
'status': e.code}
except URLError as e:
return {'ret': False, 'msg': "URL Error on GET request to '%s': '%s'"
% (uri, e.reason)}
# Almost all errors should be caught above, but just in case
except Exception as e:
return {'ret': False,
'msg': "Failed GET request to '%s': '%s'" % (uri, to_text(e))}
return {'ret': True, 'data': data, 'headers': headers}
def post_request(self, uri, pyld):
try:
resp = open_url(uri, data=json.dumps(pyld),
headers=POST_HEADERS, method="POST",
url_username=self.creds['user'],
url_password=self.creds['pswd'],
force_basic_auth=True, validate_certs=False,
follow_redirects='all',
use_proxy=False, timeout=self.timeout)
except HTTPError as e:
msg = self._get_extended_message(e)
return {'ret': False,
'msg': "HTTP Error %s on POST request to '%s', extended message: '%s'"
% (e.code, uri, msg),
'status': e.code}
except URLError as e:
return {'ret': False, 'msg': "URL Error on POST request to '%s': '%s'"
% (uri, e.reason)}
# Almost all errors should be caught above, but just in case
except Exception as e:
return {'ret': False,
'msg': "Failed POST request to '%s': '%s'" % (uri, to_text(e))}
return {'ret': True, 'resp': resp}
def patch_request(self, uri, pyld):
headers = PATCH_HEADERS
r = self.get_request(uri)
if r['ret']:
# Get etag from etag header or @odata.etag property
etag = r['headers'].get('etag')
if not etag:
etag = r['data'].get('@odata.etag')
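            # Some Redfish services require an If-Match header on PATCH
            # (optimistic concurrency); echoing the resource's ETag avoids
            # 412 Precondition Failed responses from such services.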
if etag:
# Make copy of headers and add If-Match header
headers = dict(headers)
headers['If-Match'] = etag
try:
resp = open_url(uri, data=json.dumps(pyld),
headers=headers, method="PATCH",
url_username=self.creds['user'],
url_password=self.creds['pswd'],
force_basic_auth=True, validate_certs=False,
follow_redirects='all',
use_proxy=False, timeout=self.timeout)
except HTTPError as e:
msg = self._get_extended_message(e)
return {'ret': False,
'msg': "HTTP Error %s on PATCH request to '%s', extended message: '%s'"
% (e.code, uri, msg),
'status': e.code}
except URLError as e:
return {'ret': False, 'msg': "URL Error on PATCH request to '%s': '%s'"
% (uri, e.reason)}
# Almost all errors should be caught above, but just in case
except Exception as e:
return {'ret': False,
'msg': "Failed PATCH request to '%s': '%s'" % (uri, to_text(e))}
return {'ret': True, 'resp': resp}
def delete_request(self, uri, pyld=None):
try:
data = json.dumps(pyld) if pyld else None
resp = open_url(uri, data=data,
headers=DELETE_HEADERS, method="DELETE",
url_username=self.creds['user'],
url_password=self.creds['pswd'],
force_basic_auth=True, validate_certs=False,
follow_redirects='all',
use_proxy=False, timeout=self.timeout)
except HTTPError as e:
msg = self._get_extended_message(e)
return {'ret': False,
'msg': "HTTP Error %s on DELETE request to '%s', extended message: '%s'"
% (e.code, uri, msg),
'status': e.code}
except URLError as e:
return {'ret': False, 'msg': "URL Error on DELETE request to '%s': '%s'"
% (uri, e.reason)}
# Almost all errors should be caught above, but just in case
except Exception as e:
return {'ret': False,
'msg': "Failed DELETE request to '%s': '%s'" % (uri, to_text(e))}
return {'ret': True, 'resp': resp}
@staticmethod
def _get_extended_message(error):
"""
Get Redfish ExtendedInfo message from response payload if present
:param error: an HTTPError exception
:type error: HTTPError
:return: the ExtendedInfo message if present, else standard HTTP error
"""
msg = http_client.responses.get(error.code, '')
if error.code >= 400:
try:
body = error.read().decode('utf-8')
data = json.loads(body)
ext_info = data['error']['@Message.ExtendedInfo']
msg = ext_info[0]['Message']
except Exception:
pass
return msg
def _init_session(self):
pass
def _find_accountservice_resource(self):
response = self.get_request(self.root_uri + self.service_root)
if response['ret'] is False:
return response
data = response['data']
if 'AccountService' not in data:
return {'ret': False, 'msg': "AccountService resource not found"}
else:
account_service = data["AccountService"]["@odata.id"]
response = self.get_request(self.root_uri + account_service)
if response['ret'] is False:
return response
data = response['data']
accounts = data['Accounts']['@odata.id']
if accounts[-1:] == '/':
accounts = accounts[:-1]
self.accounts_uri = accounts
return {'ret': True}
def _find_sessionservice_resource(self):
response = self.get_request(self.root_uri + self.service_root)
if response['ret'] is False:
return response
data = response['data']
if 'SessionService' not in data:
return {'ret': False, 'msg': "SessionService resource not found"}
else:
session_service = data["SessionService"]["@odata.id"]
response = self.get_request(self.root_uri + session_service)
if response['ret'] is False:
return response
data = response['data']
sessions = data['Sessions']['@odata.id']
if sessions[-1:] == '/':
sessions = sessions[:-1]
self.sessions_uri = sessions
return {'ret': True}
def _get_resource_uri_by_id(self, uris, id_prop):
for uri in uris:
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
continue
data = response['data']
if id_prop == data.get('Id'):
return uri
return None
def _find_systems_resource(self):
response = self.get_request(self.root_uri + self.service_root)
if response['ret'] is False:
return response
data = response['data']
if 'Systems' not in data:
return {'ret': False, 'msg': "Systems resource not found"}
response = self.get_request(self.root_uri + data['Systems']['@odata.id'])
if response['ret'] is False:
return response
self.systems_uris = [
i['@odata.id'] for i in response['data'].get('Members', [])]
if not self.systems_uris:
return {
'ret': False,
'msg': "ComputerSystem's Members array is either empty or missing"}
self.systems_uri = self.systems_uris[0]
if self.data_modification:
if self.resource_id:
self.systems_uri = self._get_resource_uri_by_id(self.systems_uris,
self.resource_id)
if not self.systems_uri:
return {
'ret': False,
'msg': "System resource %s not found" % self.resource_id}
elif len(self.systems_uris) > 1:
self.module.deprecate(DEPRECATE_MSG % {'resource': 'System'},
version='2.13')
return {'ret': True}
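    # Illustrative (hypothetical) caller-side usage that sidesteps the
    # deprecated first-member fallback above by naming the target explicitly
    # ('System.Embedded.1' is a made-up resource Id):
    #     utils = RedfishUtils(creds, root_uri, timeout, module,
    #                          resource_id='System.Embedded.1',
    #                          data_modification=True)
    #     result = utils._find_systems_resource()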
def _find_updateservice_resource(self):
response = self.get_request(self.root_uri + self.service_root)
if response['ret'] is False:
return response
data = response['data']
if 'UpdateService' not in data:
return {'ret': False, 'msg': "UpdateService resource not found"}
else:
update = data["UpdateService"]["@odata.id"]
self.update_uri = update
response = self.get_request(self.root_uri + update)
if response['ret'] is False:
return response
data = response['data']
self.firmware_uri = self.software_uri = None
if 'FirmwareInventory' in data:
self.firmware_uri = data['FirmwareInventory'][u'@odata.id']
if 'SoftwareInventory' in data:
self.software_uri = data['SoftwareInventory'][u'@odata.id']
return {'ret': True}
def _find_chassis_resource(self):
response = self.get_request(self.root_uri + self.service_root)
if response['ret'] is False:
return response
data = response['data']
if 'Chassis' not in data:
return {'ret': False, 'msg': "Chassis resource not found"}
chassis = data["Chassis"]["@odata.id"]
response = self.get_request(self.root_uri + chassis)
if response['ret'] is False:
return response
self.chassis_uris = [
i['@odata.id'] for i in response['data'].get('Members', [])]
if not self.chassis_uris:
return {'ret': False,
'msg': "Chassis Members array is either empty or missing"}
self.chassis_uri = self.chassis_uris[0]
if self.data_modification:
if self.resource_id:
self.chassis_uri = self._get_resource_uri_by_id(self.chassis_uris,
self.resource_id)
if not self.chassis_uri:
return {
'ret': False,
'msg': "Chassis resource %s not found" % self.resource_id}
elif len(self.chassis_uris) > 1:
self.module.deprecate(DEPRECATE_MSG % {'resource': 'Chassis'},
version='2.13')
return {'ret': True}
def _find_managers_resource(self):
response = self.get_request(self.root_uri + self.service_root)
if response['ret'] is False:
return response
data = response['data']
if 'Managers' not in data:
return {'ret': False, 'msg': "Manager resource not found"}
manager = data["Managers"]["@odata.id"]
response = self.get_request(self.root_uri + manager)
if response['ret'] is False:
return response
self.manager_uris = [
i['@odata.id'] for i in response['data'].get('Members', [])]
if not self.manager_uris:
return {'ret': False,
'msg': "Managers Members array is either empty or missing"}
self.manager_uri = self.manager_uris[0]
if self.data_modification:
if self.resource_id:
self.manager_uri = self._get_resource_uri_by_id(self.manager_uris,
self.resource_id)
if not self.manager_uri:
return {
'ret': False,
'msg': "Manager resource %s not found" % self.resource_id}
elif len(self.manager_uris) > 1:
self.module.deprecate(DEPRECATE_MSG % {'resource': 'Manager'},
version='2.13')
return {'ret': True}
def get_logs(self):
log_svcs_uri_list = []
list_of_logs = []
properties = ['Severity', 'Created', 'EntryType', 'OemRecordFormat',
'Message', 'MessageId', 'MessageArgs']
# Find LogService
response = self.get_request(self.root_uri + self.manager_uri)
if response['ret'] is False:
return response
data = response['data']
if 'LogServices' not in data:
return {'ret': False, 'msg': "LogServices resource not found"}
# Find all entries in LogServices
logs_uri = data["LogServices"]["@odata.id"]
response = self.get_request(self.root_uri + logs_uri)
if response['ret'] is False:
return response
data = response['data']
for log_svcs_entry in data.get('Members', []):
response = self.get_request(self.root_uri + log_svcs_entry[u'@odata.id'])
if response['ret'] is False:
return response
_data = response['data']
if 'Entries' in _data:
log_svcs_uri_list.append(_data['Entries'][u'@odata.id'])
# For each entry in LogServices, get log name and all log entries
for log_svcs_uri in log_svcs_uri_list:
logs = {}
list_of_log_entries = []
response = self.get_request(self.root_uri + log_svcs_uri)
if response['ret'] is False:
return response
data = response['data']
logs['Description'] = data.get('Description',
'Collection of log entries')
# Get all log entries for each type of log found
for logEntry in data.get('Members', []):
entry = {}
for prop in properties:
if prop in logEntry:
entry[prop] = logEntry.get(prop)
if entry:
list_of_log_entries.append(entry)
log_name = log_svcs_uri.split('/')[-1]
logs[log_name] = list_of_log_entries
list_of_logs.append(logs)
# list_of_logs[logs{list_of_log_entries[entry{}]}]
return {'ret': True, 'entries': list_of_logs}
def clear_logs(self):
# Find LogService
response = self.get_request(self.root_uri + self.manager_uri)
if response['ret'] is False:
return response
data = response['data']
if 'LogServices' not in data:
return {'ret': False, 'msg': "LogServices resource not found"}
# Find all entries in LogServices
logs_uri = data["LogServices"]["@odata.id"]
response = self.get_request(self.root_uri + logs_uri)
if response['ret'] is False:
return response
data = response['data']
for log_svcs_entry in data[u'Members']:
response = self.get_request(self.root_uri + log_svcs_entry["@odata.id"])
if response['ret'] is False:
return response
_data = response['data']
# Check to make sure option is available, otherwise error is ugly
if "Actions" in _data:
if "#LogService.ClearLog" in _data[u"Actions"]:
                    response = self.post_request(
                        self.root_uri + _data[u"Actions"]["#LogService.ClearLog"]["target"], {})
                    if response['ret'] is False:
                        return response
return {'ret': True}
def aggregate(self, func, uri_list, uri_name):
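        # Apply `func` to each URI in `uri_list` and collect the per-resource
        # entries, tagging each result set with the URI it came from.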
ret = True
entries = []
for uri in uri_list:
inventory = func(uri)
ret = inventory.pop('ret') and ret
if 'entries' in inventory:
entries.append(({uri_name: uri},
inventory['entries']))
return dict(ret=ret, entries=entries)
def aggregate_chassis(self, func):
return self.aggregate(func, self.chassis_uris, 'chassis_uri')
def aggregate_managers(self, func):
return self.aggregate(func, self.manager_uris, 'manager_uri')
def aggregate_systems(self, func):
return self.aggregate(func, self.systems_uris, 'system_uri')
def get_storage_controller_inventory(self, systems_uri):
result = {}
controller_list = []
controller_results = []
# Get these entries, but does not fail if not found
properties = ['CacheSummary', 'FirmwareVersion', 'Identifiers',
'Location', 'Manufacturer', 'Model', 'Name',
'PartNumber', 'SerialNumber', 'SpeedGbps', 'Status']
key = "StorageControllers"
# Find Storage service
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
data = response['data']
if 'Storage' not in data:
return {'ret': False, 'msg': "Storage resource not found"}
# Get a list of all storage controllers and build respective URIs
storage_uri = data['Storage']["@odata.id"]
response = self.get_request(self.root_uri + storage_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
# Loop through Members and their StorageControllers
# and gather properties from each StorageController
if data[u'Members']:
for storage_member in data[u'Members']:
storage_member_uri = storage_member[u'@odata.id']
response = self.get_request(self.root_uri + storage_member_uri)
data = response['data']
if key in data:
controller_list = data[key]
for controller in controller_list:
controller_result = {}
for property in properties:
if property in controller:
controller_result[property] = controller[property]
controller_results.append(controller_result)
result['entries'] = controller_results
return result
else:
return {'ret': False, 'msg': "Storage resource not found"}
def get_multi_storage_controller_inventory(self):
return self.aggregate_systems(self.get_storage_controller_inventory)
def get_disk_inventory(self, systems_uri):
result = {'entries': []}
controller_list = []
# Get these entries, but does not fail if not found
properties = ['BlockSizeBytes', 'CapableSpeedGbs', 'CapacityBytes',
'EncryptionAbility', 'EncryptionStatus',
'FailurePredicted', 'HotspareType', 'Id', 'Identifiers',
'Manufacturer', 'MediaType', 'Model', 'Name',
'PartNumber', 'PhysicalLocation', 'Protocol', 'Revision',
'RotationSpeedRPM', 'SerialNumber', 'Status']
# Find Storage service
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
data = response['data']
if 'SimpleStorage' not in data and 'Storage' not in data:
            return {'ret': False,
                    'msg': "SimpleStorage and Storage resource not found"}
if 'Storage' in data:
# Get a list of all storage controllers and build respective URIs
storage_uri = data[u'Storage'][u'@odata.id']
response = self.get_request(self.root_uri + storage_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if data[u'Members']:
for controller in data[u'Members']:
controller_list.append(controller[u'@odata.id'])
for c in controller_list:
uri = self.root_uri + c
response = self.get_request(uri)
if response['ret'] is False:
return response
data = response['data']
controller_name = 'Controller 1'
if 'StorageControllers' in data:
sc = data['StorageControllers']
if sc:
if 'Name' in sc[0]:
controller_name = sc[0]['Name']
else:
sc_id = sc[0].get('Id', '1')
controller_name = 'Controller %s' % sc_id
drive_results = []
if 'Drives' in data:
for device in data[u'Drives']:
disk_uri = self.root_uri + device[u'@odata.id']
response = self.get_request(disk_uri)
data = response['data']
drive_result = {}
for property in properties:
if property in data:
if data[property] is not None:
drive_result[property] = data[property]
drive_results.append(drive_result)
drives = {'Controller': controller_name,
'Drives': drive_results}
result["entries"].append(drives)
if 'SimpleStorage' in data:
# Get a list of all storage controllers and build respective URIs
storage_uri = data["SimpleStorage"]["@odata.id"]
response = self.get_request(self.root_uri + storage_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for controller in data[u'Members']:
controller_list.append(controller[u'@odata.id'])
for c in controller_list:
uri = self.root_uri + c
response = self.get_request(uri)
if response['ret'] is False:
return response
data = response['data']
if 'Name' in data:
controller_name = data['Name']
else:
sc_id = data.get('Id', '1')
controller_name = 'Controller %s' % sc_id
drive_results = []
for device in data[u'Devices']:
drive_result = {}
for property in properties:
if property in device:
drive_result[property] = device[property]
drive_results.append(drive_result)
drives = {'Controller': controller_name,
'Drives': drive_results}
result["entries"].append(drives)
return result
def get_multi_disk_inventory(self):
return self.aggregate_systems(self.get_disk_inventory)
def get_volume_inventory(self, systems_uri):
result = {'entries': []}
controller_list = []
volume_list = []
# Get these entries, but does not fail if not found
properties = ['Id', 'Name', 'RAIDType', 'VolumeType', 'BlockSizeBytes',
'Capacity', 'CapacityBytes', 'CapacitySources',
'Encrypted', 'EncryptionTypes', 'Identifiers',
'Operations', 'OptimumIOSizeBytes', 'AccessCapabilities',
'AllocatedPools', 'Status']
# Find Storage service
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
data = response['data']
if 'SimpleStorage' not in data and 'Storage' not in data:
            return {'ret': False,
                    'msg': "SimpleStorage and Storage resource not found"}
if 'Storage' in data:
# Get a list of all storage controllers and build respective URIs
storage_uri = data[u'Storage'][u'@odata.id']
response = self.get_request(self.root_uri + storage_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if data.get('Members'):
for controller in data[u'Members']:
controller_list.append(controller[u'@odata.id'])
for c in controller_list:
uri = self.root_uri + c
response = self.get_request(uri)
if response['ret'] is False:
return response
data = response['data']
controller_name = 'Controller 1'
if 'StorageControllers' in data:
sc = data['StorageControllers']
if sc:
if 'Name' in sc[0]:
controller_name = sc[0]['Name']
else:
sc_id = sc[0].get('Id', '1')
controller_name = 'Controller %s' % sc_id
volume_results = []
if 'Volumes' in data:
# Get a list of all volumes and build respective URIs
volumes_uri = data[u'Volumes'][u'@odata.id']
response = self.get_request(self.root_uri + volumes_uri)
data = response['data']
if data.get('Members'):
for volume in data[u'Members']:
volume_list.append(volume[u'@odata.id'])
for v in volume_list:
uri = self.root_uri + v
response = self.get_request(uri)
if response['ret'] is False:
return response
data = response['data']
volume_result = {}
for property in properties:
if property in data:
if data[property] is not None:
volume_result[property] = data[property]
# Get related Drives Id
drive_id_list = []
if 'Links' in data:
if 'Drives' in data[u'Links']:
for link in data[u'Links'][u'Drives']:
drive_id_link = link[u'@odata.id']
drive_id = drive_id_link.split("/")[-1]
drive_id_list.append({'Id': drive_id})
volume_result['Linked_drives'] = drive_id_list
volume_results.append(volume_result)
volumes = {'Controller': controller_name,
'Volumes': volume_results}
result["entries"].append(volumes)
else:
return {'ret': False, 'msg': "Storage resource not found"}
return result
def get_multi_volume_inventory(self):
return self.aggregate_systems(self.get_volume_inventory)
def restart_manager_gracefully(self):
result = {}
key = "Actions"
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + self.manager_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
action_uri = data[key]["#Manager.Reset"]["target"]
payload = {'ResetType': 'GracefulRestart'}
response = self.post_request(self.root_uri + action_uri, payload)
if response['ret'] is False:
return response
return {'ret': True}
def manage_indicator_led(self, command):
result = {}
key = 'IndicatorLED'
payloads = {'IndicatorLedOn': 'Lit', 'IndicatorLedOff': 'Off', "IndicatorLedBlink": 'Blinking'}
for chassis_uri in self.chassis_uris:
response = self.get_request(self.root_uri + chassis_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
if command in payloads.keys():
payload = {'IndicatorLED': payloads[command]}
response = self.patch_request(self.root_uri + chassis_uri, payload)
if response['ret'] is False:
return response
else:
return {'ret': False, 'msg': 'Invalid command'}
return result
def _map_reset_type(self, reset_type, allowable_values):
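        # Map a requested ResetType to a close equivalent that the service
        # supports (e.g. GracefulRestart -> ForceRestart) when the exact
        # value is not among the allowable values.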
equiv_types = {
'On': 'ForceOn',
'ForceOn': 'On',
'ForceOff': 'GracefulShutdown',
'GracefulShutdown': 'ForceOff',
'GracefulRestart': 'ForceRestart',
'ForceRestart': 'GracefulRestart'
}
if reset_type in allowable_values:
return reset_type
if reset_type not in equiv_types:
return reset_type
mapped_type = equiv_types[reset_type]
if mapped_type in allowable_values:
return mapped_type
return reset_type
def manage_system_power(self, command):
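        # Translate a PowerXxx command into a #ComputerSystem.Reset action;
        # no POST is sent if the system is already in the requested state.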
key = "Actions"
reset_type_values = ['On', 'ForceOff', 'GracefulShutdown',
'GracefulRestart', 'ForceRestart', 'Nmi',
'ForceOn', 'PushPowerButton', 'PowerCycle']
# command should be PowerOn, PowerForceOff, etc.
if not command.startswith('Power'):
return {'ret': False, 'msg': 'Invalid Command (%s)' % command}
reset_type = command[5:]
# map Reboot to a ResetType that does a reboot
if reset_type == 'Reboot':
reset_type = 'GracefulRestart'
if reset_type not in reset_type_values:
return {'ret': False, 'msg': 'Invalid Command (%s)' % command}
# read the system resource and get the current power state
response = self.get_request(self.root_uri + self.systems_uri)
if response['ret'] is False:
return response
data = response['data']
power_state = data.get('PowerState')
# if power is already in target state, nothing to do
if power_state == "On" and reset_type in ['On', 'ForceOn']:
return {'ret': True, 'changed': False}
if power_state == "Off" and reset_type in ['GracefulShutdown', 'ForceOff']:
return {'ret': True, 'changed': False}
# get the #ComputerSystem.Reset Action and target URI
if key not in data or '#ComputerSystem.Reset' not in data[key]:
return {'ret': False, 'msg': 'Action #ComputerSystem.Reset not found'}
reset_action = data[key]['#ComputerSystem.Reset']
if 'target' not in reset_action:
return {'ret': False,
'msg': 'target URI missing from Action #ComputerSystem.Reset'}
action_uri = reset_action['target']
# get AllowableValues from ActionInfo
allowable_values = None
if '@Redfish.ActionInfo' in reset_action:
action_info_uri = reset_action.get('@Redfish.ActionInfo')
response = self.get_request(self.root_uri + action_info_uri)
if response['ret'] is True:
data = response['data']
if 'Parameters' in data:
params = data['Parameters']
for param in params:
if param.get('Name') == 'ResetType':
allowable_values = param.get('AllowableValues')
break
# fallback to @Redfish.AllowableValues annotation
if allowable_values is None:
allowable_values = reset_action.get('[email protected]', [])
# map ResetType to an allowable value if needed
if reset_type not in allowable_values:
reset_type = self._map_reset_type(reset_type, allowable_values)
# define payload
payload = {'ResetType': reset_type}
# POST to Action URI
response = self.post_request(self.root_uri + action_uri, payload)
if response['ret'] is False:
return response
return {'ret': True, 'changed': True}
def _find_account_uri(self, username=None, acct_id=None):
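        # Scan the Accounts collection for a member matching the given
        # username or account id and return its URI, data and headers.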
if not any((username, acct_id)):
return {'ret': False, 'msg':
'Must provide either account_id or account_username'}
response = self.get_request(self.root_uri + self.accounts_uri)
if response['ret'] is False:
return response
data = response['data']
uris = [a.get('@odata.id') for a in data.get('Members', []) if
a.get('@odata.id')]
for uri in uris:
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
continue
data = response['data']
headers = response['headers']
if username:
if username == data.get('UserName'):
return {'ret': True, 'data': data,
'headers': headers, 'uri': uri}
if acct_id:
if acct_id == data.get('Id'):
return {'ret': True, 'data': data,
'headers': headers, 'uri': uri}
return {'ret': False, 'no_match': True, 'msg':
'No account with the given account_id or account_username found'}
def _find_empty_account_slot(self):
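        # Find a disabled account entry with an empty UserName that can be
        # reused to create an account on services that do not support POST.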
response = self.get_request(self.root_uri + self.accounts_uri)
if response['ret'] is False:
return response
data = response['data']
uris = [a.get('@odata.id') for a in data.get('Members', []) if
a.get('@odata.id')]
if uris:
# first slot may be reserved, so move to end of list
uris += [uris.pop(0)]
for uri in uris:
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
continue
data = response['data']
headers = response['headers']
if data.get('UserName') == "" and not data.get('Enabled', True):
return {'ret': True, 'data': data,
'headers': headers, 'uri': uri}
return {'ret': False, 'no_match': True, 'msg':
'No empty account slot found'}
def list_users(self):
result = {}
        # listing all users requires one GET per account, so it is slower
        # than most other operations
user_list = []
users_results = []
# Get these entries, but does not fail if not found
properties = ['Id', 'Name', 'UserName', 'RoleId', 'Locked', 'Enabled']
response = self.get_request(self.root_uri + self.accounts_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for users in data.get('Members', []):
user_list.append(users[u'@odata.id']) # user_list[] are URIs
# for each user, get details
for uri in user_list:
user = {}
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
return response
data = response['data']
for property in properties:
if property in data:
user[property] = data[property]
users_results.append(user)
result["entries"] = users_results
return result
def add_user_via_patch(self, user):
if user.get('account_id'):
# If Id slot specified, use it
response = self._find_account_uri(acct_id=user.get('account_id'))
else:
# Otherwise find first empty slot
response = self._find_empty_account_slot()
if not response['ret']:
return response
uri = response['uri']
payload = {}
if user.get('account_username'):
payload['UserName'] = user.get('account_username')
if user.get('account_password'):
payload['Password'] = user.get('account_password')
if user.get('account_roleid'):
payload['RoleId'] = user.get('account_roleid')
response = self.patch_request(self.root_uri + uri, payload)
if response['ret'] is False:
return response
return {'ret': True}
def add_user(self, user):
if not user.get('account_username'):
return {'ret': False, 'msg':
'Must provide account_username for AddUser command'}
response = self._find_account_uri(username=user.get('account_username'))
if response['ret']:
# account_username already exists, nothing to do
return {'ret': True, 'changed': False}
response = self.get_request(self.root_uri + self.accounts_uri)
if not response['ret']:
return response
headers = response['headers']
if 'allow' in headers:
methods = [m.strip() for m in headers.get('allow').split(',')]
if 'POST' not in methods:
# if Allow header present and POST not listed, add via PATCH
return self.add_user_via_patch(user)
payload = {}
if user.get('account_username'):
payload['UserName'] = user.get('account_username')
if user.get('account_password'):
payload['Password'] = user.get('account_password')
if user.get('account_roleid'):
payload['RoleId'] = user.get('account_roleid')
response = self.post_request(self.root_uri + self.accounts_uri, payload)
if not response['ret']:
if response.get('status') == 405:
# if POST returned a 405, try to add via PATCH
return self.add_user_via_patch(user)
else:
return response
return {'ret': True}
def enable_user(self, user):
response = self._find_account_uri(username=user.get('account_username'),
acct_id=user.get('account_id'))
if not response['ret']:
return response
uri = response['uri']
data = response['data']
if data.get('Enabled', True):
# account already enabled, nothing to do
return {'ret': True, 'changed': False}
payload = {'Enabled': True}
response = self.patch_request(self.root_uri + uri, payload)
if response['ret'] is False:
return response
return {'ret': True}
def delete_user_via_patch(self, user, uri=None, data=None):
if not uri:
response = self._find_account_uri(username=user.get('account_username'),
acct_id=user.get('account_id'))
if not response['ret']:
return response
uri = response['uri']
data = response['data']
if data and data.get('UserName') == '' and not data.get('Enabled', False):
# account UserName already cleared, nothing to do
return {'ret': True, 'changed': False}
payload = {'UserName': ''}
if data.get('Enabled', False):
payload['Enabled'] = False
response = self.patch_request(self.root_uri + uri, payload)
if response['ret'] is False:
return response
return {'ret': True}
def delete_user(self, user):
response = self._find_account_uri(username=user.get('account_username'),
acct_id=user.get('account_id'))
if not response['ret']:
if response.get('no_match'):
# account does not exist, nothing to do
return {'ret': True, 'changed': False}
else:
# some error encountered
return response
uri = response['uri']
headers = response['headers']
data = response['data']
if 'allow' in headers:
methods = [m.strip() for m in headers.get('allow').split(',')]
if 'DELETE' not in methods:
# if Allow header present and DELETE not listed, del via PATCH
return self.delete_user_via_patch(user, uri=uri, data=data)
response = self.delete_request(self.root_uri + uri)
if not response['ret']:
if response.get('status') == 405:
# if DELETE returned a 405, try to delete via PATCH
return self.delete_user_via_patch(user, uri=uri, data=data)
else:
return response
return {'ret': True}
def disable_user(self, user):
response = self._find_account_uri(username=user.get('account_username'),
acct_id=user.get('account_id'))
if not response['ret']:
return response
uri = response['uri']
data = response['data']
if not data.get('Enabled'):
# account already disabled, nothing to do
return {'ret': True, 'changed': False}
payload = {'Enabled': False}
response = self.patch_request(self.root_uri + uri, payload)
if response['ret'] is False:
return response
return {'ret': True}
def update_user_role(self, user):
if not user.get('account_roleid'):
return {'ret': False, 'msg':
'Must provide account_roleid for UpdateUserRole command'}
response = self._find_account_uri(username=user.get('account_username'),
acct_id=user.get('account_id'))
if not response['ret']:
return response
uri = response['uri']
data = response['data']
if data.get('RoleId') == user.get('account_roleid'):
# account already has RoleId , nothing to do
return {'ret': True, 'changed': False}
payload = {'RoleId': user.get('account_roleid')}
response = self.patch_request(self.root_uri + uri, payload)
if response['ret'] is False:
return response
return {'ret': True}
def update_user_password(self, user):
response = self._find_account_uri(username=user.get('account_username'),
acct_id=user.get('account_id'))
if not response['ret']:
return response
uri = response['uri']
payload = {'Password': user['account_password']}
response = self.patch_request(self.root_uri + uri, payload)
if response['ret'] is False:
return response
return {'ret': True}
def update_user_name(self, user):
if not user.get('account_updatename'):
return {'ret': False, 'msg':
'Must provide account_updatename for UpdateUserName command'}
response = self._find_account_uri(username=user.get('account_username'),
acct_id=user.get('account_id'))
if not response['ret']:
return response
uri = response['uri']
payload = {'UserName': user['account_updatename']}
response = self.patch_request(self.root_uri + uri, payload)
if response['ret'] is False:
return response
return {'ret': True}
def update_accountservice_properties(self, user):
if user.get('account_properties') is None:
return {'ret': False, 'msg':
'Must provide account_properties for UpdateAccountServiceProperties command'}
account_properties = user.get('account_properties')
# Find AccountService
response = self.get_request(self.root_uri + self.service_root)
if response['ret'] is False:
return response
data = response['data']
if 'AccountService' not in data:
return {'ret': False, 'msg': "AccountService resource not found"}
accountservice_uri = data["AccountService"]["@odata.id"]
# Check support or not
response = self.get_request(self.root_uri + accountservice_uri)
if response['ret'] is False:
return response
data = response['data']
for property_name in account_properties.keys():
if property_name not in data:
return {'ret': False, 'msg':
'property %s not supported' % property_name}
# if properties is already matched, nothing to do
need_change = False
for property_name in account_properties.keys():
if account_properties[property_name] != data[property_name]:
need_change = True
break
if not need_change:
return {'ret': True, 'changed': False, 'msg': "AccountService properties already set"}
payload = account_properties
response = self.patch_request(self.root_uri + accountservice_uri, payload)
if response['ret'] is False:
return response
return {'ret': True, 'changed': True, 'msg': "Modified AccountService properties"}
def get_sessions(self):
result = {}
        # listing all sessions requires one GET per session, so it is slower
        # than most other operations
session_list = []
sessions_results = []
# Get these entries, but does not fail if not found
properties = ['Description', 'Id', 'Name', 'UserName']
response = self.get_request(self.root_uri + self.sessions_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for sessions in data[u'Members']:
session_list.append(sessions[u'@odata.id']) # session_list[] are URIs
# for each session, get details
for uri in session_list:
session = {}
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
return response
data = response['data']
for property in properties:
if property in data:
session[property] = data[property]
sessions_results.append(session)
result["entries"] = sessions_results
return result
def get_firmware_update_capabilities(self):
result = {}
response = self.get_request(self.root_uri + self.update_uri)
if response['ret'] is False:
return response
result['ret'] = True
result['entries'] = {}
data = response['data']
if "Actions" in data:
actions = data['Actions']
if len(actions) > 0:
for key in actions.keys():
action = actions.get(key)
if 'title' in action:
title = action['title']
else:
title = key
result['entries'][title] = action.get('[email protected]',
["Key [email protected] not found"])
            else:
                return {'ret': False, 'msg': "Actions list is empty."}
        else:
            return {'ret': False, 'msg': "Key Actions not found."}
return result
def _software_inventory(self, uri):
result = {}
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
result['entries'] = []
for member in data[u'Members']:
uri = self.root_uri + member[u'@odata.id']
# Get details for each software or firmware member
response = self.get_request(uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
software = {}
# Get these standard properties if present
for key in ['Name', 'Id', 'Status', 'Version', 'Updateable',
'SoftwareId', 'LowestSupportedVersion', 'Manufacturer',
'ReleaseDate']:
if key in data:
software[key] = data.get(key)
result['entries'].append(software)
return result
def get_firmware_inventory(self):
if self.firmware_uri is None:
return {'ret': False, 'msg': 'No FirmwareInventory resource found'}
else:
return self._software_inventory(self.firmware_uri)
def get_software_inventory(self):
if self.software_uri is None:
return {'ret': False, 'msg': 'No SoftwareInventory resource found'}
else:
return self._software_inventory(self.software_uri)
def get_bios_attributes(self, systems_uri):
result = {}
bios_attributes = {}
key = "Bios"
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
bios_uri = data[key]["@odata.id"]
response = self.get_request(self.root_uri + bios_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
        for attribute_name, attribute_value in data[u'Attributes'].items():
            bios_attributes[attribute_name] = attribute_value
result["entries"] = bios_attributes
return result
def get_multi_bios_attributes(self):
return self.aggregate_systems(self.get_bios_attributes)
def _get_boot_options_dict(self, boot):
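        # Build a dict keyed by BootOptionReference from the system's
        # BootOptions collection, so boot order entries can be annotated.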
# Get these entries from BootOption, if present
properties = ['DisplayName', 'BootOptionReference']
# Retrieve BootOptions if present
if 'BootOptions' in boot and '@odata.id' in boot['BootOptions']:
boot_options_uri = boot['BootOptions']["@odata.id"]
# Get BootOptions resource
response = self.get_request(self.root_uri + boot_options_uri)
if response['ret'] is False:
return {}
data = response['data']
# Retrieve Members array
if 'Members' not in data:
return {}
members = data['Members']
else:
members = []
# Build dict of BootOptions keyed by BootOptionReference
boot_options_dict = {}
for member in members:
if '@odata.id' not in member:
return {}
boot_option_uri = member['@odata.id']
response = self.get_request(self.root_uri + boot_option_uri)
if response['ret'] is False:
return {}
data = response['data']
if 'BootOptionReference' not in data:
return {}
boot_option_ref = data['BootOptionReference']
# fetch the props to display for this boot device
boot_props = {}
for prop in properties:
if prop in data:
boot_props[prop] = data[prop]
boot_options_dict[boot_option_ref] = boot_props
return boot_options_dict
def get_boot_order(self, systems_uri):
result = {}
# Retrieve System resource
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
# Confirm needed Boot properties are present
if 'Boot' not in data or 'BootOrder' not in data['Boot']:
return {'ret': False, 'msg': "Key BootOrder not found"}
boot = data['Boot']
boot_order = boot['BootOrder']
boot_options_dict = self._get_boot_options_dict(boot)
# Build boot device list
boot_device_list = []
for ref in boot_order:
boot_device_list.append(
boot_options_dict.get(ref, {'BootOptionReference': ref}))
result["entries"] = boot_device_list
return result
def get_multi_boot_order(self):
return self.aggregate_systems(self.get_boot_order)
def get_boot_override(self, systems_uri):
result = {}
properties = ["BootSourceOverrideEnabled", "BootSourceOverrideTarget",
"BootSourceOverrideMode", "UefiTargetBootSourceOverride", "[email protected]"]
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if 'Boot' not in data:
return {'ret': False, 'msg': "Key Boot not found"}
boot = data['Boot']
boot_overrides = {}
if "BootSourceOverrideEnabled" in boot:
if boot["BootSourceOverrideEnabled"] is not False:
for property in properties:
if property in boot:
if boot[property] is not None:
boot_overrides[property] = boot[property]
else:
return {'ret': False, 'msg': "No boot override is enabled."}
result['entries'] = boot_overrides
return result
def get_multi_boot_override(self):
return self.aggregate_systems(self.get_boot_override)
def set_bios_default_settings(self):
result = {}
key = "Bios"
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + self.systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
bios_uri = data[key]["@odata.id"]
# Extract proper URI
response = self.get_request(self.root_uri + bios_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
reset_bios_settings_uri = data["Actions"]["#Bios.ResetBios"]["target"]
response = self.post_request(self.root_uri + reset_bios_settings_uri, {})
if response['ret'] is False:
return response
return {'ret': True, 'changed': True, 'msg': "Set BIOS to default settings"}
def set_one_time_boot_device(self, bootdevice, uefi_target, boot_next):
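        # Request a one-time (BootSourceOverrideEnabled=Once) boot override,
        # validating the device against [email protected]
        # when the annotation is present.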
result = {}
key = "Boot"
if not bootdevice:
return {'ret': False,
'msg': "bootdevice option required for SetOneTimeBoot"}
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + self.systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
boot = data[key]
annotation = '[email protected]'
if annotation in boot:
allowable_values = boot[annotation]
if isinstance(allowable_values, list) and bootdevice not in allowable_values:
return {'ret': False,
'msg': "Boot device %s not in list of allowable values (%s)" %
(bootdevice, allowable_values)}
# read existing values
enabled = boot.get('BootSourceOverrideEnabled')
target = boot.get('BootSourceOverrideTarget')
cur_uefi_target = boot.get('UefiTargetBootSourceOverride')
cur_boot_next = boot.get('BootNext')
if bootdevice == 'UefiTarget':
if not uefi_target:
return {'ret': False,
'msg': "uefi_target option required to SetOneTimeBoot for UefiTarget"}
if enabled == 'Once' and target == bootdevice and uefi_target == cur_uefi_target:
# If properties are already set, no changes needed
return {'ret': True, 'changed': False}
payload = {
'Boot': {
'BootSourceOverrideEnabled': 'Once',
'BootSourceOverrideTarget': bootdevice,
'UefiTargetBootSourceOverride': uefi_target
}
}
elif bootdevice == 'UefiBootNext':
if not boot_next:
return {'ret': False,
'msg': "boot_next option required to SetOneTimeBoot for UefiBootNext"}
if enabled == 'Once' and target == bootdevice and boot_next == cur_boot_next:
# If properties are already set, no changes needed
return {'ret': True, 'changed': False}
payload = {
'Boot': {
'BootSourceOverrideEnabled': 'Once',
'BootSourceOverrideTarget': bootdevice,
'BootNext': boot_next
}
}
else:
if enabled == 'Once' and target == bootdevice:
# If properties are already set, no changes needed
return {'ret': True, 'changed': False}
payload = {
'Boot': {
'BootSourceOverrideEnabled': 'Once',
'BootSourceOverrideTarget': bootdevice
}
}
response = self.patch_request(self.root_uri + self.systems_uri, payload)
if response['ret'] is False:
return response
return {'ret': True, 'changed': True}
def set_bios_attributes(self, attributes):
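        # PATCH the pending settings resource (@Redfish.Settings) with only
        # those attributes whose current values differ from the requested ones.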
result = {}
key = "Bios"
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + self.systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
bios_uri = data[key]["@odata.id"]
# Extract proper URI
response = self.get_request(self.root_uri + bios_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
# Make a copy of the attributes dict
attrs_to_patch = dict(attributes)
# Check the attributes
for attr in attributes:
if attr not in data[u'Attributes']:
return {'ret': False, 'msg': "BIOS attribute %s not found" % attr}
# If already set to requested value, remove it from PATCH payload
if data[u'Attributes'][attr] == attributes[attr]:
del attrs_to_patch[attr]
# Return success w/ changed=False if no attrs need to be changed
if not attrs_to_patch:
return {'ret': True, 'changed': False,
'msg': "BIOS attributes already set"}
# Get the SettingsObject URI
set_bios_attr_uri = data["@Redfish.Settings"]["SettingsObject"]["@odata.id"]
# Construct payload and issue PATCH command
payload = {"Attributes": attrs_to_patch}
response = self.patch_request(self.root_uri + set_bios_attr_uri, payload)
if response['ret'] is False:
return response
return {'ret': True, 'changed': True, 'msg': "Modified BIOS attribute"}
def set_boot_order(self, boot_list):
if not boot_list:
return {'ret': False,
'msg': "boot_order list required for SetBootOrder command"}
systems_uri = self.systems_uri
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
data = response['data']
# Confirm needed Boot properties are present
if 'Boot' not in data or 'BootOrder' not in data['Boot']:
return {'ret': False, 'msg': "Key BootOrder not found"}
boot = data['Boot']
boot_order = boot['BootOrder']
boot_options_dict = self._get_boot_options_dict(boot)
# validate boot_list against BootOptionReferences if available
if boot_options_dict:
boot_option_references = boot_options_dict.keys()
for ref in boot_list:
if ref not in boot_option_references:
return {'ret': False,
'msg': "BootOptionReference %s not found in BootOptions" % ref}
# If requested BootOrder is already set, nothing to do
if boot_order == boot_list:
return {'ret': True, 'changed': False,
'msg': "BootOrder already set to %s" % boot_list}
payload = {
'Boot': {
'BootOrder': boot_list
}
}
response = self.patch_request(self.root_uri + systems_uri, payload)
if response['ret'] is False:
return response
return {'ret': True, 'changed': True, 'msg': "BootOrder set"}
def set_default_boot_order(self):
systems_uri = self.systems_uri
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
data = response['data']
# get the #ComputerSystem.SetDefaultBootOrder Action and target URI
action = '#ComputerSystem.SetDefaultBootOrder'
if 'Actions' not in data or action not in data['Actions']:
return {'ret': False, 'msg': 'Action %s not found' % action}
if 'target' not in data['Actions'][action]:
return {'ret': False,
'msg': 'target URI missing from Action %s' % action}
action_uri = data['Actions'][action]['target']
# POST to Action URI
payload = {}
response = self.post_request(self.root_uri + action_uri, payload)
if response['ret'] is False:
return response
return {'ret': True, 'changed': True,
'msg': "BootOrder set to default"}
def get_chassis_inventory(self):
result = {}
chassis_results = []
# Get these entries, but does not fail if not found
properties = ['ChassisType', 'PartNumber', 'AssetTag',
'Manufacturer', 'IndicatorLED', 'SerialNumber', 'Model']
# Go through list
for chassis_uri in self.chassis_uris:
response = self.get_request(self.root_uri + chassis_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
chassis_result = {}
for property in properties:
if property in data:
chassis_result[property] = data[property]
chassis_results.append(chassis_result)
result["entries"] = chassis_results
return result
def get_fan_inventory(self):
result = {}
fan_results = []
key = "Thermal"
# Get these entries, but does not fail if not found
properties = ['FanName', 'Reading', 'ReadingUnits', 'Status']
# Go through list
for chassis_uri in self.chassis_uris:
response = self.get_request(self.root_uri + chassis_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key in data:
# match: found an entry for "Thermal" information = fans
thermal_uri = data[key]["@odata.id"]
response = self.get_request(self.root_uri + thermal_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for device in data[u'Fans']:
fan = {}
for property in properties:
if property in device:
fan[property] = device[property]
fan_results.append(fan)
result["entries"] = fan_results
return result
def get_chassis_power(self):
result = {}
key = "Power"
# Get these entries, but does not fail if not found
properties = ['Name', 'PowerAllocatedWatts',
'PowerAvailableWatts', 'PowerCapacityWatts',
'PowerConsumedWatts', 'PowerMetrics',
'PowerRequestedWatts', 'RelatedItem', 'Status']
chassis_power_results = []
# Go through list
for chassis_uri in self.chassis_uris:
chassis_power_result = {}
response = self.get_request(self.root_uri + chassis_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key in data:
response = self.get_request(self.root_uri + data[key]['@odata.id'])
data = response['data']
if 'PowerControl' in data:
if len(data['PowerControl']) > 0:
data = data['PowerControl'][0]
for property in properties:
if property in data:
chassis_power_result[property] = data[property]
else:
return {'ret': False, 'msg': 'Key PowerControl not found.'}
chassis_power_results.append(chassis_power_result)
else:
return {'ret': False, 'msg': 'Key Power not found.'}
result['entries'] = chassis_power_results
return result
def get_chassis_thermals(self):
result = {}
sensors = []
key = "Thermal"
# Get these entries, but does not fail if not found
properties = ['Name', 'PhysicalContext', 'UpperThresholdCritical',
'UpperThresholdFatal', 'UpperThresholdNonCritical',
'LowerThresholdCritical', 'LowerThresholdFatal',
'LowerThresholdNonCritical', 'MaxReadingRangeTemp',
'MinReadingRangeTemp', 'ReadingCelsius', 'RelatedItem',
'SensorNumber']
# Go through list
for chassis_uri in self.chassis_uris:
response = self.get_request(self.root_uri + chassis_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key in data:
thermal_uri = data[key]["@odata.id"]
response = self.get_request(self.root_uri + thermal_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if "Temperatures" in data:
for sensor in data[u'Temperatures']:
sensor_result = {}
for property in properties:
if property in sensor:
if sensor[property] is not None:
sensor_result[property] = sensor[property]
sensors.append(sensor_result)
        if not sensors:
return {'ret': False, 'msg': 'Key Temperatures was not found.'}
result['entries'] = sensors
return result
def get_cpu_inventory(self, systems_uri):
result = {}
cpu_list = []
cpu_results = []
key = "Processors"
# Get these entries, but does not fail if not found
properties = ['Id', 'Manufacturer', 'Model', 'MaxSpeedMHz', 'TotalCores',
'TotalThreads', 'Status']
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
processors_uri = data[key]["@odata.id"]
# Get a list of all CPUs and build respective URIs
response = self.get_request(self.root_uri + processors_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for cpu in data[u'Members']:
cpu_list.append(cpu[u'@odata.id'])
for c in cpu_list:
cpu = {}
uri = self.root_uri + c
response = self.get_request(uri)
if response['ret'] is False:
return response
data = response['data']
for property in properties:
if property in data:
cpu[property] = data[property]
cpu_results.append(cpu)
result["entries"] = cpu_results
return result
def get_multi_cpu_inventory(self):
return self.aggregate_systems(self.get_cpu_inventory)
def get_memory_inventory(self, systems_uri):
result = {}
memory_list = []
memory_results = []
key = "Memory"
# Get these entries, but does not fail if not found
        properties = ['SerialNumber', 'MemoryDeviceType', 'PartNumber',
'MemoryLocation', 'RankCount', 'CapacityMiB', 'OperatingMemoryModes', 'Status', 'Manufacturer', 'Name']
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
memory_uri = data[key]["@odata.id"]
# Get a list of all DIMMs and build respective URIs
response = self.get_request(self.root_uri + memory_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for dimm in data[u'Members']:
memory_list.append(dimm[u'@odata.id'])
for m in memory_list:
dimm = {}
uri = self.root_uri + m
response = self.get_request(uri)
if response['ret'] is False:
return response
data = response['data']
if "Status" in data:
if "State" in data["Status"]:
if data["Status"]["State"] == "Absent":
continue
else:
continue
for property in properties:
if property in data:
dimm[property] = data[property]
memory_results.append(dimm)
result["entries"] = memory_results
return result
def get_multi_memory_inventory(self):
return self.aggregate_systems(self.get_memory_inventory)
def get_nic_inventory(self, resource_uri):
result = {}
nic_list = []
nic_results = []
key = "EthernetInterfaces"
# Get these entries, but does not fail if not found
properties = ['Description', 'FQDN', 'IPv4Addresses', 'IPv6Addresses',
'NameServers', 'MACAddress', 'PermanentMACAddress',
'SpeedMbps', 'MTUSize', 'AutoNeg', 'Status']
response = self.get_request(self.root_uri + resource_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
ethernetinterfaces_uri = data[key]["@odata.id"]
# Get a list of all network controllers and build respective URIs
response = self.get_request(self.root_uri + ethernetinterfaces_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for nic in data[u'Members']:
nic_list.append(nic[u'@odata.id'])
for n in nic_list:
nic = {}
uri = self.root_uri + n
response = self.get_request(uri)
if response['ret'] is False:
return response
data = response['data']
for property in properties:
if property in data:
nic[property] = data[property]
nic_results.append(nic)
result["entries"] = nic_results
return result
def get_multi_nic_inventory(self, resource_type):
ret = True
entries = []
# Given resource_type, use the proper URI
        if resource_type == 'Systems':
            resource_uris = self.systems_uris
        elif resource_type == 'Manager':
            resource_uris = self.manager_uris
        else:
            return dict(ret=False,
                        msg="Invalid resource_type: %s" % resource_type)
for resource_uri in resource_uris:
inventory = self.get_nic_inventory(resource_uri)
ret = inventory.pop('ret') and ret
if 'entries' in inventory:
entries.append(({'resource_uri': resource_uri},
inventory['entries']))
return dict(ret=ret, entries=entries)
def get_virtualmedia(self, resource_uri):
result = {}
virtualmedia_list = []
virtualmedia_results = []
key = "VirtualMedia"
# Get these entries, but does not fail if not found
properties = ['Description', 'ConnectedVia', 'Id', 'MediaTypes',
'Image', 'ImageName', 'Name', 'WriteProtected',
'TransferMethod', 'TransferProtocolType']
response = self.get_request(self.root_uri + resource_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
virtualmedia_uri = data[key]["@odata.id"]
# Get a list of all virtual media and build respective URIs
response = self.get_request(self.root_uri + virtualmedia_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for virtualmedia in data[u'Members']:
virtualmedia_list.append(virtualmedia[u'@odata.id'])
for n in virtualmedia_list:
virtualmedia = {}
uri = self.root_uri + n
response = self.get_request(uri)
if response['ret'] is False:
return response
data = response['data']
for property in properties:
if property in data:
virtualmedia[property] = data[property]
virtualmedia_results.append(virtualmedia)
result["entries"] = virtualmedia_results
return result
def get_multi_virtualmedia(self):
ret = True
entries = []
resource_uris = self.manager_uris
for resource_uri in resource_uris:
virtualmedia = self.get_virtualmedia(resource_uri)
ret = virtualmedia.pop('ret') and ret
if 'entries' in virtualmedia:
entries.append(({'resource_uri': resource_uri},
virtualmedia['entries']))
return dict(ret=ret, entries=entries)
    def get_psu_inventory(self, systems_uri=None):
        # systems_uri is accepted (and ignored) so this method can be called
        # via aggregate_systems(); power supplies are found via the Chassis
        # collection rather than a specific System resource
result = {}
psu_list = []
psu_results = []
key = "PowerSupplies"
# Get these entries, but does not fail if not found
properties = ['Name', 'Model', 'SerialNumber', 'PartNumber', 'Manufacturer',
'FirmwareVersion', 'PowerCapacityWatts', 'PowerSupplyType',
'Status']
# Get a list of all Chassis and build URIs, then get all PowerSupplies
# from each Power entry in the Chassis
chassis_uri_list = self.chassis_uris
for chassis_uri in chassis_uri_list:
response = self.get_request(self.root_uri + chassis_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if 'Power' in data:
power_uri = data[u'Power'][u'@odata.id']
else:
continue
response = self.get_request(self.root_uri + power_uri)
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
psu_list = data[key]
for psu in psu_list:
psu_not_present = False
psu_data = {}
for property in properties:
if property in psu:
if psu[property] is not None:
if property == 'Status':
if 'State' in psu[property]:
if psu[property]['State'] == 'Absent':
psu_not_present = True
psu_data[property] = psu[property]
if psu_not_present:
continue
psu_results.append(psu_data)
result["entries"] = psu_results
if not result["entries"]:
return {'ret': False, 'msg': "No PowerSupply objects found"}
return result
def get_multi_psu_inventory(self):
return self.aggregate_systems(self.get_psu_inventory)
def get_system_inventory(self, systems_uri):
result = {}
inventory = {}
# Get these entries, but does not fail if not found
properties = ['Status', 'HostName', 'PowerState', 'Model', 'Manufacturer',
'PartNumber', 'SystemType', 'AssetTag', 'ServiceTag',
'SerialNumber', 'SKU', 'BiosVersion', 'MemorySummary',
'ProcessorSummary', 'TrustedModules']
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for property in properties:
if property in data:
inventory[property] = data[property]
result["entries"] = inventory
return result
def get_multi_system_inventory(self):
return self.aggregate_systems(self.get_system_inventory)
def get_network_protocols(self):
result = {}
service_result = {}
# Find NetworkProtocol
response = self.get_request(self.root_uri + self.manager_uri)
if response['ret'] is False:
return response
data = response['data']
if 'NetworkProtocol' not in data:
return {'ret': False, 'msg': "NetworkProtocol resource not found"}
networkprotocol_uri = data["NetworkProtocol"]["@odata.id"]
response = self.get_request(self.root_uri + networkprotocol_uri)
if response['ret'] is False:
return response
data = response['data']
protocol_services = ['SNMP', 'VirtualMedia', 'Telnet', 'SSDP', 'IPMI', 'SSH',
'KVMIP', 'NTP', 'HTTP', 'HTTPS', 'DHCP', 'DHCPv6', 'RDP',
'RFB']
for protocol_service in protocol_services:
if protocol_service in data.keys():
service_result[protocol_service] = data[protocol_service]
result['ret'] = True
result["entries"] = service_result
return result
def set_network_protocols(self, manager_services):
# Check input data validity
protocol_services = ['SNMP', 'VirtualMedia', 'Telnet', 'SSDP', 'IPMI', 'SSH',
'KVMIP', 'NTP', 'HTTP', 'HTTPS', 'DHCP', 'DHCPv6', 'RDP',
'RFB']
protocol_state_onlist = ['true', 'True', True, 'on', 1]
protocol_state_offlist = ['false', 'False', False, 'off', 0]
payload = {}
for service_name in manager_services.keys():
if service_name not in protocol_services:
return {'ret': False, 'msg': "Service name %s is invalid" % service_name}
payload[service_name] = {}
for service_property in manager_services[service_name].keys():
value = manager_services[service_name][service_property]
if service_property in ['ProtocolEnabled', 'protocolenabled']:
if value in protocol_state_onlist:
payload[service_name]['ProtocolEnabled'] = True
elif value in protocol_state_offlist:
payload[service_name]['ProtocolEnabled'] = False
else:
return {'ret': False, 'msg': "Value of property %s is invalid" % service_property}
elif service_property in ['port', 'Port']:
if isinstance(value, int):
payload[service_name]['Port'] = value
elif isinstance(value, str) and value.isdigit():
payload[service_name]['Port'] = int(value)
else:
return {'ret': False, 'msg': "Value of property %s is invalid" % service_property}
else:
payload[service_name][service_property] = value
# Find NetworkProtocol
response = self.get_request(self.root_uri + self.manager_uri)
if response['ret'] is False:
return response
data = response['data']
if 'NetworkProtocol' not in data:
return {'ret': False, 'msg': "NetworkProtocol resource not found"}
networkprotocol_uri = data["NetworkProtocol"]["@odata.id"]
# Check service property support or not
response = self.get_request(self.root_uri + networkprotocol_uri)
if response['ret'] is False:
return response
data = response['data']
for service_name in payload.keys():
if service_name not in data:
return {'ret': False, 'msg': "%s service not supported" % service_name}
for service_property in payload[service_name].keys():
if service_property not in data[service_name]:
return {'ret': False, 'msg': "%s property for %s service not supported" % (service_property, service_name)}
# if the protocol is already set, nothing to do
need_change = False
for service_name in payload.keys():
for service_property in payload[service_name].keys():
value = payload[service_name][service_property]
if value != data[service_name][service_property]:
need_change = True
break
if not need_change:
return {'ret': True, 'changed': False, 'msg': "Manager NetworkProtocol services already set"}
response = self.patch_request(self.root_uri + networkprotocol_uri, payload)
if response['ret'] is False:
return response
return {'ret': True, 'changed': True, 'msg': "Modified Manager NetworkProtocol services"}
@staticmethod
def to_singular(resource_name):
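        # Naive English singularization used to derive per-entry key names
        # (e.g. 'Fans' -> 'Fan', 'PCIeDevices' -> 'PCIeDevice').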
if resource_name.endswith('ies'):
resource_name = resource_name[:-3] + 'y'
elif resource_name.endswith('s'):
resource_name = resource_name[:-1]
return resource_name
def get_health_resource(self, subsystem, uri, health, expanded):
status = 'Status'
if expanded:
d = expanded
else:
r = self.get_request(self.root_uri + uri)
if r.get('ret'):
d = r.get('data')
else:
return
if 'Members' in d: # collections case
for m in d.get('Members'):
u = m.get('@odata.id')
r = self.get_request(self.root_uri + u)
if r.get('ret'):
p = r.get('data')
if p:
e = {self.to_singular(subsystem.lower()) + '_uri': u,
status: p.get(status,
"Status not available")}
health[subsystem].append(e)
else: # non-collections case
e = {self.to_singular(subsystem.lower()) + '_uri': uri,
status: d.get(status,
"Status not available")}
health[subsystem].append(e)
def get_health_subsystem(self, subsystem, data, health):
if subsystem in data:
sub = data.get(subsystem)
if isinstance(sub, list):
for r in sub:
if '@odata.id' in r:
uri = r.get('@odata.id')
expanded = None
if '#' in uri and len(r) > 1:
expanded = r
self.get_health_resource(subsystem, uri, health, expanded)
elif isinstance(sub, dict):
if '@odata.id' in sub:
uri = sub.get('@odata.id')
self.get_health_resource(subsystem, uri, health, None)
elif 'Members' in data:
for m in data.get('Members'):
u = m.get('@odata.id')
r = self.get_request(self.root_uri + u)
if r.get('ret'):
d = r.get('data')
self.get_health_subsystem(subsystem, d, health)
def get_health_report(self, category, uri, subsystems):
result = {}
health = {}
status = 'Status'
# Get health status of top level resource
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
health[category] = {status: data.get(status, "Status not available")}
# Get health status of subsystems
for sub in subsystems:
d = None
if sub.startswith('Links.'): # ex: Links.PCIeDevices
sub = sub[len('Links.'):]
d = data.get('Links', {})
elif '.' in sub: # ex: Thermal.Fans
p, sub = sub.split('.')
u = data.get(p, {}).get('@odata.id')
if u:
r = self.get_request(self.root_uri + u)
if r['ret']:
d = r['data']
if not d:
continue
else: # ex: Memory
d = data
health[sub] = []
self.get_health_subsystem(sub, d, health)
if not health[sub]:
del health[sub]
result["entries"] = health
return result
def get_system_health_report(self, systems_uri):
subsystems = ['Processors', 'Memory', 'SimpleStorage', 'Storage',
'EthernetInterfaces', 'NetworkInterfaces.NetworkPorts',
'NetworkInterfaces.NetworkDeviceFunctions']
return self.get_health_report('System', systems_uri, subsystems)
def get_multi_system_health_report(self):
return self.aggregate_systems(self.get_system_health_report)
def get_chassis_health_report(self, chassis_uri):
subsystems = ['Power.PowerSupplies', 'Thermal.Fans',
'Links.PCIeDevices']
return self.get_health_report('Chassis', chassis_uri, subsystems)
def get_multi_chassis_health_report(self):
return self.aggregate_chassis(self.get_chassis_health_report)
def get_manager_health_report(self, manager_uri):
subsystems = []
return self.get_health_report('Manager', manager_uri, subsystems)
def get_multi_manager_health_report(self):
return self.aggregate_managers(self.get_manager_health_report)
def set_manager_nic(self, nic_addr, nic_config):
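        # Locate the manager EthernetInterface matching nic_addr (or the one
        # behind root_uri when nic_addr is 'null') and PATCH the requested
        # settings, skipping the request when nothing would change.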
# Get EthernetInterface collection
response = self.get_request(self.root_uri + self.manager_uri)
if response['ret'] is False:
return response
data = response['data']
if 'EthernetInterfaces' not in data:
return {'ret': False, 'msg': "EthernetInterfaces resource not found"}
ethernetinterfaces_uri = data["EthernetInterfaces"]["@odata.id"]
response = self.get_request(self.root_uri + ethernetinterfaces_uri)
if response['ret'] is False:
return response
data = response['data']
uris = [a.get('@odata.id') for a in data.get('Members', []) if
a.get('@odata.id')]
# Find target EthernetInterface
target_ethernet_uri = None
target_ethernet_current_setting = None
if nic_addr == 'null':
# Find root_uri matched EthernetInterface when nic_addr is not specified
nic_addr = (self.root_uri).split('/')[-1]
nic_addr = nic_addr.split(':')[0] # split port if existing
for uri in uris:
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
return response
data = response['data']
if '"' + nic_addr + '"' in str(data) or "'" + nic_addr + "'" in str(data):
target_ethernet_uri = uri
target_ethernet_current_setting = data
break
if target_ethernet_uri is None:
return {'ret': False, 'msg': "No matched EthernetInterface found under Manager"}
# Convert input to payload and check validity
payload = {}
for property in nic_config.keys():
value = nic_config[property]
if property not in target_ethernet_current_setting:
return {'ret': False, 'msg': "Property %s in nic_config is invalid" % property}
if isinstance(value, dict):
if isinstance(target_ethernet_current_setting[property], dict):
payload[property] = value
elif isinstance(target_ethernet_current_setting[property], list):
payload[property] = list()
payload[property].append(value)
else:
return {'ret': False, 'msg': "Value of property %s in nic_config is invalid" % property}
else:
payload[property] = value
# If no need change, nothing to do. If error detected, report it
need_change = False
for property in payload.keys():
set_value = payload[property]
cur_value = target_ethernet_current_setting[property]
# type is simple(not dict/list)
if not isinstance(set_value, dict) and not isinstance(set_value, list):
if set_value != cur_value:
need_change = True
# type is dict
if isinstance(set_value, dict):
for subprop in payload[property].keys():
if subprop not in target_ethernet_current_setting[property]:
return {'ret': False, 'msg': "Sub-property %s in nic_config is invalid" % subprop}
sub_set_value = payload[property][subprop]
sub_cur_value = target_ethernet_current_setting[property][subprop]
if sub_set_value != sub_cur_value:
need_change = True
# type is list
if isinstance(set_value, list):
for i in range(len(set_value)):
for subprop in payload[property][i].keys():
if subprop not in target_ethernet_current_setting[property][i]:
return {'ret': False, 'msg': "Sub-property %s in nic_config is invalid" % subprop}
sub_set_value = payload[property][i][subprop]
sub_cur_value = target_ethernet_current_setting[property][i][subprop]
if sub_set_value != sub_cur_value:
need_change = True
if not need_change:
return {'ret': True, 'changed': False, 'msg': "Manager NIC already set"}
response = self.patch_request(self.root_uri + target_ethernet_uri, payload)
if response['ret'] is False:
return response
return {'ret': True, 'changed': True, 'msg': "Modified Manager NIC"}
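# Illustrative sketch (editor's assumption, not from the original source): a
# plausible nic_config for set_manager_nic() mirrors the EthernetInterface
# schema, e.g.
#
#     nic_config = {
#         'DHCPv4': {'DHCPEnabled': False},
#         'IPv4StaticAddresses': {'Address': '192.0.2.10',
#                                 'SubnetMask': '255.255.255.0',
#                                 'Gateway': '192.0.2.1'},
#     }
#     result = rf_utils.set_manager_nic('null', nic_config)
#
# Dict values whose current setting is a list (such as static addresses) are
# wrapped in a single-element list by the conversion logic above.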
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,179 |
ansible_command|connect_timeout vs httpapi timeout
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
The httpapi timeout is hardcoded to 120 seconds in the DOCUMENTATION area of plugins/connection/httpapi.py and cannot be set dynamically to higher values. Setting ansible_command_timeout and ansible_connect_timeout to values higher than 120 still raises an `ansible.module_utils.connection.ConnectionError: timed out` error after 120 seconds.
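For reference, the relevant option block in `plugins/connection/httpapi.py` (quoted from the plugin's DOCUMENTATION string, reproduced in full later in this document) is:
```yaml
timeout:
  type: int
  description:
    - Sets the connection time, in seconds, for communicating with the
      remote device.
  default: 120
```
Unlike `persistent_command_timeout` and `persistent_connect_timeout`, this option defines no `env`, `ini` or `vars` entries, so its 120-second default cannot be overridden from the inventory or configuration.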
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
httpapi.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.1
config file = /rnx/config/ansible/xlab/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.8 (default, Oct 7 2019, 12:59:55) [GCC 8.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
root@localhost:/rnx/config/ansible/xlab# ansible-config dump --only-changed
CACHE_PLUGIN(/rnx/config/ansible/xlab/ansible.cfg) = redis
CACHE_PLUGIN_CONNECTION(/rnx/config/ansible/xlab/ansible.cfg) = <xxxx>:6379:5
CACHE_PLUGIN_PREFIX(/rnx/config/ansible/xlab/ansible.cfg) = ansible_facts_
PERSISTENT_COMMAND_TIMEOUT(/rnx/config/ansible/xlab/ansible.cfg) = 500
PERSISTENT_CONNECT_TIMEOUT(/rnx/config/ansible/xlab/ansible.cfg) = 600
ansible_connection: httpapi
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Software
BIOS: version 05.33
NXOS: version 9.3(1)
BIOS compile time: 09/08/2018
NXOS image file is: bootflash:///nxos.9.3.1.bin
NXOS compile time: 7/18/2019 15:00:00 [07/19/2019 00:04:48]
Hardware
cisco Nexus9000 C93180YC-FX Chassis
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
date; ansible-playbook -vvvv -i hosts.yml <example-playbook-below.yml>; date
If you use 'sleep 115' it works.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts:
mynxosdevice
check_mode: no
gather_facts: no
tasks:
- nxos_command:
commands:
- sleep 125
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
```paste below
ok: [mynxosdevice] => {
"changed": false,
"invocation": {
"module_args": {
"auth_pass": null,
"authorize": null,
"commands": [
"sleep 125"
],
"host": null,
"interval": 1,
"match": "all",
"password": null,
"port": null,
"provider": null,
"retries": 10,
"ssh_keyfile": null,
"timeout": null,
"transport": null,
"use_ssl": null,
"username": null,
"validate_certs": null,
"wait_for": null
}
},
"stdout": [
{}
],
"stdout_lines": [
{}
]
}
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The playbook runs into the timeout.
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [nxos_command] ************************************************************************************************************
task path: /rnx/config/ansible/xlab/issues/pb_httpapi_timeout.yml:8
<192.168.0.222> attempting to start connection
<192.168.0.222> using connection plugin httpapi
<192.168.0.222> found existing local domain socket, using it!
<192.168.0.222> updating play_context for connection
<192.168.0.222>
<192.168.0.222> local domain socket path is /root/.ansible/pc/c88162470b
<192.168.0.222> PERSISTENT_COMMAND_TIMEOUT is 500
<192.168.0.222> PERSISTENT_CONNECT_TIMEOUT is 600
<192.168.0.222> ESTABLISH LOCAL CONNECTION FOR USER: root
<192.168.0.222> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-local-26623ludka1vp/ansible-tmp-1578073462.6235766-170009103658461 `" && echo ansible-tmp-1578073462.6235766-170009103658461="` echo /root/.ansible/tmp/ansible-local-26623ludka1vp/ansible-tmp-1578073462.6235766-170009103658461 `" ) && sleep 0'
Using module file /usr/local/lib/python3.6/dist-packages/ansible/modules/network/nxos/nxos_command.py
<192.168.0.222> PUT /root/.ansible/tmp/ansible-local-26623ludka1vp/tmpbfo43_rp TO /root/.ansible/tmp/ansible-local-26623ludka1vp/ansible-tmp-1578073462.6235766-170009103658461/AnsiballZ_nxos_command.py
<192.168.0.222> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-local-26623ludka1vp/ansible-tmp-1578073462.6235766-170009103658461/ /root/.ansible/tmp/ansible-local-26623ludka1vp/ansible-tmp-1578073462.6235766-170009103658461/AnsiballZ_nxos_command.py && sleep 0'
<192.168.0.222> EXEC /bin/sh -c 'python3 /root/.ansible/tmp/ansible-local-26623ludka1vp/ansible-tmp-1578073462.6235766-170009103658461/AnsiballZ_nxos_command.py && sleep 0'
<192.168.0.222> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-local-26623ludka1vp/ansible-tmp-1578073462.6235766-170009103658461/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
File "/root/.ansible/tmp/ansible-local-26623ludka1vp/ansible-tmp-1578073462.6235766-170009103658461/AnsiballZ_nxos_command.py", line 102, in <module>
_ansiballz_main()
File "/root/.ansible/tmp/ansible-local-26623ludka1vp/ansible-tmp-1578073462.6235766-170009103658461/AnsiballZ_nxos_command.py", line 94, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/root/.ansible/tmp/ansible-local-26623ludka1vp/ansible-tmp-1578073462.6235766-170009103658461/AnsiballZ_nxos_command.py", line 40, in invoke_module
runpy.run_module(mod_name='ansible.modules.network.nxos.nxos_command', init_globals=None, run_name='__main__', alter_sys=True)
File "/usr/lib/python3.6/runpy.py", line 205, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/usr/lib/python3.6/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmp/ansible_nxos_command_payload_opkv6o7_/ansible_nxos_command_payload.zip/ansible/modules/network/nxos/nxos_command.py", line 223, in <module>
File "/tmp/ansible_nxos_command_payload_opkv6o7_/ansible_nxos_command_payload.zip/ansible/modules/network/nxos/nxos_command.py", line 191, in main
File "/tmp/ansible_nxos_command_payload_opkv6o7_/ansible_nxos_command_payload.zip/ansible/module_utils/network/nxos/nxos.py", line 1223, in run_commands
File "/tmp/ansible_nxos_command_payload_opkv6o7_/ansible_nxos_command_payload.zip/ansible/module_utils/network/nxos/nxos.py", line 565, in run_commands
File "/tmp/ansible_nxos_command_payload_opkv6o7_/ansible_nxos_command_payload.zip/ansible/module_utils/connection.py", line 185, in __rpc__
ansible.module_utils.connection.ConnectionError: timed out
fatal: [mynxosdevice]: FAILED! => {
"changed": false,
"module_stderr": "Traceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-local-26623ludka1vp/ansible-tmp-1578073462.6235766-170009103658461/AnsiballZ_nxos_command.py\", line 102, in <module>\n _ansiballz_main()\n File \"/root/.ansible/tmp/ansible-local-26623ludka1vp/ansible-tmp-1578073462.6235766-170009103658461/AnsiballZ_nxos_command.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/root/.ansible/tmp/ansible-local-26623ludka1vp/ansible-tmp-1578073462.6235766-170009103658461/AnsiballZ_nxos_command.py\", line 40, in invoke_module\n runpy.run_module(mod_name='ansible.modules.network.nxos.nxos_command', init_globals=None, run_name='__main__', alter_sys=True)\n File \"/usr/lib/python3.6/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib/python3.6/runpy.py\", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/lib/python3.6/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_nxos_command_payload_opkv6o7_/ansible_nxos_command_payload.zip/ansible/modules/network/nxos/nxos_command.py\", line 223, in <module>\n File \"/tmp/ansible_nxos_command_payload_opkv6o7_/ansible_nxos_command_payload.zip/ansible/modules/network/nxos/nxos_command.py\", line 191, in main\n File \"/tmp/ansible_nxos_command_payload_opkv6o7_/ansible_nxos_command_payload.zip/ansible/module_utils/network/nxos/nxos.py\", line 1223, in run_commands\n File \"/tmp/ansible_nxos_command_payload_opkv6o7_/ansible_nxos_command_payload.zip/ansible/module_utils/network/nxos/nxos.py\", line 565, in run_commands\n File \"/tmp/ansible_nxos_command_payload_opkv6o7_/ansible_nxos_command_payload.zip/ansible/module_utils/connection.py\", line 185, in __rpc__\nansible.module_utils.connection.ConnectionError: timed out\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
```
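The 120 seconds traces back to `Connection.send()` in the httpapi connection plugin (source reproduced later in this document), which passes the plugin-local `timeout` option straight to `open_url()`:
```python
url_kwargs = dict(
    timeout=self.get_option('timeout'), validate_certs=self.get_option('validate_certs'),
    use_proxy=self.get_option("use_proxy"),
    headers={},
)
```
Because that option has no `vars` entry, `ansible_command_timeout` and `ansible_connect_timeout` never reach the underlying HTTP request.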
|
https://github.com/ansible/ansible/issues/66179
|
https://github.com/ansible/ansible/pull/66267
|
f695f401333a4875277aa85f8ebeeb2edb28cd1d
|
0a3a81bd12a1840caa7d1f3f7e1e77fdf03b4bcc
| 2020-01-03T18:00:54Z |
python
| 2020-01-11T04:09:21Z |
changelogs/fragments/66267-httpapi-timeout.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,179 |
ansible_command|connect_timeout vs httpapi timeout
|
|
https://github.com/ansible/ansible/issues/66179
|
https://github.com/ansible/ansible/pull/66267
|
f695f401333a4875277aa85f8ebeeb2edb28cd1d
|
0a3a81bd12a1840caa7d1f3f7e1e77fdf03b4bcc
| 2020-01-03T18:00:54Z |
python
| 2020-01-11T04:09:21Z |
lib/ansible/plugins/connection/httpapi.py
|
# (c) 2018 Red Hat Inc.
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = """
---
author: Ansible Networking Team
connection: httpapi
short_description: Use httpapi to run command on network appliances
description:
- This connection plugin provides a connection to remote devices over an
HTTP(S)-based API.
version_added: "2.6"
options:
host:
description:
- Specifies the remote device FQDN or IP address to establish the HTTP(S)
connection to.
default: inventory_hostname
vars:
- name: ansible_host
port:
type: int
description:
- Specifies the port on the remote device that listens for connections
when establishing the HTTP(S) connection.
- When unspecified, will pick 80 or 443 based on the value of use_ssl.
ini:
- section: defaults
key: remote_port
env:
- name: ANSIBLE_REMOTE_PORT
vars:
- name: ansible_httpapi_port
network_os:
description:
- Configures the device platform network operating system. This value is
used to load the correct httpapi plugin to communicate with the remote
device
vars:
- name: ansible_network_os
remote_user:
description:
- The username used to authenticate to the remote device when the API
connection is first established. If the remote_user is not specified,
the connection will use the username of the logged in user.
- Can be configured from the CLI via the C(--user) or C(-u) options.
ini:
- section: defaults
key: remote_user
env:
- name: ANSIBLE_REMOTE_USER
vars:
- name: ansible_user
password:
description:
- Configures the user password used to authenticate to the remote device
when needed for the device API.
vars:
- name: ansible_password
- name: ansible_httpapi_pass
- name: ansible_httpapi_password
use_ssl:
type: boolean
description:
- Whether to connect using SSL (HTTPS) or not (HTTP).
default: False
vars:
- name: ansible_httpapi_use_ssl
validate_certs:
type: boolean
version_added: '2.7'
description:
- Whether to validate SSL certificates
default: True
vars:
- name: ansible_httpapi_validate_certs
use_proxy:
type: boolean
version_added: "2.9"
description:
- Whether to use https_proxy for requests.
default: True
vars:
- name: ansible_httpapi_use_proxy
timeout:
type: int
description:
- Sets the connection time, in seconds, for communicating with the
remote device. This timeout is used as the default timeout value for
requests issued to the device's HTTP(S) API. If a request does
not return within timeout seconds, an error is generated.
default: 120
become:
type: boolean
description:
- The become option will instruct the CLI session to attempt privilege
escalation on platforms that support it. Normally this means
transitioning from user mode to C(enable) mode in the CLI session.
If become is set to True and the remote device does not support
privilege escalation or the privilege has already been elevated, then
this option is silently ignored.
- Can be configured from the CLI via the C(--become) or C(-b) options.
default: False
ini:
- section: privilege_escalation
key: become
env:
- name: ANSIBLE_BECOME
vars:
- name: ansible_become
become_method:
description:
- This option allows the become method to be specified for handling
privilege escalation. Typically the become_method value is set to
C(enable) but could be defined as other values.
default: sudo
ini:
- section: privilege_escalation
key: become_method
env:
- name: ANSIBLE_BECOME_METHOD
vars:
- name: ansible_become_method
persistent_connect_timeout:
type: int
description:
- Configures, in seconds, the amount of time to wait when trying to
initially establish a persistent connection. If this value expires
before the connection to the remote device is completed, the connection
will fail.
default: 30
ini:
- section: persistent_connection
key: connect_timeout
env:
- name: ANSIBLE_PERSISTENT_CONNECT_TIMEOUT
vars:
- name: ansible_connect_timeout
persistent_command_timeout:
type: int
description:
- Configures, in seconds, the amount of time to wait for a command to
return from the remote device. If this timer is exceeded before the
command returns, the connection plugin will raise an exception and
close.
default: 30
ini:
- section: persistent_connection
key: command_timeout
env:
- name: ANSIBLE_PERSISTENT_COMMAND_TIMEOUT
vars:
- name: ansible_command_timeout
persistent_log_messages:
type: boolean
description:
- This flag will enable logging the command executed and the response received from
the target device in the ansible log file. For this option to work, the 'log_path'
ansible configuration option is required to be set to a file path with write access.
- Be sure to fully understand the security implications of enabling this
option as it could create a security vulnerability by logging sensitive information in the log file.
default: False
ini:
- section: persistent_connection
key: log_messages
env:
- name: ANSIBLE_PERSISTENT_LOG_MESSAGES
vars:
- name: ansible_persistent_log_messages
"""
from io import BytesIO
from ansible.errors import AnsibleConnectionFailure
from ansible.module_utils._text import to_bytes
from ansible.module_utils.six import PY3
from ansible.module_utils.six.moves import cPickle
from ansible.module_utils.six.moves.urllib.error import HTTPError, URLError
from ansible.module_utils.urls import open_url
from ansible.playbook.play_context import PlayContext
from ansible.plugins.loader import httpapi_loader
from ansible.plugins.connection import NetworkConnectionBase, ensure_connect
class Connection(NetworkConnectionBase):
'''Network API connection'''
transport = 'httpapi'
has_pipelining = True
def __init__(self, play_context, new_stdin, *args, **kwargs):
super(Connection, self).__init__(play_context, new_stdin, *args, **kwargs)
self._url = None
self._auth = None
if self._network_os:
self.httpapi = httpapi_loader.get(self._network_os, self)
if self.httpapi:
self._sub_plugin = {'type': 'httpapi', 'name': self.httpapi._load_name, 'obj': self.httpapi}
self.queue_message('vvvv', 'loaded API plugin %s from path %s for network_os %s' %
(self.httpapi._load_name, self.httpapi._original_path, self._network_os))
else:
raise AnsibleConnectionFailure('unable to load API plugin for network_os %s' % self._network_os)
else:
raise AnsibleConnectionFailure(
'Unable to automatically determine host network os. Please '
'manually configure ansible_network_os value for this host'
)
self.queue_message('log', 'network_os is set to %s' % self._network_os)
def update_play_context(self, pc_data):
"""Updates the play context information for the connection"""
pc_data = to_bytes(pc_data)
if PY3:
pc_data = cPickle.loads(pc_data, encoding='bytes')
else:
pc_data = cPickle.loads(pc_data)
play_context = PlayContext()
play_context.deserialize(pc_data)
self.queue_message('vvvv', 'updating play_context for connection')
if self._play_context.become ^ play_context.become:
self.set_become(play_context)
if play_context.become is True:
self.queue_message('vvvv', 'authorizing connection')
else:
self.queue_message('vvvv', 'deauthorizing connection')
self._play_context = play_context
def _connect(self):
if not self.connected:
protocol = 'https' if self.get_option('use_ssl') else 'http'
host = self.get_option('host')
port = self.get_option('port') or (443 if protocol == 'https' else 80)
self._url = '%s://%s:%s' % (protocol, host, port)
self.queue_message('vvv', "ESTABLISH HTTP(S) CONNECT FOR USER: %s TO %s" %
(self._play_context.remote_user, self._url))
self.httpapi.set_become(self._play_context)
self._connected = True
self.httpapi.login(self.get_option('remote_user'), self.get_option('password'))
def close(self):
'''
Close the active session to the device
'''
# only close the connection if it's connected.
if self._connected:
self.queue_message('vvvv', "closing http(s) connection to device")
self.logout()
super(Connection, self).close()
@ensure_connect
def send(self, path, data, **kwargs):
'''
Sends the command to the device over api
'''
url_kwargs = dict(
timeout=self.get_option('timeout'), validate_certs=self.get_option('validate_certs'),
use_proxy=self.get_option("use_proxy"),
headers={},
)
url_kwargs.update(kwargs)
if self._auth:
# Avoid modifying passed-in headers
headers = dict(kwargs.get('headers', {}))
headers.update(self._auth)
url_kwargs['headers'] = headers
else:
url_kwargs['force_basic_auth'] = True
url_kwargs['url_username'] = self.get_option('remote_user')
url_kwargs['url_password'] = self.get_option('password')
try:
url = self._url + path
self._log_messages("send url '%s' with data '%s' and kwargs '%s'" % (url, data, url_kwargs))
response = open_url(url, data=data, **url_kwargs)
except HTTPError as exc:
is_handled = self.handle_httperror(exc)
if is_handled is True:
return self.send(path, data, **kwargs)
elif is_handled is False:
raise
else:
response = is_handled
except URLError as exc:
raise AnsibleConnectionFailure('Could not connect to {0}: {1}'.format(self._url + path, exc.reason))
response_buffer = BytesIO()
resp_data = response.read()
self._log_messages("received response: '%s'" % resp_data)
response_buffer.write(resp_data)
# Try to assign a new auth token if one is given
self._auth = self.update_auth(response, response_buffer) or self._auth
response_buffer.seek(0)
return response, response_buffer
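# Editor's sketch (an assumption, not the verbatim change merged via PR 66267):
# one way to make send() honor the configurable persistent command timeout
# instead of the plugin-local 120-second default would be
#
#     url_kwargs = dict(
#         timeout=self.get_option('persistent_command_timeout'),
#         validate_certs=self.get_option('validate_certs'),
#         use_proxy=self.get_option('use_proxy'),
#         headers={},
#     )
#
# which would let ansible_command_timeout (vars), ANSIBLE_PERSISTENT_COMMAND_TIMEOUT
# (env) and [persistent_connection] command_timeout (ini) control the HTTP read
# timeout, mirroring how the netconf plugin below wires its timeouts into ncclient.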
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,179 |
ansible_command|connect_timeout vs httpapi timeout
|
|
https://github.com/ansible/ansible/issues/66179
|
https://github.com/ansible/ansible/pull/66267
|
f695f401333a4875277aa85f8ebeeb2edb28cd1d
|
0a3a81bd12a1840caa7d1f3f7e1e77fdf03b4bcc
| 2020-01-03T18:00:54Z |
python
| 2020-01-11T04:09:21Z |
lib/ansible/plugins/connection/netconf.py
|
# (c) 2016 Red Hat Inc.
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = """
---
author: Ansible Networking Team
connection: netconf
short_description: Provides a persistent connection using the netconf protocol
description:
- This connection plugin provides a connection to remote devices over the
SSH NETCONF subsystem. It is typically used with network devices for
sending and receiving RPC calls over NETCONF.
- Note this connection plugin requires ncclient to be installed on the
local Ansible controller.
version_added: "2.3"
requirements:
- ncclient
options:
host:
description:
- Specifies the remote device FQDN or IP address to establish the SSH
connection to.
default: inventory_hostname
vars:
- name: ansible_host
port:
type: int
description:
- Specifies the port on the remote device that listens for connections
when establishing the SSH connection.
default: 830
ini:
- section: defaults
key: remote_port
env:
- name: ANSIBLE_REMOTE_PORT
vars:
- name: ansible_port
network_os:
description:
- Configures the device platform network operating system. This value is
used to load a device specific netconf plugin. If this option is not
configured (or set to C(auto)), then Ansible will attempt to guess the
correct network_os to use.
If it cannot guess a network_os correctly it will use C(default).
vars:
- name: ansible_network_os
remote_user:
description:
- The username used to authenticate to the remote device when the SSH
connection is first established. If the remote_user is not specified,
the connection will use the username of the logged in user.
- Can be configured from the CLI via the C(--user) or C(-u) options.
ini:
- section: defaults
key: remote_user
env:
- name: ANSIBLE_REMOTE_USER
vars:
- name: ansible_user
password:
description:
- Configures the user password used to authenticate to the remote device
when first establishing the SSH connection.
vars:
- name: ansible_password
- name: ansible_ssh_pass
- name: ansible_ssh_password
- name: ansible_netconf_password
private_key_file:
description:
- The private SSH key or certificate file used to authenticate to the
remote device when first establishing the SSH connection.
ini:
- section: defaults
key: private_key_file
env:
- name: ANSIBLE_PRIVATE_KEY_FILE
vars:
- name: ansible_private_key_file
timeout:
type: int
description:
- Sets the connection time, in seconds, for communicating with the
remote device. This timeout is used as the default timeout value when
awaiting a response after issuing a call to an RPC. If the RPC
does not return in timeout seconds, an error is generated.
default: 120
look_for_keys:
default: True
description:
- Enables looking for ssh keys in the usual locations (e.g. :file:`~/.ssh/id_*`).
env:
- name: ANSIBLE_PARAMIKO_LOOK_FOR_KEYS
ini:
- section: paramiko_connection
key: look_for_keys
type: boolean
host_key_checking:
description: 'Set this to "False" if you want to avoid host key checking by the underlying tools Ansible uses to connect to the host'
type: boolean
default: True
env:
- name: ANSIBLE_HOST_KEY_CHECKING
- name: ANSIBLE_SSH_HOST_KEY_CHECKING
- name: ANSIBLE_NETCONF_HOST_KEY_CHECKING
ini:
- section: defaults
key: host_key_checking
- section: paramiko_connection
key: host_key_checking
vars:
- name: ansible_host_key_checking
- name: ansible_ssh_host_key_checking
- name: ansible_netconf_host_key_checking
persistent_connect_timeout:
type: int
description:
- Configures, in seconds, the amount of time to wait when trying to
initially establish a persistent connection. If this value expires
before the connection to the remote device is completed, the connection
will fail.
default: 30
ini:
- section: persistent_connection
key: connect_timeout
env:
- name: ANSIBLE_PERSISTENT_CONNECT_TIMEOUT
vars:
- name: ansible_connect_timeout
persistent_command_timeout:
type: int
description:
- Configures, in seconds, the amount of time to wait for a command to
return from the remote device. If this timer is exceeded before the
command returns, the connection plugin will raise an exception and
close.
default: 30
ini:
- section: persistent_connection
key: command_timeout
env:
- name: ANSIBLE_PERSISTENT_COMMAND_TIMEOUT
vars:
- name: ansible_command_timeout
netconf_ssh_config:
description:
- This variable is used to enable bastion/jump host with netconf connection. If set to
True the bastion/jump host ssh settings should be present in ~/.ssh/config file,
alternatively it can be set to custom ssh configuration file path to read the
bastion/jump host settings.
ini:
- section: netconf_connection
key: ssh_config
version_added: '2.7'
env:
- name: ANSIBLE_NETCONF_SSH_CONFIG
vars:
- name: ansible_netconf_ssh_config
version_added: '2.7'
persistent_log_messages:
type: boolean
description:
- This flag will enable logging the command executed and the response received from
the target device in the ansible log file. For this option to work, the 'log_path'
ansible configuration option is required to be set to a file path with write access.
- Be sure to fully understand the security implications of enabling this
option as it could create a security vulnerability by logging sensitive information in the log file.
default: False
ini:
- section: persistent_connection
key: log_messages
env:
- name: ANSIBLE_PERSISTENT_LOG_MESSAGES
vars:
- name: ansible_persistent_log_messages
"""
import os
import logging
import json
from ansible.errors import AnsibleConnectionFailure, AnsibleError
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.basic import missing_required_lib
from ansible.module_utils.parsing.convert_bool import BOOLEANS_TRUE, BOOLEANS_FALSE
from ansible.plugins.loader import netconf_loader
from ansible.plugins.connection import NetworkConnectionBase, ensure_connect
try:
from ncclient import manager
from ncclient.operations import RPCError
from ncclient.transport.errors import SSHUnknownHostError
from ncclient.xml_ import to_ele, to_xml
HAS_NCCLIENT = True
NCCLIENT_IMP_ERR = None
except (ImportError, AttributeError) as err: # paramiko and gssapi are incompatible and raise AttributeError not ImportError
HAS_NCCLIENT = False
NCCLIENT_IMP_ERR = err
logging.getLogger('ncclient').setLevel(logging.INFO)
class Connection(NetworkConnectionBase):
"""NetConf connections"""
transport = 'netconf'
has_pipelining = False
def __init__(self, play_context, new_stdin, *args, **kwargs):
super(Connection, self).__init__(play_context, new_stdin, *args, **kwargs)
# If network_os is not specified then set the network os to auto
# This will be used to trigger the use of guess_network_os when connecting.
self._network_os = self._network_os or 'auto'
self.netconf = netconf_loader.get(self._network_os, self)
if self.netconf:
self._sub_plugin = {'type': 'netconf', 'name': self.netconf._load_name, 'obj': self.netconf}
self.queue_message('vvvv', 'loaded netconf plugin %s from path %s for network_os %s' %
(self.netconf._load_name, self.netconf._original_path, self._network_os))
else:
self.netconf = netconf_loader.get("default", self)
self._sub_plugin = {'type': 'netconf', 'name': 'default', 'obj': self.netconf}
self.queue_message('display', 'unable to load netconf plugin for network_os %s, falling back to default plugin' % self._network_os)
self.queue_message('log', 'network_os is set to %s' % self._network_os)
self._manager = None
self.key_filename = None
self._ssh_config = None
def exec_command(self, cmd, in_data=None, sudoable=True):
"""Sends the request to the node and returns the reply
The method accepts two forms of request. The first form is a byte
string containing the XML to be sent over the netconf session.
The second form is a json-rpc (2.0) byte string.
"""
if self._manager:
# to_ele operates on native strings
request = to_ele(to_native(cmd, errors='surrogate_or_strict'))
if request is None:
return 'unable to parse request'
try:
reply = self._manager.rpc(request)
except RPCError as exc:
error = self.internal_error(data=to_text(to_xml(exc.xml), errors='surrogate_or_strict'))
return json.dumps(error)
return reply.data_xml
else:
return super(Connection, self).exec_command(cmd, in_data, sudoable)
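# Illustrative example (editor's assumption): callers hand exec_command() a
# raw NETCONF RPC as an XML string, e.g.
#
#     reply = conn.exec_command('<get-config><source><running/></source></get-config>')
#
# to_ele() parses it into an element and reply.data_xml is returned on
# success, while an RPCError is serialized back as a json-rpc error payload.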
@property
@ensure_connect
def manager(self):
return self._manager
def _connect(self):
if not HAS_NCCLIENT:
raise AnsibleError("%s: %s" % (missing_required_lib("ncclient"), to_native(NCCLIENT_IMP_ERR)))
self.queue_message('log', 'ssh connection done, starting ncclient')
allow_agent = True
if self._play_context.password is not None:
allow_agent = False
setattr(self._play_context, 'allow_agent', allow_agent)
self.key_filename = self._play_context.private_key_file or self.get_option('private_key_file')
if self.key_filename:
self.key_filename = str(os.path.expanduser(self.key_filename))
self._ssh_config = self.get_option('netconf_ssh_config')
if self._ssh_config in BOOLEANS_TRUE:
self._ssh_config = True
elif self._ssh_config in BOOLEANS_FALSE:
self._ssh_config = None
# Try to guess the network_os if the network_os is set to auto
if self._network_os == 'auto':
for cls in netconf_loader.all(class_only=True):
network_os = cls.guess_network_os(self)
if network_os:
self.queue_message('vvv', 'discovered network_os %s' % network_os)
self._network_os = network_os
# If we have tried to detect the network_os but were unable to i.e. network_os is still 'auto'
# then use default as the network_os
if self._network_os == 'auto':
# Network os not discovered. Set it to default
self.queue_message('vvv', 'Unable to discover network_os. Falling back to default.')
self._network_os = 'default'
try:
ncclient_device_handler = self.netconf.get_option('ncclient_device_handler')
except KeyError:
ncclient_device_handler = 'default'
self.queue_message('vvv', 'identified ncclient device handler: %s.' % ncclient_device_handler)
device_params = {'name': ncclient_device_handler}
try:
port = self._play_context.port or 830
self.queue_message('vvv', "ESTABLISH NETCONF SSH CONNECTION FOR USER: %s on PORT %s TO %s WITH SSH_CONFIG = %s" %
(self._play_context.remote_user, port, self._play_context.remote_addr, self._ssh_config))
self._manager = manager.connect(
host=self._play_context.remote_addr,
port=port,
username=self._play_context.remote_user,
password=self._play_context.password,
key_filename=self.key_filename,
hostkey_verify=self.get_option('host_key_checking'),
look_for_keys=self.get_option('look_for_keys'),
device_params=device_params,
allow_agent=self._play_context.allow_agent,
timeout=self.get_option('persistent_connect_timeout'),
ssh_config=self._ssh_config
)
self._manager._timeout = self.get_option('persistent_command_timeout')
except SSHUnknownHostError as exc:
raise AnsibleConnectionFailure(to_native(exc))
except ImportError:
raise AnsibleError("connection=netconf is not supported on {0}".format(self._network_os))
if not self._manager.connected:
return 1, b'', b'not connected'
self.queue_message('log', 'ncclient manager object created successfully')
self._connected = True
super(Connection, self)._connect()
return 0, to_bytes(self._manager.session_id, errors='surrogate_or_strict'), b''
def close(self):
if self._manager:
self._manager.close_session()
super(Connection, self).close()
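# Note for comparison with the httpapi plugin above (editor's observation):
# this plugin already threads the persistent timeouts through to ncclient in
# _connect(): persistent_connect_timeout becomes the ncclient connect timeout
# and persistent_command_timeout becomes manager._timeout, which is why
# ansible_command_timeout behaves as documented for netconf connections.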
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,179 |
ansible_command|connect_timeout vs httpapi timeout
|
|
https://github.com/ansible/ansible/issues/66179
|
https://github.com/ansible/ansible/pull/66267
|
f695f401333a4875277aa85f8ebeeb2edb28cd1d
|
0a3a81bd12a1840caa7d1f3f7e1e77fdf03b4bcc
| 2020-01-03T18:00:54Z |
python
| 2020-01-11T04:09:21Z |
lib/ansible/plugins/connection/network_cli.py
|
# (c) 2016 Red Hat Inc.
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = """
---
author: Ansible Networking Team
connection: network_cli
short_description: Use network_cli to run command on network appliances
description:
  - This connection plugin provides a connection to remote devices over
    SSH and implements a CLI shell. It is typically used for sending and
    receiving CLI commands to and from network devices.
version_added: "2.3"
options:
host:
description:
- Specifies the remote device FQDN or IP address to establish the SSH
connection to.
default: inventory_hostname
vars:
- name: ansible_host
port:
type: int
description:
- Specifies the port on the remote device that listens for connections
when establishing the SSH connection.
default: 22
ini:
- section: defaults
key: remote_port
env:
- name: ANSIBLE_REMOTE_PORT
vars:
- name: ansible_port
network_os:
description:
- Configures the device platform network operating system. This value is
used to load the correct terminal and cliconf plugins to communicate
with the remote device.
vars:
- name: ansible_network_os
remote_user:
description:
- The username used to authenticate to the remote device when the SSH
connection is first established. If the remote_user is not specified,
the connection will use the username of the logged in user.
- Can be configured from the CLI via the C(--user) or C(-u) options.
ini:
- section: defaults
key: remote_user
env:
- name: ANSIBLE_REMOTE_USER
vars:
- name: ansible_user
password:
description:
- Configures the user password used to authenticate to the remote device
when first establishing the SSH connection.
vars:
- name: ansible_password
- name: ansible_ssh_pass
- name: ansible_ssh_password
private_key_file:
description:
- The private SSH key or certificate file used to authenticate to the
remote device when first establishing the SSH connection.
ini:
- section: defaults
key: private_key_file
env:
- name: ANSIBLE_PRIVATE_KEY_FILE
vars:
- name: ansible_private_key_file
timeout:
type: int
description:
      - Sets the connection timeout, in seconds, for communicating with the
        remote device. This timeout is used as the default timeout value for
        commands when issuing a command to the network CLI. If the command
        does not return within this many seconds, an error is generated.
default: 120
become:
type: boolean
description:
- The become option will instruct the CLI session to attempt privilege
escalation on platforms that support it. Normally this means
transitioning from user mode to C(enable) mode in the CLI session.
If become is set to True and the remote device does not support
privilege escalation or the privilege has already been elevated, then
this option is silently ignored.
- Can be configured from the CLI via the C(--become) or C(-b) options.
default: False
ini:
- section: privilege_escalation
key: become
env:
- name: ANSIBLE_BECOME
vars:
- name: ansible_become
become_method:
description:
      - This option allows the become method to be specified for handling
        privilege escalation. Typically the become_method value is set to
        C(enable) but could be defined as other values.
default: sudo
ini:
- section: privilege_escalation
key: become_method
env:
- name: ANSIBLE_BECOME_METHOD
vars:
- name: ansible_become_method
host_key_auto_add:
type: boolean
description:
- By default, Ansible will prompt the user before adding SSH keys to the
known hosts file. Since persistent connections such as network_cli run
in background processes, the user will never be prompted. By enabling
this option, unknown host keys will automatically be added to the
known hosts file.
- Be sure to fully understand the security implications of enabling this
option on production systems as it could create a security vulnerability.
default: False
ini:
- section: paramiko_connection
key: host_key_auto_add
env:
- name: ANSIBLE_HOST_KEY_AUTO_ADD
persistent_connect_timeout:
type: int
description:
- Configures, in seconds, the amount of time to wait when trying to
initially establish a persistent connection. If this value expires
before the connection to the remote device is completed, the connection
will fail.
default: 30
ini:
- section: persistent_connection
key: connect_timeout
env:
- name: ANSIBLE_PERSISTENT_CONNECT_TIMEOUT
vars:
- name: ansible_connect_timeout
persistent_command_timeout:
type: int
description:
- Configures, in seconds, the amount of time to wait for a command to
return from the remote device. If this timer is exceeded before the
command returns, the connection plugin will raise an exception and
close.
default: 30
ini:
- section: persistent_connection
key: command_timeout
env:
- name: ANSIBLE_PERSISTENT_COMMAND_TIMEOUT
vars:
- name: ansible_command_timeout
persistent_buffer_read_timeout:
type: float
description:
      - Configures, in seconds, the amount of time to wait for data to be read
        from the Paramiko channel after the command prompt is matched. This timeout
        value ensures that the matched command prompt is correct and that there is
        no more data left to be received from the remote host.
default: 0.1
ini:
- section: persistent_connection
key: buffer_read_timeout
env:
- name: ANSIBLE_PERSISTENT_BUFFER_READ_TIMEOUT
vars:
- name: ansible_buffer_read_timeout
persistent_log_messages:
type: boolean
description:
      - This flag will enable logging of the command executed and the response
        received from the target device in the Ansible log file. For this option to
        work, the 'log_path' Ansible configuration option must be set to a file path
        with write access.
- Be sure to fully understand the security implications of enabling this
option as it could create a security vulnerability by logging sensitive information in log file.
default: False
ini:
- section: persistent_connection
key: log_messages
env:
- name: ANSIBLE_PERSISTENT_LOG_MESSAGES
vars:
- name: ansible_persistent_log_messages
terminal_stdout_re:
type: list
elements: dict
version_added: '2.9'
description:
- A single regex pattern or a sequence of patterns along with optional flags
to match the command prompt from the received response chunk. This option
accepts C(pattern) and C(flags) keys. The value of C(pattern) is a python
regex pattern to match the response and the value of C(flags) is the value
accepted by I(flags) argument of I(re.compile) python method to control
the way regex is matched with the response, for example I('re.I').
vars:
- name: ansible_terminal_stdout_re
terminal_stderr_re:
type: list
elements: dict
version_added: '2.9'
description:
- This option provides the regex pattern and optional flags to match the
error string from the received response chunk. This option
accepts C(pattern) and C(flags) keys. The value of C(pattern) is a python
regex pattern to match the response and the value of C(flags) is the value
accepted by I(flags) argument of I(re.compile) python method to control
the way regex is matched with the response, for example I('re.I').
vars:
- name: ansible_terminal_stderr_re
terminal_initial_prompt:
type: list
version_added: '2.9'
description:
- A single regex pattern or a sequence of patterns to evaluate the expected
prompt at the time of initial login to the remote host.
vars:
- name: ansible_terminal_initial_prompt
terminal_initial_answer:
type: list
version_added: '2.9'
description:
      - The answer to reply with if the C(terminal_initial_prompt) is matched. The value can be a single answer
        or a list of answers for multiple terminal_initial_prompt. In case the login menu has
        multiple prompts, the sequence of prompts and expected answers should be in the same order, and the value
        of I(terminal_initial_prompt_checkall) should be set to I(True) if all the values in C(terminal_initial_prompt) are
        expected to be matched, or to I(False) if matching any one login prompt is sufficient.
vars:
- name: ansible_terminal_initial_answer
terminal_initial_prompt_checkall:
type: boolean
version_added: '2.9'
description:
      - By default the value is set to I(False), meaning that once any one of the prompts mentioned in the
        C(terminal_initial_prompt) option is matched, the remaining prompts are not checked. When set to I(True)
        it will check for all the prompts mentioned in the C(terminal_initial_prompt) option in the given order,
        and all of them must be received from the remote host, otherwise the connection times out.
default: False
vars:
- name: ansible_terminal_initial_prompt_checkall
terminal_inital_prompt_newline:
type: boolean
version_added: '2.9'
description:
      - This boolean flag, when set to I(True), will send a newline in the response if any of the values
        in I(terminal_initial_prompt) is matched.
default: True
vars:
- name: ansible_terminal_initial_prompt_newline
network_cli_retries:
description:
      - Number of attempts to connect to the remote host. The delay between retries doubles
        (powers of 2, in seconds) after every attempt until either the maximum attempts are exhausted
        or one of the C(persistent_command_timeout) or C(persistent_connect_timeout) timers is triggered.
default: 3
version_added: '2.9'
type: integer
env:
- name: ANSIBLE_NETWORK_CLI_RETRIES
ini:
- section: persistent_connection
key: network_cli_retries
vars:
- name: ansible_network_cli_retries
"""
from functools import wraps
import getpass
import json
import logging
import re
import os
import signal
import socket
import time
import traceback
from io import BytesIO
from ansible.errors import AnsibleConnectionFailure
from ansible.module_utils.six import PY3
from ansible.module_utils.six.moves import cPickle
from ansible.module_utils.network.common.utils import to_list
from ansible.module_utils._text import to_bytes, to_text
from ansible.playbook.play_context import PlayContext
from ansible.plugins.connection import NetworkConnectionBase
from ansible.plugins.loader import cliconf_loader, terminal_loader, connection_loader
def ensure_connect(func):
@wraps(func)
def wrapped(self, *args, **kwargs):
if not self._connected:
self._connect()
self.update_cli_prompt_context()
return func(self, *args, **kwargs)
return wrapped
class AnsibleCmdRespRecv(Exception):
pass
class Connection(NetworkConnectionBase):
''' CLI (shell) SSH connections on Paramiko '''
transport = 'network_cli'
has_pipelining = True
def __init__(self, play_context, new_stdin, *args, **kwargs):
super(Connection, self).__init__(play_context, new_stdin, *args, **kwargs)
self._ssh_shell = None
self._matched_prompt = None
self._matched_cmd_prompt = None
self._matched_pattern = None
self._last_response = None
self._history = list()
self._command_response = None
self._last_recv_window = None
self._terminal = None
self.cliconf = None
self._paramiko_conn = None
# Managing prompt context
self._check_prompt = False
self._task_uuid = to_text(kwargs.get('task_uuid', ''))
if self._play_context.verbosity > 3:
logging.getLogger('paramiko').setLevel(logging.DEBUG)
if self._network_os:
self._terminal = terminal_loader.get(self._network_os, self)
if not self._terminal:
raise AnsibleConnectionFailure('network os %s is not supported' % self._network_os)
self.cliconf = cliconf_loader.get(self._network_os, self)
if self.cliconf:
self._sub_plugin = {'type': 'cliconf', 'name': self.cliconf._load_name, 'obj': self.cliconf}
self.queue_message('vvvv', 'loaded cliconf plugin %s from path %s for network_os %s' %
(self.cliconf._load_name, self.cliconf._original_path, self._network_os))
else:
self.queue_message('vvvv', 'unable to load cliconf for network_os %s' % self._network_os)
else:
raise AnsibleConnectionFailure(
'Unable to automatically determine host network os. Please '
'manually configure ansible_network_os value for this host'
)
self.queue_message('log', 'network_os is set to %s' % self._network_os)
@property
def paramiko_conn(self):
if self._paramiko_conn is None:
self._paramiko_conn = connection_loader.get('paramiko', self._play_context, '/dev/null')
self._paramiko_conn.set_options(direct={'look_for_keys': not bool(self._play_context.password and not self._play_context.private_key_file)})
return self._paramiko_conn
def _get_log_channel(self):
name = "p=%s u=%s | " % (os.getpid(), getpass.getuser())
name += "paramiko [%s]" % self._play_context.remote_addr
return name
@ensure_connect
def get_prompt(self):
"""Returns the current prompt from the device"""
return self._matched_prompt
def exec_command(self, cmd, in_data=None, sudoable=True):
# this try..except block is just to handle the transition to supporting
# network_cli as a toplevel connection. Once connection=local is gone,
# this block can be removed as well and all calls passed directly to
# the local connection
if self._ssh_shell:
try:
cmd = json.loads(to_text(cmd, errors='surrogate_or_strict'))
kwargs = {'command': to_bytes(cmd['command'], errors='surrogate_or_strict')}
for key in ('prompt', 'answer', 'sendonly', 'newline', 'prompt_retry_check'):
if cmd.get(key) is True or cmd.get(key) is False:
kwargs[key] = cmd[key]
elif cmd.get(key) is not None:
kwargs[key] = to_bytes(cmd[key], errors='surrogate_or_strict')
return self.send(**kwargs)
except ValueError:
cmd = to_bytes(cmd, errors='surrogate_or_strict')
return self.send(command=cmd)
else:
return super(Connection, self).exec_command(cmd, in_data, sudoable)
def update_play_context(self, pc_data):
"""Updates the play context information for the connection"""
pc_data = to_bytes(pc_data)
if PY3:
pc_data = cPickle.loads(pc_data, encoding='bytes')
else:
pc_data = cPickle.loads(pc_data)
play_context = PlayContext()
play_context.deserialize(pc_data)
self.queue_message('vvvv', 'updating play_context for connection')
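        # the XOR below fires only when the become state actually toggles
        # between the cached play context and the incoming one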
if self._play_context.become ^ play_context.become:
if play_context.become is True:
auth_pass = play_context.become_pass
self._terminal.on_become(passwd=auth_pass)
self.queue_message('vvvv', 'authorizing connection')
else:
self._terminal.on_unbecome()
self.queue_message('vvvv', 'deauthorizing connection')
self._play_context = play_context
if hasattr(self, 'reset_history'):
self.reset_history()
if hasattr(self, 'disable_response_logging'):
self.disable_response_logging()
def set_check_prompt(self, task_uuid):
self._check_prompt = task_uuid
def update_cli_prompt_context(self):
# set cli prompt context at the start of new task run only
if self._check_prompt and self._task_uuid != self._check_prompt:
self._task_uuid, self._check_prompt = self._check_prompt, False
self.set_cli_prompt_context()
def _connect(self):
'''
Connects to the remote device and starts the terminal
'''
if not self.connected:
self.paramiko_conn._set_log_channel(self._get_log_channel())
self.paramiko_conn.force_persistence = self.force_persistence
command_timeout = self.get_option('persistent_command_timeout')
max_pause = min([self.get_option('persistent_connect_timeout'), command_timeout])
retries = self.get_option('network_cli_retries')
total_pause = 0
for attempt in range(retries + 1):
try:
ssh = self.paramiko_conn._connect()
break
except Exception as e:
pause = 2 ** (attempt + 1)
if attempt == retries or total_pause >= max_pause:
raise AnsibleConnectionFailure(to_text(e, errors='surrogate_or_strict'))
else:
msg = (u"network_cli_retry: attempt: %d, caught exception(%s), "
u"pausing for %d seconds" % (attempt + 1, to_text(e, errors='surrogate_or_strict'), pause))
self.queue_message('vv', msg)
time.sleep(pause)
total_pause += pause
continue
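            # With the default network_cli_retries of 3 the pauses grow as
            # 2, 4 and 8 seconds (2 ** (attempt + 1)); retrying stops early
            # once total_pause reaches max_pause, the smaller of the two
            # persistent timeout options computed above.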
self.queue_message('vvvv', 'ssh connection done, setting terminal')
self._connected = True
self._ssh_shell = ssh.ssh.invoke_shell()
self._ssh_shell.settimeout(command_timeout)
self.queue_message('vvvv', 'loaded terminal plugin for network_os %s' % self._network_os)
terminal_initial_prompt = self.get_option('terminal_initial_prompt') or self._terminal.terminal_initial_prompt
terminal_initial_answer = self.get_option('terminal_initial_answer') or self._terminal.terminal_initial_answer
newline = self.get_option('terminal_inital_prompt_newline') or self._terminal.terminal_inital_prompt_newline
check_all = self.get_option('terminal_initial_prompt_checkall') or False
self.receive(prompts=terminal_initial_prompt, answer=terminal_initial_answer, newline=newline, check_all=check_all)
if self._play_context.become:
self.queue_message('vvvv', 'firing event: on_become')
auth_pass = self._play_context.become_pass
self._terminal.on_become(passwd=auth_pass)
self.queue_message('vvvv', 'firing event: on_open_shell()')
self._terminal.on_open_shell()
self.queue_message('vvvv', 'ssh connection has completed successfully')
return self
def close(self):
'''
Close the active connection to the device
'''
        # only close the connection if it's connected.
if self._connected:
self.queue_message('debug', "closing ssh connection to device")
if self._ssh_shell:
self.queue_message('debug', "firing event: on_close_shell()")
self._terminal.on_close_shell()
self._ssh_shell.close()
self._ssh_shell = None
self.queue_message('debug', "cli session is now closed")
self.paramiko_conn.close()
self._paramiko_conn = None
self.queue_message('debug', "ssh connection has been closed successfully")
super(Connection, self).close()
def receive(self, command=None, prompts=None, answer=None, newline=True, prompt_retry_check=False, check_all=False):
'''
Handles receiving of output from command
'''
self._matched_prompt = None
self._matched_cmd_prompt = None
recv = BytesIO()
handled = False
command_prompt_matched = False
matched_prompt_window = window_count = 0
# set terminal regex values for command prompt and errors in response
self._terminal_stderr_re = self._get_terminal_std_re('terminal_stderr_re')
self._terminal_stdout_re = self._get_terminal_std_re('terminal_stdout_re')
cache_socket_timeout = self._ssh_shell.gettimeout()
command_timeout = self.get_option('persistent_command_timeout')
self._validate_timeout_value(command_timeout, "persistent_command_timeout")
if cache_socket_timeout != command_timeout:
self._ssh_shell.settimeout(command_timeout)
buffer_read_timeout = self.get_option('persistent_buffer_read_timeout')
self._validate_timeout_value(buffer_read_timeout, "persistent_buffer_read_timeout")
self._log_messages("command: %s" % command)
while True:
if command_prompt_matched:
try:
signal.signal(signal.SIGALRM, self._handle_buffer_read_timeout)
signal.setitimer(signal.ITIMER_REAL, buffer_read_timeout)
data = self._ssh_shell.recv(256)
signal.alarm(0)
self._log_messages("response-%s: %s" % (window_count + 1, data))
                    # if data is still being received on the channel, the prompt
                    # string was wrongly matched in the middle of a response chunk;
                    # continue to read the remaining response.
command_prompt_matched = False
# restart command_timeout timer
signal.signal(signal.SIGALRM, self._handle_command_timeout)
signal.alarm(command_timeout)
except AnsibleCmdRespRecv:
# reset socket timeout to global timeout
self._ssh_shell.settimeout(cache_socket_timeout)
return self._command_response
else:
data = self._ssh_shell.recv(256)
self._log_messages("response-%s: %s" % (window_count + 1, data))
# when a channel stream is closed, received data will be empty
if not data:
break
recv.write(data)
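            # keep a sliding 256-byte window over the accumulated buffer so
            # the prompt regexes are matched near the tail of the response
            # instead of over the whole payload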
offset = recv.tell() - 256 if recv.tell() > 256 else 0
recv.seek(offset)
window = self._strip(recv.read())
self._last_recv_window = window
window_count += 1
if prompts and not handled:
handled = self._handle_prompt(window, prompts, answer, newline, False, check_all)
matched_prompt_window = window_count
elif prompts and handled and prompt_retry_check and matched_prompt_window + 1 == window_count:
                # check again even when handled: if the same prompt repeats in the
                # next window (as with a wrong enable password, etc.) the answer
                # value is wrong, so report this as an error.
if self._handle_prompt(window, prompts, answer, newline, prompt_retry_check, check_all):
raise AnsibleConnectionFailure("For matched prompt '%s', answer is not valid" % self._matched_cmd_prompt)
if self._find_prompt(window):
self._last_response = recv.getvalue()
resp = self._strip(self._last_response)
self._command_response = self._sanitize(resp, command)
if buffer_read_timeout == 0.0:
# reset socket timeout to global timeout
self._ssh_shell.settimeout(cache_socket_timeout)
return self._command_response
else:
command_prompt_matched = True
@ensure_connect
def send(self, command, prompt=None, answer=None, newline=True, sendonly=False, prompt_retry_check=False, check_all=False):
'''
Sends the command to the device in the opened shell
'''
if check_all:
prompt_len = len(to_list(prompt))
answer_len = len(to_list(answer))
if prompt_len != answer_len:
raise AnsibleConnectionFailure("Number of prompts (%s) is not same as that of answers (%s)" % (prompt_len, answer_len))
try:
cmd = b'%s\r' % command
self._history.append(cmd)
self._ssh_shell.sendall(cmd)
self._log_messages('send command: %s' % cmd)
if sendonly:
return
response = self.receive(command, prompt, answer, newline, prompt_retry_check, check_all)
return to_text(response, errors='surrogate_then_replace')
except (socket.timeout, AttributeError):
self.queue_message('error', traceback.format_exc())
raise AnsibleConnectionFailure("timeout value %s seconds reached while trying to send command: %s"
% (self._ssh_shell.gettimeout(), command.strip()))
def _handle_buffer_read_timeout(self, signum, frame):
self.queue_message('vvvv', "Response received, triggered 'persistent_buffer_read_timeout' timer of %s seconds" %
self.get_option('persistent_buffer_read_timeout'))
raise AnsibleCmdRespRecv()
def _handle_command_timeout(self, signum, frame):
msg = 'command timeout triggered, timeout value is %s secs.\nSee the timeout setting options in the Network Debug and Troubleshooting Guide.'\
% self.get_option('persistent_command_timeout')
self.queue_message('log', msg)
raise AnsibleConnectionFailure(msg)
def _strip(self, data):
'''
Removes ANSI codes from device response
'''
for regex in self._terminal.ansi_re:
data = regex.sub(b'', data)
return data
def _handle_prompt(self, resp, prompts, answer, newline, prompt_retry_check=False, check_all=False):
'''
Matches the command prompt and responds
:arg resp: Byte string containing the raw response from the remote
:arg prompts: Sequence of byte strings that we consider prompts for input
        :arg answer: Sequence of byte strings to send back to the remote if we find a prompt.
                A carriage return is automatically appended to each answer.
        :param prompt_retry_check: Bool value for trying to detect more prompts
        :param check_all: Bool value to indicate whether all the values in the prompt sequence
                should be matched or any one of the given prompts.
        :returns: True if a prompt was found in ``resp``. If check_all is True,
                returns True only after all the prompts in the prompts list are matched. False otherwise.
'''
single_prompt = False
if not isinstance(prompts, list):
prompts = [prompts]
single_prompt = True
if not isinstance(answer, list):
answer = [answer]
prompts_regex = [re.compile(to_bytes(r), re.I) for r in prompts]
for index, regex in enumerate(prompts_regex):
match = regex.search(resp)
if match:
self._matched_cmd_prompt = match.group()
self._log_messages("matched command prompt: %s" % self._matched_cmd_prompt)
                # if prompt_retry_check is enabled (checking whether the same
                # prompt is repeated), don't send the answer again.
if not prompt_retry_check:
prompt_answer = answer[index] if len(answer) > index else answer[0]
self._ssh_shell.sendall(b'%s' % prompt_answer)
if newline:
self._ssh_shell.sendall(b'\r')
prompt_answer += b'\r'
self._log_messages("matched command prompt answer: %s" % prompt_answer)
if check_all and prompts and not single_prompt:
prompts.pop(0)
answer.pop(0)
return False
return True
return False
def _sanitize(self, resp, command=None):
'''
Removes elements from the response before returning to the caller
'''
cleaned = []
for line in resp.splitlines():
if command and line.strip() == command.strip():
continue
for prompt in self._matched_prompt.strip().splitlines():
if prompt.strip() in line:
break
else:
cleaned.append(line)
return b'\n'.join(cleaned).strip()
def _find_prompt(self, response):
'''Searches the buffered response for a matching command prompt
'''
errored_response = None
is_error_message = False
for regex in self._terminal_stderr_re:
if regex.search(response):
is_error_message = True
                # Check if the error response also ends with the command prompt;
                # if not, keep receiving until the prompt shows up in the buffer
for regex in self._terminal_stdout_re:
match = regex.search(response)
if match:
errored_response = response
self._matched_pattern = regex.pattern
self._matched_prompt = match.group()
self._log_messages("matched error regex '%s' from response '%s'" % (self._matched_pattern, errored_response))
break
if not is_error_message:
for regex in self._terminal_stdout_re:
match = regex.search(response)
if match:
self._matched_pattern = regex.pattern
self._matched_prompt = match.group()
self._log_messages("matched cli prompt '%s' with regex '%s' from response '%s'" % (self._matched_prompt, self._matched_pattern, response))
if not errored_response:
return True
if errored_response:
raise AnsibleConnectionFailure(errored_response)
return False
def _validate_timeout_value(self, timeout, timer_name):
if timeout < 0:
raise AnsibleConnectionFailure("'%s' timer value '%s' is invalid, value should be greater than or equal to zero." % (timer_name, timeout))
def transport_test(self, connect_timeout):
"""This method enables wait_for_connection to work.
As it is used by wait_for_connection, it is called by that module's action plugin,
which is on the controller process, which means that nothing done on this instance
should impact the actual persistent connection... this check is for informational
purposes only and should be properly cleaned up.
"""
# Force a fresh connect if for some reason we have connected before.
self.close()
self._connect()
self.close()
def _get_terminal_std_re(self, option):
terminal_std_option = self.get_option(option)
terminal_std_re = []
if terminal_std_option:
for item in terminal_std_option:
if "pattern" not in item:
raise AnsibleConnectionFailure("'pattern' is a required key for option '%s',"
" received option value is %s" % (option, item))
pattern = br"%s" % to_bytes(item['pattern'])
flag = item.get('flags', 0)
if flag:
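                    # 'flags' arrives as a string like 're.I'; map it onto the
                    # corresponding attribute of the re module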
flag = getattr(re, flag.split('.')[1])
terminal_std_re.append(re.compile(pattern, flag))
else:
# To maintain backward compatibility
terminal_std_re = getattr(self._terminal, option)
return terminal_std_re
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,367 |
postgresql_query doesn't support non-ASCII characters in SQL files with Python3
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if
postgresql_user
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.1
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/roman/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.5 (default, Oct 17 2019, 12:16:48) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Local:
```
NAME=Fedora
VERSION="31 (Workstation Edition)"
```
Remote:
```
NAME="Red Hat Enterprise Linux"
VERSION="8.1 (Ootpa)"
```
All managed hosts are RHEL 8 (Python3 only):
hosts.yml:
```
all:
vars:
ansible_python_interpreter: /usr/libexec/platform-python
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
main.yml:
```yaml
- name: "Copy files with scheme"
copy:
src: "scheme.sql"
dest: "/tmp/scheme.sql"
mode: '0600'
- name: "Create scheme"
postgresql_query:
login_user: "{{ postgresql_user }}"
login_password: "{{ postgresql_password }}"
login_host: "{{ postgresql_host }}"
db: "{{ postgresql_database }}"
path_to_script: "/tmp/scheme.sql"
```
scheme.sql:
```
INSERT INTO test(id, name) VALUES(1, 'Данные'); -- use non-ASCII
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
scheme.sql applied.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
```paste below
"msg": "Cannot read file '/tmp/scheme.sql' : 'ascii ' codec can't decode byte 0xd0 in position 4342: ordinal not in range(128)"}
```
|
https://github.com/ansible/ansible/issues/65367
|
https://github.com/ansible/ansible/pull/66331
|
f8fb391548144ba84d28afac5f3701b40f2ab283
|
515c4a7e2c50319112a3257fb1f0db74683a8853
| 2019-11-29T14:56:52Z |
python
| 2020-01-11T14:19:55Z |
changelogs/fragments/66331-postgresql_query_fix_unable_to_handle_non_ascii_chars_when_python3.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,367 |
postgresql_query doesn't support non-ASCII characters in SQL files with Python3
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if
postgresql_user
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.1
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/roman/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.5 (default, Oct 17 2019, 12:16:48) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Local:
```
NAME=Fedora
VERSION="31 (Workstation Edition)"
```
Remote:
```
NAME="Red Hat Enterprise Linux"
VERSION="8.1 (Ootpa)"
```
All managed hosts are RHEL 8 (Python3 only):
hosts.yml:
```
all:
vars:
ansible_python_interpreter: /usr/libexec/platform-python
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
main.yml:
```yaml
- name: "Copy files with scheme"
copy:
src: "scheme.sql"
dest: "/tmp/scheme.sql"
mode: '0600'
- name: "Create scheme"
postgresql_query:
login_user: "{{ postgresql_user }}"
login_password: "{{ postgresql_password }}"
login_host: "{{ postgresql_host }}"
db: "{{ postgresql_database }}"
path_to_script: "/tmp/scheme.sql"
```
scheme.sql:
```
INSERT INTO test(id, name) VALUES(1, 'Данные'); -- use non-ASCII
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
scheme.sql applied.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
```paste below
"msg": "Cannot read file '/tmp/scheme.sql' : 'ascii ' codec can't decode byte 0xd0 in position 4342: ordinal not in range(128)"}
```
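A minimal sketch (an illustration, not the project's actual fix) of reading the script as raw bytes and decoding them explicitly, which sidesteps the interpreter's locale-dependent default codec:

```python
# hypothetical helper, not part of the module: decode the SQL script as
# UTF-8 regardless of the locale the remote Python runs under
def read_sql_script(path):
    with open(path, 'rb') as f:          # read raw bytes, no locale codec
        return f.read().decode('utf-8')  # decode explicitly
```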
|
https://github.com/ansible/ansible/issues/65367
|
https://github.com/ansible/ansible/pull/66331
|
f8fb391548144ba84d28afac5f3701b40f2ab283
|
515c4a7e2c50319112a3257fb1f0db74683a8853
| 2019-11-29T14:56:52Z |
python
| 2020-01-11T14:19:55Z |
lib/ansible/modules/database/postgresql/postgresql_query.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2017, Felix Archambault
# Copyright: (c) 2019, Andrew Klychkov (@Andersson007) <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'supported_by': 'community',
'status': ['preview']
}
DOCUMENTATION = r'''
---
module: postgresql_query
short_description: Run PostgreSQL queries
description:
- Runs arbitrary PostgreSQL queries.
- Can run queries from SQL script files.
- Does not run against backup files. Use M(postgresql_db) with I(state=restore)
to run queries on files made by pg_dump/pg_dumpall utilities.
version_added: '2.8'
options:
query:
description:
- SQL query to run. Variables can be escaped with psycopg2 syntax
U(http://initd.org/psycopg/docs/usage.html).
type: str
positional_args:
description:
- List of values to be passed as positional arguments to the query.
      When the value is a list, it will be converted to a PostgreSQL array.
- Mutually exclusive with I(named_args).
type: list
named_args:
description:
- Dictionary of key-value arguments to pass to the query.
      When the value is a list, it will be converted to a PostgreSQL array.
- Mutually exclusive with I(positional_args).
type: dict
path_to_script:
description:
- Path to SQL script on the remote host.
- Returns result of the last query in the script.
- Mutually exclusive with I(query).
type: path
session_role:
description:
- Switch to session_role after connecting. The specified session_role must
be a role that the current login_user is a member of.
- Permissions checking for SQL commands is carried out as though
the session_role were the one that had logged in originally.
type: str
db:
description:
- Name of database to connect to and run queries against.
type: str
aliases:
- login_db
autocommit:
description:
- Execute in autocommit mode when the query can't be run inside a transaction block
(e.g., VACUUM).
- Mutually exclusive with I(check_mode).
type: bool
default: no
version_added: '2.9'
seealso:
- module: postgresql_db
author:
- Felix Archambault (@archf)
- Andrew Klychkov (@Andersson007)
- Will Rouesnel (@wrouesnel)
extends_documentation_fragment: postgres
'''
EXAMPLES = r'''
- name: Simple select query to acme db
postgresql_query:
db: acme
query: SELECT version()
- name: Select query to db acme with positional arguments and non-default credentials
postgresql_query:
db: acme
login_user: django
login_password: mysecretpass
query: SELECT * FROM acme WHERE id = %s AND story = %s
positional_args:
- 1
- test
- name: Select query to test_db with named_args
postgresql_query:
db: test_db
query: SELECT * FROM test WHERE id = %(id_val)s AND story = %(story_val)s
named_args:
id_val: 1
story_val: test
- name: Insert query to test_table in db test_db
postgresql_query:
db: test_db
query: INSERT INTO test_table (id, story) VALUES (2, 'my_long_story')
- name: Run queries from SQL script
postgresql_query:
db: test_db
path_to_script: /var/lib/pgsql/test.sql
positional_args:
- 1
- name: Example of using autocommit parameter
postgresql_query:
db: test_db
query: VACUUM
autocommit: yes
- name: >
Insert data to the column of array type using positional_args.
Note that we use quotes here, the same as for passing JSON, etc.
postgresql_query:
query: INSERT INTO test_table (array_column) VALUES (%s)
positional_args:
- '{1,2,3}'
# Pass list and string vars as positional_args
- name: Set vars
set_fact:
my_list:
- 1
- 2
- 3
my_arr: '{1, 2, 3}'
- name: Select from test table by passing positional_args as arrays
postgresql_query:
query: SELECT * FROM test_array_table WHERE arr_col1 = %s AND arr_col2 = %s
positional_args:
- '{{ my_list }}'
- '{{ my_arr|string }}'
'''
RETURN = r'''
query:
    description: The query that the module tried to execute.
returned: always
type: str
sample: 'SELECT * FROM bar'
statusmessage:
description: Attribute containing the message returned by the command.
returned: always
type: str
sample: 'INSERT 0 1'
query_result:
description:
- List of dictionaries in column:value form representing returned rows.
returned: changed
type: list
sample: [{"Column": "Value1"},{"Column": "Value2"}]
rowcount:
description: Number of affected rows.
returned: changed
type: int
sample: 5
'''
try:
from psycopg2 import ProgrammingError as Psycopg2ProgrammingError
from psycopg2.extras import DictCursor
except ImportError:
    # needed for checking 'no results to fetch' in main();
    # psycopg2 availability will be checked by connect_to_db() in
    # ansible.module_utils.postgres
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.postgres import (
connect_to_db,
get_conn_params,
postgres_common_argument_spec,
)
from ansible.module_utils._text import to_native
from ansible.module_utils.six import iteritems
# ===========================================
# Module execution.
#
def list_to_pg_array(elem):
"""Convert the passed list to PostgreSQL array
represented as a string.
Args:
elem (list): List that needs to be converted.
Returns:
elem (str): String representation of PostgreSQL array.
"""
elem = str(elem).strip('[]')
elem = '{' + elem + '}'
return elem
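# Example: list_to_pg_array([1, 2, 3]) returns '{1, 2, 3}' -- str() of the
# list with the square brackets swapped for curly braces.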
def convert_elements_to_pg_arrays(obj):
"""Convert list elements of the passed object
to PostgreSQL arrays represented as strings.
Args:
obj (dict or list): Object whose elements need to be converted.
Returns:
obj (dict or list): Object with converted elements.
"""
if isinstance(obj, dict):
for (key, elem) in iteritems(obj):
if isinstance(elem, list):
obj[key] = list_to_pg_array(elem)
elif isinstance(obj, list):
for i, elem in enumerate(obj):
if isinstance(elem, list):
obj[i] = list_to_pg_array(elem)
return obj
def main():
argument_spec = postgres_common_argument_spec()
argument_spec.update(
query=dict(type='str'),
db=dict(type='str', aliases=['login_db']),
positional_args=dict(type='list'),
named_args=dict(type='dict'),
session_role=dict(type='str'),
path_to_script=dict(type='path'),
autocommit=dict(type='bool', default=False),
)
module = AnsibleModule(
argument_spec=argument_spec,
mutually_exclusive=(('positional_args', 'named_args'),),
supports_check_mode=True,
)
query = module.params["query"]
positional_args = module.params["positional_args"]
named_args = module.params["named_args"]
path_to_script = module.params["path_to_script"]
autocommit = module.params["autocommit"]
if autocommit and module.check_mode:
module.fail_json(msg="Using autocommit is mutually exclusive with check_mode")
if path_to_script and query:
module.fail_json(msg="path_to_script is mutually exclusive with query")
if positional_args:
positional_args = convert_elements_to_pg_arrays(positional_args)
elif named_args:
named_args = convert_elements_to_pg_arrays(named_args)
if path_to_script:
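        # Note: open() below relies on the interpreter's default locale
        # encoding; under Python 3 with an ASCII locale this is what raises
        # the UnicodeDecodeError reported in the issue above for non-ASCII
        # scripts.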
try:
with open(path_to_script, 'r') as f:
query = f.read()
except Exception as e:
module.fail_json(msg="Cannot read file '%s' : %s" % (path_to_script, to_native(e)))
conn_params = get_conn_params(module, module.params)
db_connection = connect_to_db(module, conn_params, autocommit=autocommit)
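    # DictCursor makes each fetched row addressable by column name, which is
    # what lets query_result be returned as a list of column:value dicts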
cursor = db_connection.cursor(cursor_factory=DictCursor)
# Prepare args:
if module.params.get("positional_args"):
arguments = module.params["positional_args"]
elif module.params.get("named_args"):
arguments = module.params["named_args"]
else:
arguments = None
# Set defaults:
changed = False
# Execute query:
try:
cursor.execute(query, arguments)
except Exception as e:
if not autocommit:
db_connection.rollback()
cursor.close()
db_connection.close()
module.fail_json(msg="Cannot execute SQL '%s' %s: %s" % (query, arguments, to_native(e)))
statusmessage = cursor.statusmessage
rowcount = cursor.rowcount
try:
query_result = [dict(row) for row in cursor.fetchall()]
except Psycopg2ProgrammingError as e:
if to_native(e) == 'no results to fetch':
query_result = {}
except Exception as e:
module.fail_json(msg="Cannot fetch rows from cursor: %s" % to_native(e))
if 'SELECT' not in statusmessage:
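        # statusmessage looks like 'UPDATE 1' (two tokens) or 'INSERT 0 1'
        # (three tokens); the last token is the affected-row count, so any
        # non-zero value means rows really changed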
if 'UPDATE' in statusmessage or 'INSERT' in statusmessage or 'DELETE' in statusmessage:
s = statusmessage.split()
if len(s) == 3:
if statusmessage.split()[2] != '0':
changed = True
elif len(s) == 2:
if statusmessage.split()[1] != '0':
changed = True
else:
changed = True
else:
changed = True
if module.check_mode:
db_connection.rollback()
else:
if not autocommit:
db_connection.commit()
kw = dict(
changed=changed,
query=cursor.query,
statusmessage=statusmessage,
query_result=query_result,
rowcount=rowcount if rowcount >= 0 else 0,
)
cursor.close()
db_connection.close()
module.exit_json(**kw)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,367 |
postgresql_query doesn't support non-ASCII characters in SQL files with Python3
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if
postgresql_user
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.1
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/roman/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.5 (default, Oct 17 2019, 12:16:48) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Local:
```
NAME=Fedora
VERSION="31 (Workstation Edition)"
```
Remote:
```
NAME="Red Hat Enterprise Linux"
VERSION="8.1 (Ootpa)"
```
All managed hosts are RHEL 8 (Python3 only):
hosts.yml:
```
all:
vars:
ansible_python_interpreter: /usr/libexec/platform-python
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
main.yml:
```yaml
- name: "Copy files with scheme"
copy:
src: "scheme.sql"
dest: "/tmp/scheme.sql"
mode: '0600'
- name: "Create scheme"
postgresql_query:
login_user: "{{ postgresql_user }}"
login_password: "{{ postgresql_password }}"
login_host: "{{ postgresql_host }}"
db: "{{ postgresql_database }}"
path_to_script: "/tmp/scheme.sql"
```
scheme.sql:
```
INSERT INTO test(id, name) VALUES(1, 'Данные'); -- use non-ASCII
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
scheme.sql applied.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
```paste below
"msg": "Cannot read file '/tmp/scheme.sql' : 'ascii ' codec can't decode byte 0xd0 in position 4342: ordinal not in range(128)"}
```
|
https://github.com/ansible/ansible/issues/65367
|
https://github.com/ansible/ansible/pull/66331
|
f8fb391548144ba84d28afac5f3701b40f2ab283
|
515c4a7e2c50319112a3257fb1f0db74683a8853
| 2019-11-29T14:56:52Z |
python
| 2020-01-11T14:19:55Z |
test/integration/targets/postgresql_query/tasks/postgresql_query_initial.yml
|
# Copyright: (c) 2019, Andrew Klychkov (@Andersson007) <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Prepare for tests:
- name: postgresql_query - drop test table if exists
become_user: "{{ pg_user }}"
become: yes
shell: psql postgres -U "{{ pg_user }}" -t -c "DROP TABLE IF EXISTS test_table;"
ignore_errors: yes
# Create test_table:
- name: postgresql_query - create test table called test_table
become_user: "{{ pg_user }}"
become: yes
shell: psql postgres -U "{{ pg_user }}" -t -c "CREATE TABLE test_table (id int, story text);"
ignore_errors: yes
- name: postgresql_query - insert some data into test_table
become_user: "{{ pg_user }}"
become: yes
shell: psql postgres -U "{{ pg_user }}" -t -c "INSERT INTO test_table (id, story) VALUES (1, 'first'), (2, 'second'), (3, 'third');"
ignore_errors: yes
# Prepare SQL script:
- name: postgresql_query - remove SQL script if exists
become: yes
file:
    path: '~{{ pg_user }}/test.sql'
state: absent
ignore_errors: yes
- name: postgresql_query - create an empty file to check permission
become: yes
file:
    path: '~{{ pg_user }}/test.sql'
state: touch
owner: '{{ pg_user }}'
group: '{{ pg_user }}'
mode: 0644
register: sql_file_created
ignore_errors: yes
- name: postgresql_query - prepare SQL script
become_user: "{{ pg_user }}"
become: yes
  shell: 'echo "{{ item }}" >> ~{{ pg_user }}/test.sql'
ignore_errors: yes
with_items:
- SELECT version();
- SELECT story FROM test_table
- WHERE id = %s;
when: sql_file_created
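# The loop above leaves ~{{ pg_user }}/test.sql containing:
#   SELECT version();
#   SELECT story FROM test_table
#   WHERE id = %s;
# i.e. two statements, the second of which expects one positional argument.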
##############
# Start tests:
#
# Run ANALYZE command:
- name: postgresql_query - analyze test_table
become_user: "{{ pg_user }}"
become: yes
postgresql_query:
login_user: "{{ pg_user }}"
db: postgres
query: ANALYZE test_table
register: result
ignore_errors: yes
- assert:
that:
- result is changed
- result.query == 'ANALYZE test_table'
- result.rowcount == 0
- result.statusmessage == 'ANALYZE'
- result.query_result == {}
# Run queries from SQL script:
- name: postgresql_query - run queries from SQL script
become_user: "{{ pg_user }}"
become: yes
postgresql_query:
login_user: "{{ pg_user }}"
db: postgres
path_to_script: '~{{ pg_user }}/test.sql'
positional_args:
- 1
register: result
ignore_errors: yes
when: sql_file_created
- assert:
that:
- result is not changed
- result.query == 'SELECT version();\nSELECT story FROM test_table\nWHERE id = 1;\n'
- result.rowcount == 1
- result.statusmessage == 'SELECT 1' or result.statusmessage == 'SELECT'
- result.query_result[0].story == 'first'
when: sql_file_created
# Simple select query:
- name: postgresql_query - simple select query to test_table
become_user: "{{ pg_user }}"
become: yes
postgresql_query:
login_user: "{{ pg_user }}"
db: postgres
query: SELECT * FROM test_table
register: result
ignore_errors: yes
- assert:
that:
- result is not changed
- result.query == 'SELECT * FROM test_table'
- result.rowcount == 3
- result.statusmessage == 'SELECT 3' or result.statusmessage == 'SELECT'
- result.query_result[0].id == 1
- result.query_result[1].id == 2
- result.query_result[2].id == 3
- result.query_result[0].story == 'first'
- result.query_result[1].story == 'second'
- result.query_result[2].story == 'third'
# Select query with named_args:
- name: postgresql_query - select query with named args
become_user: "{{ pg_user }}"
become: yes
postgresql_query:
login_user: "{{ pg_user }}"
db: postgres
query: SELECT id FROM test_table WHERE id = %(id_val)s AND story = %(story_val)s
named_args:
id_val: 1
story_val: first
register: result
ignore_errors: yes
- assert:
that:
- result is not changed
- result.query == "SELECT id FROM test_table WHERE id = 1 AND story = 'first'" or result.query == "SELECT id FROM test_table WHERE id = 1 AND story = E'first'"
- result.rowcount == 1
- result.statusmessage == 'SELECT 1' or result.statusmessage == 'SELECT'
- result.query_result[0].id == 1
# Select query with positional arguments:
- name: postgresql_query - select query with positional arguments
become_user: "{{ pg_user }}"
become: yes
postgresql_query:
login_user: "{{ pg_user }}"
db: postgres
query: SELECT story FROM test_table WHERE id = %s AND story = %s
positional_args:
- 2
- second
register: result
ignore_errors: yes
- assert:
that:
- result is not changed
- result.query == "SELECT story FROM test_table WHERE id = 2 AND story = 'second'" or result.query == "SELECT story FROM test_table WHERE id = 2 AND story = E'second'"
- result.rowcount == 1
- result.statusmessage == 'SELECT 1' or result.statusmessage == 'SELECT'
- result.query_result[0].story == 'second'
# Simple update query (positional_args and named args were checked by the previous tests):
- name: postgresql_query - simple update query
become_user: "{{ pg_user }}"
become: yes
postgresql_query:
login_user: "{{ pg_user }}"
db: postgres
query: UPDATE test_table SET story = 'new' WHERE id = 3
register: result
ignore_errors: yes
- assert:
that:
- result is changed
- result.query == "UPDATE test_table SET story = 'new' WHERE id = 3"
- result.rowcount == 1
- result.statusmessage == 'UPDATE 1'
- result.query_result == {}
# Check:
- name: check the previous update
become_user: "{{ pg_user }}"
become: yes
postgresql_query:
login_user: "{{ pg_user }}"
db: postgres
query: SELECT * FROM test_table WHERE story = 'new' AND id = 3
register: result
- assert:
that:
- result.rowcount == 1
# Test check_mode:
- name: postgresql_query - simple update query in check_mode
become_user: "{{ pg_user }}"
become: yes
postgresql_query:
login_user: "{{ pg_user }}"
db: postgres
query: UPDATE test_table SET story = 'CHECK_MODE' WHERE id = 3
register: result
check_mode: yes
- assert:
that:
- result is changed
- result.query == "UPDATE test_table SET story = 'CHECK_MODE' WHERE id = 3"
- result.rowcount == 1
- result.statusmessage == 'UPDATE 1'
- result.query_result == {}
# Check:
- name: check the previous update that nothing has been changed
become_user: "{{ pg_user }}"
become: yes
postgresql_query:
login_user: "{{ pg_user }}"
db: postgres
query: SELECT * FROM test_table WHERE story = 'CHECK_MODE' AND id = 3
register: result
- assert:
that:
- result.rowcount == 0
# Try to update not existing row:
- name: postgresql_query - try to update not existing row
become_user: "{{ pg_user }}"
become: yes
postgresql_query:
login_user: "{{ pg_user }}"
db: postgres
query: UPDATE test_table SET story = 'new' WHERE id = 100
register: result
ignore_errors: yes
- assert:
that:
- result is not changed
- result.query == "UPDATE test_table SET story = 'new' WHERE id = 100"
- result.rowcount == 0
- result.statusmessage == 'UPDATE 0'
- result.query_result == {}
# Simple insert query positional_args:
- name: postgresql_query - insert query
become_user: "{{ pg_user }}"
become: yes
postgresql_query:
login_user: "{{ pg_user }}"
db: postgres
query: INSERT INTO test_table (id, story) VALUES (%s, %s)
positional_args:
- 4
- fourth
register: result
ignore_errors: yes
- assert:
that:
- result is changed
- result.query == "INSERT INTO test_table (id, story) VALUES (4, 'fourth')" or result.query == "INSERT INTO test_table (id, story) VALUES (4, E'fourth')"
- result.rowcount == 1
- result.statusmessage == 'INSERT 0 1'
- result.query_result == {}
# Truncate table:
- name: postgresql_query - truncate test_table
become_user: "{{ pg_user }}"
become: yes
postgresql_query:
login_user: "{{ pg_user }}"
db: postgres
query: TRUNCATE test_table
register: result
ignore_errors: yes
- assert:
that:
- result is changed
- result.query == "TRUNCATE test_table"
- result.rowcount == 0
- result.statusmessage == 'TRUNCATE TABLE'
- result.query_result == {}
# Try DDL query:
- name: postgresql_query - alter test_table
become_user: "{{ pg_user }}"
become: yes
postgresql_query:
login_user: "{{ pg_user }}"
db: postgres
query: ALTER TABLE test_table ADD COLUMN foo int
register: result
ignore_errors: yes
- assert:
that:
- result is changed
- result.query == "ALTER TABLE test_table ADD COLUMN foo int"
- result.rowcount == 0
- result.statusmessage == 'ALTER TABLE'
#############################
# Test autocommit parameter #
#############################
- name: postgresql_query - vacuum without autocommit must fail
become_user: "{{ pg_user }}"
become: yes
postgresql_query:
login_user: "{{ pg_user }}"
db: postgres
query: VACUUM
register: result
ignore_errors: yes
- assert:
that:
- result.failed == true
- name: postgresql_query - autocommit in check_mode must fail
become_user: "{{ pg_user }}"
become: yes
postgresql_query:
login_user: "{{ pg_user }}"
db: postgres
query: VACUUM
autocommit: yes
check_mode: yes
register: result
ignore_errors: yes
- assert:
that:
- result.failed == true
- result.msg == "Using autocommit is mutually exclusive with check_mode"
- name: postgresql_query - vacuum with autocommit
become_user: "{{ pg_user }}"
become: yes
postgresql_query:
login_user: "{{ pg_user }}"
db: postgres
query: VACUUM
autocommit: yes
register: result
- assert:
that:
- result is changed
- result.query == "VACUUM"
- result.rowcount == 0
- result.statusmessage == 'VACUUM'
- result.query_result == {}
#
# Issue 59955
#
- name: postgresql_query - create test table for issue 59955
become_user: "{{ pg_user }}"
become: yes
postgresql_table:
login_user: "{{ pg_user }}"
login_db: postgres
name: test_array_table
columns:
- arr_col int[]
when: postgres_version_resp.stdout is version('9.4', '>=')
- set_fact:
my_list:
- 1
- 2
- 3
my_arr: '{1, 2, 3}'
when: postgres_version_resp.stdout is version('9.4', '>=')
- name: postgresql_query - insert array into test table by positional args
become_user: "{{ pg_user }}"
become: yes
postgresql_query:
login_user: "{{ pg_user }}"
login_db: postgres
query: INSERT INTO test_array_table (arr_col) VALUES (%s)
positional_args:
- '{{ my_list }}'
register: result
when: postgres_version_resp.stdout is version('9.4', '>=')
- assert:
that:
- result is changed
- result.query == "INSERT INTO test_array_table (arr_col) VALUES ('{1, 2, 3}')"
when: postgres_version_resp.stdout is version('9.4', '>=')
- name: postgresql_query - select array from test table by passing positional_args
become_user: "{{ pg_user }}"
become: yes
postgresql_query:
login_user: "{{ pg_user }}"
login_db: postgres
query: SELECT * FROM test_array_table WHERE arr_col = %s
positional_args:
- '{{ my_list }}'
register: result
when: postgres_version_resp.stdout is version('9.4', '>=')
- assert:
that:
- result is not changed
- result.query == "SELECT * FROM test_array_table WHERE arr_col = '{1, 2, 3}'"
- result.rowcount == 1
when: postgres_version_resp.stdout is version('9.4', '>=')
- name: postgresql_query - select array from test table by passing named_args
become_user: "{{ pg_user }}"
become: yes
postgresql_query:
login_user: "{{ pg_user }}"
login_db: postgres
query: SELECT * FROM test_array_table WHERE arr_col = %(arr_val)s
named_args:
arr_val:
- '{{ my_list }}'
register: result
when: postgres_version_resp.stdout is version('9.4', '>=')
- assert:
that:
- result is not changed
- result.query == "SELECT * FROM test_array_table WHERE arr_col = '{1, 2, 3}'"
- result.rowcount == 1
when: postgres_version_resp.stdout is version('9.4', '>=')
- name: postgresql_query - select array from test table by passing positional_args as a string
become_user: "{{ pg_user }}"
become: yes
postgresql_query:
login_user: "{{ pg_user }}"
login_db: postgres
query: SELECT * FROM test_array_table WHERE arr_col = %s
positional_args:
- '{{ my_arr|string }}'
register: result
when: postgres_version_resp.stdout is version('9.4', '>=')
- assert:
that:
- result is not changed
- result.query == "SELECT * FROM test_array_table WHERE arr_col = '{1, 2, 3}'"
- result.rowcount == 1
when: postgres_version_resp.stdout is version('9.4', '>=')
- name: postgresql_query - clean up
become_user: "{{ pg_user }}"
become: yes
postgresql_table:
login_user: "{{ pg_user }}"
login_db: postgres
name: test_array_table
state: absent
when: postgres_version_resp.stdout is version('9.4', '>=')
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,369 |
terraform plan does not provide output for 2.9.0
|
##### SUMMARY
Running terraform plan from ansible 2.9.0 does not provide expected output in stdout. This may be similar to issue #46589.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
terraform
##### ANSIBLE VERSION
```
# ansible --version
ansible 2.9.0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0]
```
##### CONFIGURATION
```
# ansible-config dump --only-changed
ALLOW_WORLD_READABLE_TMPFILES(/etc/ansible/ansible.cfg) = True
ANSIBLE_NOCOWS(/etc/ansible/ansible.cfg) = True
DEFAULT_GATHERING(/etc/ansible/ansible.cfg) = explicit
DEFAULT_STDOUT_CALLBACK(/etc/ansible/ansible.cfg) = debug
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
INTERPRETER_PYTHON(/etc/ansible/ansible.cfg) = auto_silent
```
##### OS / ENVIRONMENT
- Ubuntu 18.04.3 LTS
```
# uname -a
Linux 57f42cc031d6 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
# terraform --version
Terraform v0.12.13
# pip freeze
ansible==2.9.0
asn1crypto==0.24.0
cryptography==2.1.4
enum34==1.1.6
httplib2==0.9.2
idna==2.6
ipaddress==1.0.17
Jinja2==2.10
keyring==10.6.0
keyrings.alt==3.0
MarkupSafe==1.0
paramiko==2.0.0
pyasn1==0.4.2
pycrypto==2.6.1
pygobject==3.26.1
pyxdg==0.25
PyYAML==3.12
SecretStorage==2.3.1
six==1.11.0
# pip3 freeze
asn1crypto==0.24.0
awscli==1.16.272
boto==2.49.0
boto3==1.10.8
botocore==1.13.8
chardet==3.0.4
colorama==0.4.1
cryptography==2.1.4
docutils==0.15.2
idna==2.6
jmespath==0.9.4
keyring==10.6.0
keyrings.alt==3.0
pyasn1==0.4.7
pycrypto==2.6.1
pygobject==3.26.1
python-apt==1.6.4
python-dateutil==2.8.1
python-debian==0.1.32
pyxdg==0.25
PyYAML==5.1.2
rsa==3.4.2
s3transfer==0.2.1
SecretStorage==2.3.1
six==1.11.0
unattended-upgrades==0.1
urllib3==1.25.6
virtualenv==15.1.0
```
##### STEPS TO REPRODUCE
Run terraform plan from ansible
```
############
# Run terraform plan
- name: Run terraform plan for VPC resources
terraform:
state: planned
project_path: "{{ fileTerraformWorkingPath }}"
plan_file: "{{ fileTerraformWorkingPath }}/plan.tfplan"
force_init: yes
backend_config:
region: "{{ nameAWSRegion }}"
register: vpc_tf_stack
############
# Print information about the base VPC
- name: Display everything with terraform
debug:
var: vpc_tf_stack
- name: Display all terraform output for VPC
debug:
var: vpc_tf_stack.stdout
```
##### EXPECTED RESULTS
Terraform plan stdout is shown.
##### ACTUAL RESULTS
Nothing is shown in stdout.
```
TASK [planTerraform : Display everything with terraform] *************************************************************************************************************************************
ok: [127.0.0.1] => {
"vpc_tf_stack": {
"changed": false,
"command": "/usr/bin/terraform plan -input=false -no-color -detailed-exitcode -out /tmp/terraform-20191104141619238338988/plan.tfplan /tmp/terraform-20191104141619238338988/plan.tfplan",
"failed": false,
"outputs": {
"idAdminHostedZone": {
"sensitive": false,
"type": "string",
"value": "Z23423423423423423423"
},
"idExternalSecurityGroup": {
"sensitive": false,
"type": "string",
"value": "sg-12312312312312312"
},
"idIGW": {
"sensitive": false,
"type": "string",
"value": "igw-12312312312312312"
},
"idLocalHostedZone": {
"sensitive": false,
"type": "string",
"value": "Z12312312312312312312"
},
"idPublicRouteTable": {
"sensitive": false,
"type": "string",
"value": "rtb-12312312312312312"
},
"idVPC": {
"sensitive": false,
"type": "string",
"value": "vpc-12312312312312312"
}
},
"state": "planned",
"stderr": "",
"stderr_lines": [],
"stdout": "",
"stdout_lines": [],
"workspace": "default"
}
}
TASK [planTerraform : Display all terraform output for VPC] **********************************************************************************************************************************
ok: [127.0.0.1] => {
"vpc_tf_stack.stdout": ""
}
```
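For anyone verifying the fix, here is a minimal task sketch (paths are placeholders) — once the plan output is no longer suppressed, the debug task should print the full plan text:
```yaml
- name: Run terraform plan and keep its output
  terraform:
    state: planned
    project_path: /opt/infra/vpc            # placeholder path
    plan_file: /opt/infra/vpc/plan.tfplan   # placeholder path
    force_init: yes
  register: tf_plan

- name: Show the plan text
  debug:
    var: tf_plan.stdout
```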
|
https://github.com/ansible/ansible/issues/64369
|
https://github.com/ansible/ansible/pull/66322
|
73d343b7aa79030e026f90ad7e33d73921319ff4
|
62991a8bdd4599bfdcdb94e2fa1c3718bb4d3510
| 2019-11-04T03:28:10Z |
python
| 2020-01-15T04:12:58Z |
changelogs/fragments/66322-moved_line_causing_terraform_output_suppression.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,369 |
terraform plan does not provide output for 2.9.0
|
##### SUMMARY
Running terraform plan from ansible 2.9.0 does not provide expected output in stdout. This may be similar to issue #46589.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
terraform
##### ANSIBLE VERSION
```
# ansible --version
ansible 2.9.0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0]
```
##### CONFIGURATION
```
# ansible-config dump --only-changed
ALLOW_WORLD_READABLE_TMPFILES(/etc/ansible/ansible.cfg) = True
ANSIBLE_NOCOWS(/etc/ansible/ansible.cfg) = True
DEFAULT_GATHERING(/etc/ansible/ansible.cfg) = explicit
DEFAULT_STDOUT_CALLBACK(/etc/ansible/ansible.cfg) = debug
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
INTERPRETER_PYTHON(/etc/ansible/ansible.cfg) = auto_silent
```
##### OS / ENVIRONMENT
- Ubuntu 18.04.3 LTS
```
# uname -a
Linux 57f42cc031d6 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
# terraform --version
Terraform v0.12.13
# pip freeze
ansible==2.9.0
asn1crypto==0.24.0
cryptography==2.1.4
enum34==1.1.6
httplib2==0.9.2
idna==2.6
ipaddress==1.0.17
Jinja2==2.10
keyring==10.6.0
keyrings.alt==3.0
MarkupSafe==1.0
paramiko==2.0.0
pyasn1==0.4.2
pycrypto==2.6.1
pygobject==3.26.1
pyxdg==0.25
PyYAML==3.12
SecretStorage==2.3.1
six==1.11.0
# pip3 freeze
asn1crypto==0.24.0
awscli==1.16.272
boto==2.49.0
boto3==1.10.8
botocore==1.13.8
chardet==3.0.4
colorama==0.4.1
cryptography==2.1.4
docutils==0.15.2
idna==2.6
jmespath==0.9.4
keyring==10.6.0
keyrings.alt==3.0
pyasn1==0.4.7
pycrypto==2.6.1
pygobject==3.26.1
python-apt==1.6.4
python-dateutil==2.8.1
python-debian==0.1.32
pyxdg==0.25
PyYAML==5.1.2
rsa==3.4.2
s3transfer==0.2.1
SecretStorage==2.3.1
six==1.11.0
unattended-upgrades==0.1
urllib3==1.25.6
virtualenv==15.1.0
```
##### STEPS TO REPRODUCE
Run terraform plan from ansible
```
############
# Run terraform plan
- name: Run terraform plan for VPC resources
terraform:
state: planned
project_path: "{{ fileTerraformWorkingPath }}"
plan_file: "{{ fileTerraformWorkingPath }}/plan.tfplan"
force_init: yes
backend_config:
region: "{{ nameAWSRegion }}"
register: vpc_tf_stack
############
# Print information about the base VPC
- name: Display everything with terraform
debug:
var: vpc_tf_stack
- name: Display all terraform output for VPC
debug:
var: vpc_tf_stack.stdout
```
##### EXPECTED RESULTS
Terraform plan stdout is shown.
##### ACTUAL RESULTS
Nothing is shown in stdout.
```
TASK [planTerraform : Display everything with terraform] *************************************************************************************************************************************
ok: [127.0.0.1] => {
"vpc_tf_stack": {
"changed": false,
"command": "/usr/bin/terraform plan -input=false -no-color -detailed-exitcode -out /tmp/terraform-20191104141619238338988/plan.tfplan /tmp/terraform-20191104141619238338988/plan.tfplan",
"failed": false,
"outputs": {
"idAdminHostedZone": {
"sensitive": false,
"type": "string",
"value": "Z23423423423423423423"
},
"idExternalSecurityGroup": {
"sensitive": false,
"type": "string",
"value": "sg-12312312312312312"
},
"idIGW": {
"sensitive": false,
"type": "string",
"value": "igw-12312312312312312"
},
"idLocalHostedZone": {
"sensitive": false,
"type": "string",
"value": "Z12312312312312312312"
},
"idPublicRouteTable": {
"sensitive": false,
"type": "string",
"value": "rtb-12312312312312312"
},
"idVPC": {
"sensitive": false,
"type": "string",
"value": "vpc-12312312312312312"
}
},
"state": "planned",
"stderr": "",
"stderr_lines": [],
"stdout": "",
"stdout_lines": [],
"workspace": "default"
}
}
TASK [planTerraform : Display all terraform output for VPC] **********************************************************************************************************************************
ok: [127.0.0.1] => {
"vpc_tf_stack.stdout": ""
}
```
|
https://github.com/ansible/ansible/issues/64369
|
https://github.com/ansible/ansible/pull/66322
|
73d343b7aa79030e026f90ad7e33d73921319ff4
|
62991a8bdd4599bfdcdb94e2fa1c3718bb4d3510
| 2019-11-04T03:28:10Z |
python
| 2020-01-15T04:12:58Z |
lib/ansible/modules/cloud/misc/terraform.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2017, Ryan Scott Brown <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: terraform
short_description: Manages a Terraform deployment (and plans)
description:
- Provides support for deploying resources with Terraform and pulling
resource information back into Ansible.
version_added: "2.5"
options:
state:
choices: ['planned', 'present', 'absent']
description:
- Goal state of given stage/project
required: false
default: present
binary_path:
description:
- The path of a terraform binary to use, relative to the 'service_path'
unless you supply an absolute path.
required: false
project_path:
description:
- The path to the root of the Terraform directory with the
vars.tf/main.tf/etc to use.
required: true
workspace:
description:
- The terraform workspace to work with.
required: false
default: default
version_added: 2.7
purge_workspace:
description:
- Only works with state = absent
- If true, the workspace will be deleted after the "terraform destroy" action.
- The 'default' workspace will not be deleted.
required: false
default: false
type: bool
version_added: 2.7
plan_file:
description:
- The path to an existing Terraform plan file to apply. If this is not
specified, Ansible will build a new TF plan and execute it.
Note that this option is required if 'state' has the 'planned' value.
required: false
state_file:
description:
- The path to an existing Terraform state file to use when building plan.
If this is not specified, the default `terraform.tfstate` will be used.
- This option is ignored when plan is specified.
required: false
variables_file:
description:
- The path to a variables file for Terraform to fill into the TF
configurations.
required: false
variables:
description:
- A group of key-values to override template variables or those in
variables files.
required: false
targets:
description:
- A list of specific resources to target in this plan/application. The
resources selected here will also auto-include any dependencies.
required: false
lock:
description:
- Enable statefile locking, if you use a service that accepts locks (such
as S3+DynamoDB) to store your statefile.
required: false
type: bool
lock_timeout:
description:
- How long to maintain the lock on the statefile, if you use a service
that accepts locks (such as S3+DynamoDB).
required: false
force_init:
description:
- To avoid duplicating infra, if a state file can't be found this will
force a `terraform init`. Generally, this should be turned off unless
you intend to provision an entirely new Terraform deployment.
default: false
required: false
type: bool
backend_config:
description:
- A group of key-values to provide at init stage to the -backend-config parameter.
required: false
version_added: 2.7
notes:
- To just run a `terraform plan`, use check mode.
requirements: [ "terraform" ]
author: "Ryan Scott Brown (@ryansb)"
'''
EXAMPLES = """
# Basic deploy of a service
- terraform:
project_path: '{{ project_dir }}'
state: present
# Define the backend configuration at init
- terraform:
project_path: 'project/'
state: "{{ state }}"
force_init: true
backend_config:
region: "eu-west-1"
bucket: "some-bucket"
key: "random.tfstate"
"""
RETURN = """
outputs:
type: complex
description: A dictionary of all the TF outputs by their assigned name. Use `.outputs.MyOutputName.value` to access the value.
returned: on success
sample: '{"bukkit_arn": {"sensitive": false, "type": "string", "value": "arn:aws:s3:::tf-test-bukkit"}'
contains:
sensitive:
type: bool
returned: always
description: Whether Terraform has marked this value as sensitive
type:
type: str
returned: always
description: The type of the value (string, int, etc)
value:
returned: always
description: The value of the output as interpolated by Terraform
stdout:
type: str
description: Full `terraform` command stdout, in case you want to display it or examine the event log
returned: always
sample: ''
command:
type: str
description: Full `terraform` command built by this module, in case you want to re-run the command outside the module or debug a problem.
returned: always
sample: terraform apply ...
"""
import os
import json
import tempfile
import traceback
from ansible.module_utils.six.moves import shlex_quote
from ansible.module_utils.basic import AnsibleModule
DESTROY_ARGS = ('destroy', '-no-color', '-force')
APPLY_ARGS = ('apply', '-no-color', '-input=false', '-auto-approve=true')
module = None
def preflight_validation(bin_path, project_path, variables_args=None, plan_file=None):
if project_path in [None, ''] or '/' not in project_path:
module.fail_json(msg="Path for Terraform project can not be None or ''.")
if not os.path.exists(bin_path):
module.fail_json(msg="Path for Terraform binary '{0}' doesn't exist on this host - check the path and try again please.".format(bin_path))
if not os.path.isdir(project_path):
module.fail_json(msg="Path for Terraform project '{0}' doesn't exist on this host - check the path and try again please.".format(project_path))
rc, out, err = module.run_command([bin_path, 'validate'] + variables_args, cwd=project_path, use_unsafe_shell=True)
if rc != 0:
module.fail_json(msg="Failed to validate Terraform configuration files:\r\n{0}".format(err))
def _state_args(state_file):
if state_file and os.path.exists(state_file):
return ['-state', state_file]
if state_file and not os.path.exists(state_file):
module.fail_json(msg='Could not find state_file "{0}", check the path and try again.'.format(state_file))
return []
def init_plugins(bin_path, project_path, backend_config):
command = [bin_path, 'init', '-input=false']
if backend_config:
for key, val in backend_config.items():
command.extend([
'-backend-config',
shlex_quote('{0}={1}'.format(key, val))
])
rc, out, err = module.run_command(command, cwd=project_path)
if rc != 0:
module.fail_json(msg="Failed to initialize Terraform modules:\r\n{0}".format(err))
def get_workspace_context(bin_path, project_path):
workspace_ctx = {"current": "default", "all": []}
command = [bin_path, 'workspace', 'list', '-no-color']
rc, out, err = module.run_command(command, cwd=project_path)
if rc != 0:
module.warn("Failed to list Terraform workspaces:\r\n{0}".format(err))
for item in out.split('\n'):
stripped_item = item.strip()
if not stripped_item:
continue
elif stripped_item.startswith('* '):
workspace_ctx["current"] = stripped_item.replace('* ', '')
else:
workspace_ctx["all"].append(stripped_item)
return workspace_ctx
def _workspace_cmd(bin_path, project_path, action, workspace):
command = [bin_path, 'workspace', action, workspace, '-no-color']
rc, out, err = module.run_command(command, cwd=project_path)
if rc != 0:
module.fail_json(msg="Failed to {0} workspace:\r\n{1}".format(action, err))
return rc, out, err
def create_workspace(bin_path, project_path, workspace):
_workspace_cmd(bin_path, project_path, 'new', workspace)
def select_workspace(bin_path, project_path, workspace):
_workspace_cmd(bin_path, project_path, 'select', workspace)
def remove_workspace(bin_path, project_path, workspace):
_workspace_cmd(bin_path, project_path, 'delete', workspace)
def build_plan(command, project_path, variables_args, state_file, targets, state, plan_path=None):
if plan_path is None:
f, plan_path = tempfile.mkstemp(suffix='.tfplan')
plan_command = [command[0], 'plan', '-input=false', '-no-color', '-detailed-exitcode', '-out', plan_path]
for t in (module.params.get('targets') or []):
plan_command.extend(['-target', t])
plan_command.extend(_state_args(state_file))
rc, out, err = module.run_command(plan_command + variables_args, cwd=project_path, use_unsafe_shell=True)
if rc == 0:
# no changes
return plan_path, False, out, err, plan_command if state == 'planned' else command
elif rc == 1:
# failure to plan
module.fail_json(msg='Terraform plan could not be created\r\nSTDOUT: {0}\r\n\r\nSTDERR: {1}'.format(out, err))
elif rc == 2:
# changes, but successful
return plan_path, True, out, err, plan_command if state == 'planned' else command
module.fail_json(msg='Terraform plan failed with unexpected exit code {0}. \r\nSTDOUT: {1}\r\n\r\nSTDERR: {2}'.format(rc, out, err))
def main():
global module
module = AnsibleModule(
argument_spec=dict(
project_path=dict(required=True, type='path'),
binary_path=dict(type='path'),
workspace=dict(required=False, type='str', default='default'),
purge_workspace=dict(type='bool', default=False),
state=dict(default='present', choices=['present', 'absent', 'planned']),
variables=dict(type='dict'),
variables_file=dict(type='path'),
plan_file=dict(type='path'),
state_file=dict(type='path'),
targets=dict(type='list', default=[]),
lock=dict(type='bool', default=True),
lock_timeout=dict(type='int',),
force_init=dict(type='bool', default=False),
backend_config=dict(type='dict', default=None),
),
required_if=[('state', 'planned', ['plan_file'])],
supports_check_mode=True,
)
project_path = module.params.get('project_path')
bin_path = module.params.get('binary_path')
workspace = module.params.get('workspace')
purge_workspace = module.params.get('purge_workspace')
state = module.params.get('state')
variables = module.params.get('variables') or {}
variables_file = module.params.get('variables_file')
plan_file = module.params.get('plan_file')
state_file = module.params.get('state_file')
force_init = module.params.get('force_init')
backend_config = module.params.get('backend_config')
if bin_path is not None:
command = [bin_path]
else:
command = [module.get_bin_path('terraform', required=True)]
if force_init:
init_plugins(command[0], project_path, backend_config)
workspace_ctx = get_workspace_context(command[0], project_path)
if workspace_ctx["current"] != workspace:
if workspace not in workspace_ctx["all"]:
create_workspace(command[0], project_path, workspace)
else:
select_workspace(command[0], project_path, workspace)
if state == 'present':
command.extend(APPLY_ARGS)
elif state == 'absent':
command.extend(DESTROY_ARGS)
variables_args = []
for k, v in variables.items():
variables_args.extend([
'-var',
'{0}={1}'.format(k, v)
])
if variables_file:
variables_args.extend(['-var-file', variables_file])
preflight_validation(command[0], project_path, variables_args)
if module.params.get('lock') is not None:
if module.params.get('lock'):
command.append('-lock=true')
else:
command.append('-lock=false')
if module.params.get('lock_timeout') is not None:
command.append('-lock-timeout=%ds' % module.params.get('lock_timeout'))
for t in (module.params.get('targets') or []):
command.extend(['-target', t])
    # we aren't sure if this plan will result in changes, so assume yes
    needs_application, changed = True, False
    # initialize out/err before planning so that the output captured by
    # build_plan() below is not wiped out afterwards (plan stdout
    # suppression, see issue #64369)
    out, err = '', ''
    if state == 'absent':
        command.extend(variables_args)
    elif state == 'present' and plan_file:
        if any([os.path.isfile(project_path + "/" + plan_file), os.path.isfile(plan_file)]):
            command.append(plan_file)
        else:
            module.fail_json(msg='Could not find plan_file "{0}", check the path and try again.'.format(plan_file))
    else:
        plan_file, needs_application, out, err, command = build_plan(command, project_path, variables_args, state_file,
                                                                     module.params.get('targets'), state, plan_file)
        command.append(plan_file)
if needs_application and not module.check_mode and not state == 'planned':
rc, out, err = module.run_command(command, cwd=project_path)
# checks out to decide if changes were made during execution
if '0 added, 0 changed' not in out and not state == "absent" or '0 destroyed' not in out:
changed = True
if rc != 0:
module.fail_json(
msg="Failure when executing Terraform command. Exited {0}.\nstdout: {1}\nstderr: {2}".format(rc, out, err),
command=' '.join(command)
)
outputs_command = [command[0], 'output', '-no-color', '-json'] + _state_args(state_file)
rc, outputs_text, outputs_err = module.run_command(outputs_command, cwd=project_path)
if rc == 1:
module.warn("Could not get Terraform outputs. This usually means none have been defined.\nstdout: {0}\nstderr: {1}".format(outputs_text, outputs_err))
outputs = {}
elif rc != 0:
module.fail_json(
msg="Failure when getting Terraform outputs. "
"Exited {0}.\nstdout: {1}\nstderr: {2}".format(rc, outputs_text, outputs_err),
command=' '.join(outputs_command))
else:
outputs = json.loads(outputs_text)
# Restore the Terraform workspace found when running the module
if workspace_ctx["current"] != workspace:
select_workspace(command[0], project_path, workspace_ctx["current"])
if state == 'absent' and workspace != 'default' and purge_workspace is True:
remove_workspace(command[0], project_path, workspace)
module.exit_json(changed=changed, state=state, workspace=workspace, outputs=outputs, stdout=out, stderr=err, command=' '.join(command))
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,590 |
Add pywinrm version detail on message encryption over HTTP
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
We did not mention about the pywinrm version regarding message encryption over HTTP In the following document.
https://github.com/ansible/ansible/blob/devel/docs/docsite/rst/user_guide/windows_winrm.rst#winrm-encryption
So I suggest explaining that HTTP encryption needs pywinrm 0.3.0 or higher version.
https://github.com/diyan/pywinrm/blob/master/CHANGELOG.md#version-030
##### ISSUE TYPE
- Documentation Report
|
https://github.com/ansible/ansible/issues/65590
|
https://github.com/ansible/ansible/pull/65591
|
eab385e0061072f13e8ee1a5e2c90c7d292d8dbd
|
3bcb664497615bf2fca3e7f15187eebc8fd7abb6
| 2019-12-06T03:46:59Z |
python
| 2020-01-15T22:22:38Z |
docs/docsite/rst/user_guide/windows_winrm.rst
|
.. _windows_winrm:
Windows Remote Management
=========================
Unlike Linux/Unix hosts, which use SSH by default, Windows hosts are
configured with WinRM. This topic covers how to configure and use WinRM with Ansible.
.. contents:: Topics
:local:
What is WinRM?
``````````````
WinRM is a management protocol used by Windows to remotely communicate with
another server. It is a SOAP-based protocol that communicates over HTTP/HTTPS, and is
included in all recent Windows operating systems. Since Windows
Server 2012, WinRM has been enabled by default, but in most cases extra
configuration is required to use WinRM with Ansible.
Ansible uses the `pywinrm <https://github.com/diyan/pywinrm>`_ package to
communicate with Windows servers over WinRM. It is not installed by default
with the Ansible package, but can be installed by running the following:
.. code-block:: shell
pip install "pywinrm>=0.3.0"
.. Note:: on distributions with multiple python versions, use pip2 or pip2.x,
where x matches the python minor version Ansible is running under.
.. Warning::
Using the ``winrm`` or ``psrp`` connection plugins in Ansible on MacOS in
the latest releases typically fail. This is a known problem that occurs
deep within the Python stack and cannot be changed by Ansible. The only
workaround today is to set the environment variable ``no_proxy=*`` and
avoid using Kerberos auth.
Authentication Options
``````````````````````
When connecting to a Windows host, there are several different options that can be used
when authenticating with an account. The authentication type may be set on inventory
hosts or groups with the ``ansible_winrm_transport`` variable.
The following matrix is a high level overview of the options:
+-------------+----------------+---------------------------+-----------------------+-----------------+
| Option | Local Accounts | Active Directory Accounts | Credential Delegation | HTTP Encryption |
+=============+================+===========================+=======================+=================+
| Basic | Yes | No | No | No |
+-------------+----------------+---------------------------+-----------------------+-----------------+
| Certificate | Yes | No | No | No |
+-------------+----------------+---------------------------+-----------------------+-----------------+
| Kerberos | No | Yes | Yes | Yes |
+-------------+----------------+---------------------------+-----------------------+-----------------+
| NTLM | Yes | Yes | No | Yes |
+-------------+----------------+---------------------------+-----------------------+-----------------+
| CredSSP | Yes | Yes | Yes | Yes |
+-------------+----------------+---------------------------+-----------------------+-----------------+
Basic
-----
Basic authentication is one of the simplest authentication options to use, but is
also the most insecure. This is because the username and password are simply
base64 encoded, and if a secure channel is not in use (e.g. HTTPS) then it can be
decoded by anyone. Basic authentication can only be used for local accounts (not domain accounts).
The following example shows host vars configured for basic authentication:
.. code-block:: yaml+jinja
ansible_user: LocalUsername
ansible_password: Password
ansible_connection: winrm
ansible_winrm_transport: basic
Basic authentication is not enabled by default on a Windows host but can be
enabled by running the following in PowerShell::
Set-Item -Path WSMan:\localhost\Service\Auth\Basic -Value $true
Certificate
-----------
Certificate authentication uses certificates as keys similar to SSH key
pairs, but the file format and key generation process is different.
The following example shows host vars configured for certificate authentication:
.. code-block:: yaml+jinja
ansible_connection: winrm
ansible_winrm_cert_pem: /path/to/certificate/public/key.pem
ansible_winrm_cert_key_pem: /path/to/certificate/private/key.pem
ansible_winrm_transport: certificate
Certificate authentication is not enabled by default on a Windows host but can
be enabled by running the following in PowerShell::
Set-Item -Path WSMan:\localhost\Service\Auth\Certificate -Value $true
.. Note:: Encrypted private keys cannot be used as the urllib3 library that
is used by Ansible for WinRM does not support this functionality.
Generate a Certificate
++++++++++++++++++++++
A certificate must be generated before it can be mapped to a local user.
This can be done using one of the following methods:
* OpenSSL
* PowerShell, using the ``New-SelfSignedCertificate`` cmdlet
* Active Directory Certificate Services
Active Directory Certificate Services is beyond the scope of this documentation, but may
be the best option to use when running in a domain environment. For more information,
see the `Active Directory Certificate Services documentation <https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc732625(v=ws.11)>`_.
.. Note:: Using the PowerShell cmdlet ``New-SelfSignedCertificate`` to generate
a certificate for authentication only works when being generated from a
Windows 10 or Windows Server 2012 R2 host or later. OpenSSL is still required to
extract the private key from the PFX certificate to a PEM file for Ansible
to use.
To generate a certificate with ``OpenSSL``:
.. code-block:: shell
# Set the name of the local user that will have the key mapped to
USERNAME="username"
cat > openssl.conf << EOL
distinguished_name = req_distinguished_name
[req_distinguished_name]
[v3_req_client]
extendedKeyUsage = clientAuth
subjectAltName = otherName:1.3.6.1.4.1.311.20.2.3;UTF8:$USERNAME@localhost
EOL
export OPENSSL_CONF=openssl.conf
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -out cert.pem -outform PEM -keyout cert_key.pem -subj "/CN=$USERNAME" -extensions v3_req_client
rm openssl.conf
To generate a certificate with ``New-SelfSignedCertificate``:
.. code-block:: powershell
# Set the name of the local user that will have the key mapped
$username = "username"
$output_path = "C:\temp"
# Instead of generating a file, the cert will be added to the personal
# LocalComputer folder in the certificate store
$cert = New-SelfSignedCertificate -Type Custom `
-Subject "CN=$username" `
-TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.2","2.5.29.17={text}upn=$username@localhost") `
-KeyUsage DigitalSignature,KeyEncipherment `
-KeyAlgorithm RSA `
-KeyLength 2048
# Export the public key
$pem_output = @()
$pem_output += "-----BEGIN CERTIFICATE-----"
$pem_output += [System.Convert]::ToBase64String($cert.RawData) -replace ".{64}", "$&`n"
$pem_output += "-----END CERTIFICATE-----"
[System.IO.File]::WriteAllLines("$output_path\cert.pem", $pem_output)
# Export the private key in a PFX file
[System.IO.File]::WriteAllBytes("$output_path\cert.pfx", $cert.Export("Pfx"))
.. Note:: To convert the PFX file to a private key that pywinrm can use, run
the following command with OpenSSL
``openssl pkcs12 -in cert.pfx -nocerts -nodes -out cert_key.pem -passin pass: -passout pass:``
Import a Certificate to the Certificate Store
+++++++++++++++++++++++++++++++++++++++++++++
Once a certificate has been generated, the issuing certificate needs to be
imported into the ``Trusted Root Certificate Authorities`` of the
``LocalMachine`` store, and the client certificate public key must be present
in the ``Trusted People`` folder of the ``LocalMachine`` store. For this example,
both the issuing certificate and public key are the same.
The following example shows how to import the issuing certificate:
.. code-block:: powershell
$cert = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Certificate2
$cert.Import("cert.pem")
$store_name = [System.Security.Cryptography.X509Certificates.StoreName]::Root
$store_location = [System.Security.Cryptography.X509Certificates.StoreLocation]::LocalMachine
$store = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Store -ArgumentList $store_name, $store_location
$store.Open("MaxAllowed")
$store.Add($cert)
$store.Close()
.. Note:: If using ADCS to generate the certificate, then the issuing
certificate will already be imported and this step can be skipped.
The code to import the client certificate public key is:
.. code-block:: powershell
$cert = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Certificate2
$cert.Import("cert.pem")
$store_name = [System.Security.Cryptography.X509Certificates.StoreName]::TrustedPeople
$store_location = [System.Security.Cryptography.X509Certificates.StoreLocation]::LocalMachine
$store = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Store -ArgumentList $store_name, $store_location
$store.Open("MaxAllowed")
$store.Add($cert)
$store.Close()
Mapping a Certificate to an Account
+++++++++++++++++++++++++++++++++++
Once the certificate has been imported, map it to the local user account::
$username = "username"
$password = ConvertTo-SecureString -String "password" -AsPlainText -Force
$credential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $username, $password
# This is the issuer thumbprint which in the case of a self generated cert
# is the public key thumbprint, additional logic may be required for other
# scenarios
$thumbprint = (Get-ChildItem -Path cert:\LocalMachine\root | Where-Object { $_.Subject -eq "CN=$username" }).Thumbprint
New-Item -Path WSMan:\localhost\ClientCertificate `
-Subject "$username@localhost" `
-URI * `
-Issuer $thumbprint `
-Credential $credential `
-Force
Once this is complete, the hostvar ``ansible_winrm_cert_pem`` should be set to
the path of the public key and the ``ansible_winrm_cert_key_pem`` variable should be set to
the path of the private key.
NTLM
----
NTLM is an older authentication mechanism used by Microsoft that can support
both local and domain accounts. NTLM is enabled by default on the WinRM
service, so no setup is required before using it.
NTLM is the easiest authentication protocol to use and is more secure than
``Basic`` authentication. If running in a domain environment, ``Kerberos`` should be used
instead of NTLM.
Kerberos has several advantages over using NTLM:
* NTLM is an older protocol and does not support newer encryption
protocols.
* NTLM is slower to authenticate because it requires more round trips to the host in
the authentication stage.
* Unlike Kerberos, NTLM does not allow credential delegation.
This example shows host variables configured to use NTLM authentication:
.. code-block:: yaml+jinja
ansible_user: LocalUsername
ansible_password: Password
ansible_connection: winrm
ansible_winrm_transport: ntlm
Kerberos
--------
Kerberos is the recommended authentication option to use when running in a
domain environment. Kerberos supports features like credential delegation and
message encryption over HTTP and is one of the more secure options that
is available through WinRM.
Kerberos requires some additional setup work on the Ansible host before it can be
used properly.
The following example shows host vars configured for Kerberos authentication:
.. code-block:: yaml+jinja
ansible_user: [email protected]
ansible_password: Password
ansible_connection: winrm
ansible_winrm_transport: kerberos
As of Ansible version 2.3, the Kerberos ticket will be created based on
``ansible_user`` and ``ansible_password``. If running on an older version of
Ansible or when ``ansible_winrm_kinit_mode`` is ``manual``, a Kerberos
ticket must already be obtained. See below for more details.
There are some extra host variables that can be set::
ansible_winrm_kinit_mode: managed/manual (manual means Ansible will not obtain a ticket)
ansible_winrm_kinit_cmd: the kinit binary to use to obtain a Kerberos ticket (default to kinit)
ansible_winrm_service: overrides the SPN prefix that is used, the default is ``HTTP`` and should rarely ever need changing
ansible_winrm_kerberos_delegation: allows the credentials to traverse multiple hops
ansible_winrm_kerberos_hostname_override: the hostname to be used for the kerberos exchange
Installing the Kerberos Library
+++++++++++++++++++++++++++++++
There are some system dependencies that must be installed prior to using Kerberos. The script below lists the dependencies based on the distro:
.. code-block:: shell
# Via Yum (RHEL/Centos/Fedora)
yum -y install python-devel krb5-devel krb5-libs krb5-workstation
# Via Apt (Ubuntu)
sudo apt-get install python-dev libkrb5-dev krb5-user
# Via Portage (Gentoo)
emerge -av app-crypt/mit-krb5
emerge -av dev-python/setuptools
# Via Pkg (FreeBSD)
sudo pkg install security/krb5
# Via OpenCSW (Solaris)
pkgadd -d http://get.opencsw.org/now
/opt/csw/bin/pkgutil -U
/opt/csw/bin/pkgutil -y -i libkrb5_3
# Via Pacman (Arch Linux)
pacman -S krb5
Once the dependencies have been installed, the ``python-kerberos`` wrapper can
be installed using ``pip``:
.. code-block:: shell
pip install pywinrm[kerberos]
.. note::
While Ansible has supported Kerberos auth through ``pywinrm`` for some
time, optional features or more secure options may only be available in
newer versions of the ``pywinrm`` and/or ``pykerberos`` libraries. It is
recommended you upgrade each version to the latest available to resolve
any warnings or errors. This can be done through tools like ``pip`` or a
system package manager like ``dnf``, ``yum``, ``apt`` but the package
names and versions available may differ between tools.
Configuring Host Kerberos
+++++++++++++++++++++++++
Once the dependencies have been installed, Kerberos needs to be configured so
that it can communicate with a domain. This configuration is done through the
``/etc/krb5.conf`` file, which is installed with the packages in the script above.
To configure Kerberos, in the section that starts with:
.. code-block:: ini
[realms]
Add the full domain name and the fully qualified domain names of the primary
and secondary Active Directory domain controllers. It should look something
like this:
.. code-block:: ini
[realms]
MY.DOMAIN.COM = {
kdc = domain-controller1.my.domain.com
kdc = domain-controller2.my.domain.com
}
In the section that starts with:
.. code-block:: ini
[domain_realm]
Add a line like the following for each domain that Ansible needs access for:
.. code-block:: ini
[domain_realm]
.my.domain.com = MY.DOMAIN.COM
You can configure other settings in this file such as the default domain. See
`krb5.conf <https://web.mit.edu/kerberos/krb5-1.12/doc/admin/conf_files/krb5_conf.html>`_
for more details.
Automatic Kerberos Ticket Management
++++++++++++++++++++++++++++++++++++
Ansible version 2.3 and later defaults to automatically managing Kerberos tickets
when both ``ansible_user`` and ``ansible_password`` are specified for a host. In
this process, a new ticket is created in a temporary credential cache for each
host. This is done before each task executes to minimize the chance of ticket
expiration. The temporary credential caches are deleted after each task
completes and will not interfere with the default credential cache.
To disable automatic ticket management, set ``ansible_winrm_kinit_mode=manual``
via the inventory.
Automatic ticket management requires a standard ``kinit`` binary on the control
host system path. To specify a different location or binary name, set the
``ansible_winrm_kinit_cmd`` hostvar to the fully qualified path to a MIT krbv5
``kinit``-compatible binary.
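For example, host vars along these lines select manual ticket management, or keep
managed mode while pointing Ansible at a custom ``kinit`` (the binary path below
is purely illustrative):

.. code-block:: yaml+jinja

    # Disable automatic ticket management for this host
    ansible_winrm_kinit_mode: manual

    # Or keep managed mode but use a custom kinit binary (illustrative path)
    # ansible_winrm_kinit_mode: managed
    # ansible_winrm_kinit_cmd: /opt/heimdal/bin/kinit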
Manual Kerberos Ticket Management
+++++++++++++++++++++++++++++++++
To manually manage Kerberos tickets, the ``kinit`` binary is used. To
obtain a new ticket the following command is used:
.. code-block:: shell
kinit [email protected]
.. Note:: The domain must match the configured Kerberos realm exactly, and must be in upper case.
To see what tickets (if any) have been acquired, use the following command:
.. code-block:: shell
klist
To destroy all the tickets that have been acquired, use the following command:
.. code-block:: shell
kdestroy
Troubleshooting Kerberos
++++++++++++++++++++++++
Kerberos is reliant on a properly-configured environment to
work. To troubleshoot Kerberos issues, ensure that:
* The hostname set for the Windows host is the FQDN and not an IP address.
* The forward and reverse DNS lookups are working properly in the domain. To
test this, ping the windows host by name and then use the ip address returned
with ``nslookup``. The same name should be returned when using ``nslookup``
on the IP address.
* The Ansible host's clock is synchronized with the domain controller. Kerberos
is time sensitive, and a little clock drift can cause the ticket generation
process to fail.
* Ensure that the fully qualified domain name for the domain is configured in
the ``krb5.conf`` file. To check this, run::
kinit -C [email protected]
klist
If the domain name returned by ``klist`` is different from the one requested,
an alias is being used. The ``krb5.conf`` file needs to be updated so that
the fully qualified domain name is used and not an alias.
* If the default kerberos tooling has been replaced or modified (some IdM solutions may do this), this may cause issues when installing or upgrading the Python Kerberos library. As of the time of this writing, this library is called ``pykerberos`` and is known to work with both MIT and Heimdal Kerberos libraries. To resolve ``pykerberos`` installation issues, ensure the system dependencies for Kerberos have been met (see: `Installing the Kerberos Library`_), remove any custom Kerberos tooling paths from the PATH environment variable, and retry the installation of Python Kerberos library package.
CredSSP
-------
CredSSP authentication is a newer authentication protocol that allows
credential delegation. This is achieved by encrypting the username and password
after authentication has succeeded and sending that to the server using the
CredSSP protocol.
Because the username and password are sent to the server to be used for double
hop authentication, ensure that the hosts that the Windows host communicates with are
not compromised and are trusted.
CredSSP can be used for both local and domain accounts and also supports
message encryption over HTTP.
To use CredSSP authentication, the host vars are configured like so:
.. code-block:: yaml+jinja
ansible_user: Username
ansible_password: Password
ansible_connection: winrm
ansible_winrm_transport: credssp
There are some extra host variables that can be set as shown below::
ansible_winrm_credssp_disable_tlsv1_2: when true, will not use TLS 1.2 in the CredSSP auth process
CredSSP authentication is not enabled by default on a Windows host, but can
be enabled by running the following in PowerShell:
.. code-block:: powershell
Enable-WSManCredSSP -Role Server -Force
Installing CredSSP Library
++++++++++++++++++++++++++
The ``requests-credssp`` wrapper can be installed using ``pip``:
.. code-block:: bash
pip install pywinrm[credssp]
CredSSP and TLS 1.2
+++++++++++++++++++
By default the ``requests-credssp`` library is configured to authenticate over
the TLS 1.2 protocol. TLS 1.2 is installed and enabled by default for Windows Server 2012
and Windows 8 and more recent releases.
There are two ways that older hosts can be used with CredSSP:
* Install and enable a hotfix to enable TLS 1.2 support (recommended
for Server 2008 R2 and Windows 7).
* Set ``ansible_winrm_credssp_disable_tlsv1_2=True`` in the inventory to run
over TLS 1.0. This is the only option when connecting to Windows Server 2008, which
has no way of supporting TLS 1.2.
See :ref:`winrm_tls12` for more information on how to enable TLS 1.2 on the
Windows host.
Set CredSSP Certificate
+++++++++++++++++++++++
CredSSP works by encrypting the credentials through the TLS protocol and uses a self-signed certificate by default. The ``CertificateThumbprint`` option under the WinRM service configuration can be used to specify the thumbprint of
another certificate.
.. Note:: This certificate configuration is independent of the WinRM listener
certificate. With CredSSP, message transport still occurs over the WinRM listener,
but the TLS-encrypted messages inside the channel use the service-level certificate.
To explicitly set the certificate to use for CredSSP::
# Note the value $certificate_thumbprint will be different in each
# situation, this needs to be set based on the cert that is used.
$certificate_thumbprint = "7C8DCBD5427AFEE6560F4AF524E325915F51172C"
# Set the thumbprint value
Set-Item -Path WSMan:\localhost\Service\CertificateThumbprint -Value $certificate_thumbprint
Non-Administrator Accounts
``````````````````````````
WinRM is configured by default to only allow connections from accounts in the local
``Administrators`` group. This can be changed by running:
.. code-block:: powershell
winrm configSDDL default
This will display an ACL editor, where new users or groups may be added. To run commands
over WinRM, users and groups must have at least the ``Read`` and ``Execute`` permissions
enabled.
While non-administrative accounts can be used with WinRM, most typical server administration
tasks require some level of administrative access, so the utility is usually limited.
WinRM Encryption
````````````````
By default WinRM will fail to work when running over an unencrypted channel.
The WinRM protocol considers the channel to be encrypted if using TLS over HTTP
(HTTPS) or using message level encryption. Using WinRM with TLS is the
recommended option as it works with all authentication options, but requires
a certificate to be created and used on the WinRM listener.
The ``ConfigureRemotingForAnsible.ps1`` creates a self-signed certificate and
creates the listener with that certificate. If in a domain environment, ADCS
can also create a certificate for the host that is issued by the domain itself.
If using HTTPS is not an option, then HTTP can be used when the authentication
option is ``NTLM``, ``Kerberos`` or ``CredSSP``. These protocols will encrypt
the WinRM payload with their own encryption method before sending it to the
server. Note that message encryption over HTTP requires pywinrm version 0.3.0
or newer. The message-level encryption is not used when running over HTTPS
because the encryption uses the more secure TLS protocol instead. If both
transport and message encryption is required, set
``ansible_winrm_message_encryption=always`` in the host vars.
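As an illustration, host vars for a host that is only reachable over HTTP might
look like the following sketch (the transport choice is an example, not a
recommendation):

.. code-block:: yaml+jinja

    ansible_connection: winrm
    ansible_port: 5985
    ansible_winrm_scheme: http
    ansible_winrm_transport: ntlm
    # force message encryption to always be used
    ansible_winrm_message_encryption: always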
A last resort is to disable the encryption requirement on the Windows host. This
should only be used for development and debugging purposes, as anything sent
from Ansible can be viewed, manipulated and also the remote session can completely
be taken over by anyone on the same network. To disable the encryption
requirement::
Set-Item -Path WSMan:\localhost\Service\AllowUnencrypted -Value $true
.. Note:: Do not disable the encryption check unless it is
absolutely required. Doing so could allow sensitive information like
credentials and files to be intercepted by others on the network.
Inventory Options
`````````````````
Ansible's Windows support relies on a few standard variables to indicate the
username, password, and connection type of the remote hosts. These variables
are most easily set up in the inventory, but can be set on the ``host_vars``/
``group_vars`` level.
When setting up the inventory, the following variables are required:
.. code-block:: yaml+jinja
# It is suggested that these be encrypted with ansible-vault:
# ansible-vault edit group_vars/windows.yml
ansible_connection: winrm
# May also be passed on the command-line via --user
ansible_user: Administrator
# May also be supplied at runtime with --ask-pass
ansible_password: SecretPasswordGoesHere
Using the variables above, Ansible will connect to the Windows host with Basic
authentication through HTTPS. If ``ansible_user`` has a UPN value like
``[email protected]`` then the authentication option will automatically attempt
to use Kerberos unless ``ansible_winrm_transport`` has been set to something other than
``kerberos``.
The following custom inventory variables are also supported
for additional configuration of WinRM connections (a combined example follows
the list):
* ``ansible_port``: The port WinRM will run over, HTTPS is ``5986`` which is
the default while HTTP is ``5985``
* ``ansible_winrm_scheme``: Specify the connection scheme (``http`` or
``https``) to use for the WinRM connection. Ansible uses ``https`` by default
unless ``ansible_port`` is ``5985``
* ``ansible_winrm_path``: Specify an alternate path to the WinRM endpoint,
Ansible uses ``/wsman`` by default
* ``ansible_winrm_realm``: Specify the realm to use for Kerberos
authentication. If ``ansible_user`` contains ``@``, Ansible will use the part
of the username after ``@`` by default
* ``ansible_winrm_transport``: Specify one or more authentication transport
options as a comma-separated list. By default, Ansible will use ``kerberos,
basic`` if the ``kerberos`` module is installed and a realm is defined,
otherwise it will be ``plaintext``
* ``ansible_winrm_server_cert_validation``: Specify the server certificate
validation mode (``ignore`` or ``validate``). Ansible defaults to
``validate`` on Python 2.7.9 and higher, which will result in certificate
validation errors against the Windows self-signed certificates. Unless
verifiable certificates have been configured on the WinRM listeners, this
should be set to ``ignore``
* ``ansible_winrm_operation_timeout_sec``: Increase the default timeout for
WinRM operations, Ansible uses ``20`` by default
* ``ansible_winrm_read_timeout_sec``: Increase the WinRM read timeout, Ansible
uses ``30`` by default. Useful if there are intermittent network issues and
read timeout errors keep occurring
* ``ansible_winrm_message_encryption``: Specify the message encryption
operation (``auto``, ``always``, ``never``) to use, Ansible uses ``auto`` by
default. ``auto`` means message encryption is only used when
``ansible_winrm_scheme`` is ``http`` and ``ansible_winrm_transport`` supports
message encryption. ``always`` means message encryption will always be used
and ``never`` means message encryption will never be used
* ``ansible_winrm_ca_trust_path``: Used to specify a different cacert container
than the one used in the ``certifi`` module. See the HTTPS Certificate
Validation section for more details.
* ``ansible_winrm_send_cbt``: When using ``ntlm`` or ``kerberos`` over HTTPS,
the authentication library will try to send channel binding tokens to
mitigate against man in the middle attacks. This flag controls whether these
bindings will be sent or not (default: ``yes``).
* ``ansible_winrm_*``: Any additional keyword arguments supported by
``winrm.Protocol`` may be provided in place of ``*``
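Putting a few of these together, a ``group_vars`` sketch for a set of WinRM
hosts might look like this (values are examples only):

.. code-block:: yaml+jinja

    ansible_connection: winrm
    ansible_user: Administrator
    ansible_password: SecretPasswordGoesHere
    ansible_port: 5986
    ansible_winrm_server_cert_validation: ignore
    # raise the timeouts for slow links; keep the read timeout above the
    # operation timeout so reads do not give up before operations finish
    ansible_winrm_operation_timeout_sec: 60
    ansible_winrm_read_timeout_sec: 70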
In addition, there are also specific variables that need to be set
for each authentication option. See the section on authentication above for more information.
.. Note:: Ansible 2.0 has deprecated the "ssh" from ``ansible_ssh_user``,
``ansible_ssh_pass``, ``ansible_ssh_host``, and ``ansible_ssh_port`` to
become ``ansible_user``, ``ansible_password``, ``ansible_host``, and
``ansible_port``. If using a version of Ansible prior to 2.0, the older
style (``ansible_ssh_*``) should be used instead. The shorter variables
are ignored, without warning, in older versions of Ansible.
.. Note:: ``ansible_winrm_message_encryption`` is different from transport
encryption done over TLS. The WinRM payload is still encrypted with TLS
when run over HTTPS, even if ``ansible_winrm_message_encryption=never``.
IPv6 Addresses
``````````````
IPv6 addresses can be used instead of IPv4 addresses or hostnames. This option
is normally set in an inventory. Ansible will attempt to parse the address
using the `ipaddress <https://docs.python.org/3/library/ipaddress.html>`_
package and pass to pywinrm correctly.
When defining a host using an IPv6 address, just add the IPv6 address as you
would an IPv4 address or hostname:
.. code-block:: ini
[windows-server]
2001:db8::1
[windows-server:vars]
ansible_user=username
ansible_password=password
ansible_connection=winrm
.. Note:: The ipaddress library is only included by default in Python 3.x. To
use IPv6 addresses in Python 2.7, make sure to run ``pip install ipaddress`` which installs
a backported package.
HTTPS Certificate Validation
````````````````````````````
As part of the TLS protocol, the certificate is validated to ensure the host
matches the subject and the client trusts the issuer of the server certificate.
When using a self-signed certificate or setting
``ansible_winrm_server_cert_validation: ignore`` these security mechanisms are
bypassed. While self-signed certificates will always need the ``ignore`` flag,
certificates that have been issued from a certificate authority can still be
validated.
One of the more common ways of setting up a HTTPS listener in a domain
environment is to use Active Directory Certificate Service (AD CS). AD CS is
used to generate signed certificates from a Certificate Signing Request (CSR).
If the WinRM HTTPS listener is using a certificate that has been signed by
another authority, like AD CS, then Ansible can be set up to trust that
issuer as part of the TLS handshake.
To get Ansible to trust a Certificate Authority (CA) like AD CS, the issuer
certificate of the CA can be exported as a PEM encoded certificate. This
certificate can then be copied locally to the Ansible controller and used as a
source of certificate validation, otherwise known as a CA chain.
The CA chain can contain a single or multiple issuer certificates and each
entry is contained on a new line. To then use the custom CA chain as part of
the validation process, set ``ansible_winrm_ca_trust_path`` to the path of the
file. If this variable is not set, the default CA chain is used instead which
is located in the install path of the Python package
`certifi <https://github.com/certifi/python-certifi>`_.
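For example, after exporting the issuing CA certificate to a PEM file on the
Ansible controller, the host vars might look like this (the path is
illustrative):

.. code-block:: yaml+jinja

    ansible_winrm_server_cert_validation: validate
    # PEM-encoded CA chain exported from the issuing CA (illustrative path)
    ansible_winrm_ca_trust_path: /etc/pki/winrm/adcs-chain.pem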
.. Note:: Each HTTP call is done by the Python requests library which does not
use the systems built-in certificate store as a trust authority.
Certificate validation will fail if the server's certificate issuer is
only added to the system's truststore.
.. _winrm_tls12:
TLS 1.2 Support
```````````````
As WinRM runs over the HTTP protocol, using HTTPS means that the TLS protocol
is used to encrypt the WinRM messages. TLS will automatically attempt to
negotiate the best protocol and cipher suite that is available to both the
client and the server. If a match cannot be found then Ansible will error out
with a message similar to::
HTTPSConnectionPool(host='server', port=5986): Max retries exceeded with url: /wsman (Caused by SSLError(SSLError(1, '[SSL: UNSUPPORTED_PROTOCOL] unsupported protocol (_ssl.c:1056)')))
Commonly this is when the Windows host has not been configured to support
TLS v1.2 but it could also mean the Ansible controller has an older OpenSSL
version installed.
Windows 8 and Windows Server 2012 come with TLS v1.2 installed and enabled by
default but older hosts, like Server 2008 R2 and Windows 7, have to be enabled
manually.
.. Note:: There is a bug with the TLS 1.2 patch for Server 2008 which will stop
Ansible from connecting to the Windows host. This means that Server 2008
cannot be configured to use TLS 1.2. Server 2008 R2 and Windows 7 are not
affected by this issue and can use TLS 1.2.
To verify what protocol the Windows host supports, you can run the following
command on the Ansible controller::
openssl s_client -connect <hostname>:5986
The output will contain information about the TLS session and the ``Protocol``
line will display the version that was negotiated::
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-SHA
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1
Cipher : ECDHE-RSA-AES256-SHA
Session-ID: 962A00001C95D2A601BE1CCFA7831B85A7EEE897AECDBF3D9ECD4A3BE4F6AC9B
Session-ID-ctx:
Master-Key: ....
Start Time: 1552976474
Timeout : 7200 (sec)
Verify return code: 21 (unable to verify the first certificate)
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-RSA-AES256-GCM-SHA384
Session-ID: AE16000050DA9FD44D03BB8839B64449805D9E43DBD670346D3D9E05D1AEEA84
Session-ID-ctx:
Master-Key: ....
Start Time: 1552976538
Timeout : 7200 (sec)
Verify return code: 21 (unable to verify the first certificate)
If the host is returning ``TLSv1`` then it should be configured so that
TLS v1.2 is enabled. You can do this by running the following PowerShell
script:
.. code-block:: powershell
Function Enable-TLS12 {
param(
[ValidateSet("Server", "Client")]
[String]$Component = "Server"
)
$protocols_path = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols'
New-Item -Path "$protocols_path\TLS 1.2\$Component" -Force
New-ItemProperty -Path "$protocols_path\TLS 1.2\$Component" -Name Enabled -Value 1 -Type DWORD -Force
New-ItemProperty -Path "$protocols_path\TLS 1.2\$Component" -Name DisabledByDefault -Value 0 -Type DWORD -Force
}
Enable-TLS12 -Component Server
# Not required but highly recommended to enable the Client side TLS 1.2 components
Enable-TLS12 -Component Client
Restart-Computer
The below Ansible tasks can also be used to enable TLS v1.2:
.. code-block:: yaml+jinja
- name: enable TLSv1.2 support
win_regedit:
path: HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\{{ item.type }}
name: '{{ item.property }}'
data: '{{ item.value }}'
type: dword
state: present
register: enable_tls12
loop:
- type: Server
property: Enabled
value: 1
- type: Server
property: DisabledByDefault
value: 0
- type: Client
property: Enabled
value: 1
- type: Client
property: DisabledByDefault
value: 0
- name: reboot if TLS config was applied
win_reboot:
when: enable_tls12 is changed
There are other ways to configure the TLS protocols as well as the cipher
suites that are offered by the Windows host. One tool that can give you a GUI
to manage these settings is `IIS Crypto <https://www.nartac.com/Products/IISCrypto/>`_
from Nartac Software.
Limitations
```````````
Due to the design of the WinRM protocol, there are a few limitations
when using WinRM that can cause issues when creating playbooks for Ansible.
These include:
* Credentials are not delegated for most authentication types, which causes
authentication errors when accessing network resources or installing certain
programs.
* Many calls to the Windows Update API are blocked when running over WinRM.
* Some programs fail to install with WinRM due to no credential delegation or
because they access forbidden Windows API like WUA over WinRM.
* Commands under WinRM are done under a non-interactive session, which can prevent
certain commands or executables from running.
* You cannot run a process that interacts with ``DPAPI``, which is used by some
installers (like Microsoft SQL Server).
Some of these limitations can be mitigated by doing one of the following:
* Set ``ansible_winrm_transport`` to ``credssp`` or ``kerberos`` (with
``ansible_winrm_kerberos_delegation=true``) to bypass the double hop issue
and access network resources
* Use ``become`` to bypass all WinRM restrictions and run a command as it would
  run locally (see the sketch after this list). Unlike using an authentication
  transport like ``credssp``, this will also remove the non-interactive
  restriction and API restrictions like WUA and DPAPI
* Use a scheduled task to run a command which can be created with the
``win_scheduled_task`` module. Like ``become``, this bypasses all WinRM
restrictions but can only run a command and not modules.
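As a sketch of the ``become`` approach, the following task runs a hypothetical
installer that needs DPAPI access (module arguments and paths are placeholders):

.. code-block:: yaml+jinja

    - name: install a package that interacts with DPAPI (hypothetical installer)
      win_package:
        path: C:\temp\installer-setup.exe
        arguments: /quiet
        state: present
      become: yes
      become_method: runas
      vars:
        ansible_become_user: Administrator
        ansible_become_password: SecretPasswordGoesHere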
.. seealso::
:ref:`playbooks_intro`
An introduction to playbooks
:ref:`playbooks_best_practices`
Best practices advice
:ref:`List of Windows Modules <windows_modules>`
Windows specific module list, all implemented in PowerShell
`User Mailing List <https://groups.google.com/group/ansible-project>`_
Have a question? Stop by the google group!
`irc.freenode.net <http://irc.freenode.net>`_
#ansible IRC chat channel
|